
Paying more for less – the false economy of data re-entry

“We can just solve the problem with a few cheap data entry clerks.”

Every time I hear this phrase, I am stunned. Introducing a process that involves rekeying data from one system to another is, pretty much without fail, a bad idea. Often it seems enticingly cheap – pick up a few students or offshore the data problem and there’s one less thing to worry about.

Data rekeying is a false economy. In this post we’ll explore some of the reasons why it will come back to haunt you.

It is riddled with errors

People make mistakes. It could be as simple as a few typos resulting in data mismatches – Mr Smith is not the same as Mr Smuth. It could be transposed numbers. Or it could be a lot worse: typos have been responsible for businesses shutting down, for banks bungling stock trades and even for blasphemous bibles!
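
To make that failure mode concrete, here is a minimal sketch (in Python, using the standard library’s difflib – the names and threshold are illustrative assumptions) of how a single typo silently defeats exact matching, while a simple similarity check can at least flag the record for review:

```python
# A single typo defeats an exact match, but a similarity check
# (Python's standard difflib) can flag the record for review.
from difflib import SequenceMatcher

entered = "Mr Smuth"   # what the clerk typed
on_file = "Mr Smith"   # what the source system holds

print(entered == on_file)  # False – exact matching silently fails

similarity = SequenceMatcher(None, entered, on_file).ratio()
print(f"similarity: {similarity:.2f}")  # ~0.88 – suspiciously close

if entered != on_file and similarity > 0.8:
    print("Possible rekeying error – route for human review")
```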

It is slow

Even with a large team ready to receive incoming data in System A and rekey it into System B, the best turnaround time will be a few minutes. And that scenario assumes having a pool of people ready at all times to receive and rekey source data – a setup that is expensive to operate. More often than not, we see contractors in post working through a backlog of data to rekey, with turnaround times varying from a few hours to a day or two.

It is stupefyingly boring to do

Data re-entry jobs are awful. The work is monotonous, the content is often dull and the jobs are likely to be poorly paid. Often the systems in use are legacy systems with poor user experience. The result can be disengaged employees, which in turn produces even more errors. Admittedly, it is not possible to ensure that every job is well paid and fulfilling. It’s not a bad thing to strive for though, and setting out to avoid creating unfulfilling jobs seems like a good thing to do.

It just becomes more expensive over time

Once embedded into your organisation’s data set, errors create a ripple effect. Data is mismatched in other systems, resulting in spurious duplicates. Numbers are wrong, leading to inaccurate reporting and poorly informed decision making. Errors can trigger subsequent processing issues – order fulfilment problems, invoices sent to the wrong customers (or not at all), and so on. The sketch below illustrates the duplicate problem.
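
As a rough illustration of that ripple effect, the following sketch shows how one rekeyed typo splits a single customer into two records, so every downstream report on that customer is quietly wrong (the record names and structure are made up for the example):

```python
# One mistyped key turns a single customer into two records.
# Downstream systems that match on the key now split orders,
# invoices and reports across both.
customers = {}

def upsert(name, order_value):
    record = customers.setdefault(name, {"orders": 0, "total": 0.0})
    record["orders"] += 1
    record["total"] += order_value

upsert("Mr Smith", 100.0)   # captured correctly at source
upsert("Mr Smuth", 250.0)   # rekeyed with a typo

print(customers)
# {'Mr Smith': {'orders': 1, 'total': 100.0},
#  'Mr Smuth': {'orders': 1, 'total': 250.0}}
# Any report on Mr Smith's spend is now wrong, and fixing it later
# means finding and merging the duplicates by hand.
```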

It doesn’t have to be this way

If you were to design an organisation’s data flows from scratch, data would be captured at source and carried systematically through the organisation. Attention should be paid to the source system to ensure that the data captured is as accurate as possible – typically through a combination of well designed UIs, field validation, field prompts and so on. Once data is inside the organisation, attention should be paid to keeping it “clean”, matching it to existing data on hand and delivering it to the point where it needs to be used.
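
As a minimal sketch of validation at the point of capture, the example below rejects bad input before it ever enters the organisation’s systems – the field names and rules here are illustrative assumptions, not a prescribed schema:

```python
# Validate at the point of capture so bad data never gets in.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_order(form: dict) -> list[str]:
    """Return a list of validation errors; an empty list means accepted."""
    errors = []
    if not form.get("customer_name", "").strip():
        errors.append("customer_name is required")
    if not EMAIL_RE.match(form.get("email", "")):
        errors.append("email is not a valid address")
    try:
        if float(form.get("quantity", "")) <= 0:
            errors.append("quantity must be positive")
    except ValueError:
        errors.append("quantity must be a number")
    return errors

print(validate_order({"customer_name": "Mr Smith",
                      "email": "smith@example.com",
                      "quantity": "3"}))   # [] – accepted
print(validate_order({"customer_name": "",
                      "email": "not-an-email",
                      "quantity": "-1"}))  # three errors – rejected at source
```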

Recursyv’s Seamless integration can be used to ensure that data is captured once and moved throughout the organisation as necessary. Additional validations and data augmentation can be included as part of syncing data from one system to another. Contact us to learn more.
