Have you given much thought to the quality of the data you use to make decisions for your company? If the data isn’t accurate, you are likely to draw the wrong conclusions. How can you be sure that the hours reported for a specific project are accurate? You might decide to commit more resources to an already well-staffed project, simply because several people haven’t submitted time sheets. How good is the data you use? Incomplete data is corrupt data.
Nipping it in the bud
Most would agree that correcting an issue as close to its origin as possible is ideal. Even a small deviation can snowball into a real problem the longer it permeates the system. For example, let’s look at those late time sheets again. In the weekly report the labor numbers are off; that’s a small problem. However, if it isn’t corrected quickly, revenue is affected, because hours that aren’t reported can’t be billed. Now it’s a bigger problem. Ultimately, these kinds of errors can mean the quarterly report to investors is inaccurate. Now it’s a really big problem.
But we’re only human
Human error is a major factor in data discrepancies, and catching errors early requires diligent monitoring. Most companies try to minimize human error by training and re-training employees for compliance. Medium to large companies look for errors on the back end, where only sizable inconsistencies are detected. Companies that can afford audit teams to spot-check the data get the best results, but these audits are costly and, again, subject to human error. In addition, executives don’t have time to read every report or review the raw data. They frequently depend on a trusted manager to extract the important data and report anything negative. The resulting report is heavily shaped by that manager’s interpretation, and if the manager leaves the company, the organization must start over with a new manager and a new bias in the analysis.
Obviously, monitoring a system manually is labor-intensive, so it is very difficult to accomplish unless the amount of data and the number of systems are very small.
So what’s the answer?
We’ve identified three methods for getting to quality data:
- As with any system, the best way to improve quality is to catch errors and eliminate them as close to their source as possible. If employees haven’t submitted their time sheets, why not set up a system that reminds them and lets their bosses know what’s missing? You will see faster, better results when the corrective action sits closest to the people who can fix the problem (the first sketch after this list shows a minimal version).
- Reduce the number of systems involved to the minimum; a single data stream is ideal. Many different systems generating independent reports is a recipe for disaster. If a company uses one application for time sheet management and another for project management, hours reported on the project may not match the time sheets. Feeding the information from the separate systems into a program that aggregates it and checks for discrepancies is optimal (the second sketch after this list shows this in miniature).
- Minimizing human interpretation is also key. Data that looks quantitative may actually be subjective, dependent on the preparer. If the data provided to auditors or managers comes directly from the system and shows only the exceptions, the results will be more accurate and timely.
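To make the first method concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names, the addresses, and the two lists, which in practice would be pulled from your HR and time sheet systems. The point is how little it takes to move the correction next to the people who can make it.

```python
# Minimal sketch of a missing-time-sheet reminder. The data below is
# hypothetical; in practice `expected` would come from HR records and
# `submitted` from the time sheet system.
from datetime import date

expected = [
    {"employee": "alice@example.com", "manager": "pat@example.com"},
    {"employee": "bob@example.com", "manager": "pat@example.com"},
    {"employee": "carol@example.com", "manager": "sam@example.com"},
]
submitted = {"alice@example.com"}

week_ending = date(2024, 6, 7)
missing = [row for row in expected if row["employee"] not in submitted]

# Remind each employee directly, and send each boss one consolidated list.
by_manager = {}
for row in missing:
    print(f"Reminder to {row['employee']}: time sheet for week ending "
          f"{week_ending} is missing.")
    by_manager.setdefault(row["manager"], []).append(row["employee"])

for manager, team in by_manager.items():
    print(f"Summary to {manager}: missing time sheets from {', '.join(team)}.")
```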
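The second and third methods naturally combine: pull hours from each system, reconcile them automatically, and report only the exceptions. Here is a sketch under the same caveat; the figures and the tolerance are made up, and real exports would replace the two dictionaries.

```python
# Minimal sketch of a cross-system discrepancy check that reports only
# exceptions. The hours per (employee, project) are hypothetical; real
# exports from the time sheet and project tools would replace them.
timesheet_hours = {("alice", "apollo"): 40.0, ("bob", "apollo"): 32.0}
project_hours = {("alice", "apollo"): 40.0, ("bob", "apollo"): 38.5}

TOLERANCE = 0.25  # hours; ignore small rounding differences

exceptions = []
for key in sorted(set(timesheet_hours) | set(project_hours)):
    ts = timesheet_hours.get(key)
    pm = project_hours.get(key)
    if ts is None or pm is None:
        exceptions.append((key, ts, pm, "missing from one system"))
    elif abs(ts - pm) > TOLERANCE:
        exceptions.append((key, ts, pm, f"differs by {abs(ts - pm):.2f} h"))

# A clean reconciliation prints nothing; anything printed needs a human.
for (employee, project), ts, pm, reason in exceptions:
    print(f"{employee}/{project}: time sheets={ts}, project plan={pm} ({reason})")
```

Because the report comes straight from the source systems and contains nothing but exceptions, there is no room for a manager’s bias to creep in.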
In a perfect world, all data would be free of human interpretation and error. It would be collected from a single data system, or from systems that work together. The best of all situations would be a program that merges multiple systems, sorts out redundancies and discrepancies, and also generates reports from the data. Only then can companies make determinations driven by quantitative data they can trust. Imagine never making a decision blind again.
Follow us on LinkedIn for more information on generating quality data.