Recognizing Dirty Data
When asked to define "data quality," people usually think of error-free data entry. It is true that sloppy data entry habits are often the culprit, but data quality is also affected by the way we store and manage data. For example, old file structures, such as flat files, did not have strong data typing rules, and it was common practice to use REDEFINE and OCCURS clauses with those structures. A REDEFINE clause allows you to change the data type of a data element or a group of data elements. For example, a character name field can be redefined and reused as a numeric amount field or a date field. An OCCURS clause allows you to define an array of repeating data elements. For example, an amount field might occur 12 times to capture monthly totals for January through December. Relational database management systems and the new generation of object-oriented programming practices no longer encourage such untidy data typing habits, but they provide no deterrence against other types of data abuse, such as some extensible markup language (XML) document type definition (DTD) usage that propagates into relational databases. Many of the dirty data examples described in the following list can be found in relational databases as often as in flat files:
- Incorrect data—For data to be correct (valid), its values must adhere to its domain (valid values). For example, a month must be in the range of 1–12, or a person’s age must be less than 130. Correctness of data values can usually be programmatically enforced with edit checks and by using lookup tables.
- Inaccurate data—A data value can be correct without being accurate. For example, the state code "CA" and the city name "Boston" are both correct, but when used together (such as Boston, CA), the state code is wrong because the city of Boston is in the state of Massachusetts, and the accurate state code for Massachusetts is "MA." Accuracy of dependent data values is difficult to programmatically enforce with simple edit checks or lookup tables. Sometimes it is possible to check against other fields or other files to determine if a data value is accurate in the context in which it is used. However, many times accuracy can be validated only by manually spot-checking against paper files or asking a person (for instance, a customer, vendor, or employee) to verify the data.
- Business rule violations—Another type of inaccurate data value is one that violates business rules. For example, an effective date should always precede an expiration date. Another example of a business rule violation might be a Medicare claim for a patient who is not yet of retirement age and does not qualify for Medicare.
- Inconsistent data—Uncontrolled data redundancy results in inconsistencies. Every organization is plagued with redundant and inconsistent data, and this is especially prevalent with customer data. For example, a customer name on the order database might be "Mary Karlinsky," the same name on the customer database might be "Maria Louise Karlinsky," and on a downstream customer relationship decision-support system the same name might be spelled "Mary L. Karlynski."
- Incomplete data—During system requirements definition, we rarely bother to gather the data requirements of downstream information consumers, such as the marketing department. For example, if we build a system for the lending department of a financial institution, the users in that department will most likely list Initial Loan Amount, Monthly Payment Amount, and Loan Interest Rate among the most critical data elements. However, the most important data elements for users in the marketing department are probably the borrower's Gender Code, Customer Age, and Zip Code. Thus, in a system built for the lending department, data elements such as Gender Code, Customer Age, and Zip Code might not be captured at all, or only haphazardly. This is often the reason why so many data elements in operational systems have missing values or default values.
- Nonintegrated data—Most organizations store data redundantly and inconsistently across many systems, which were never designed with integration in mind. Primary keys often don’t match or are not unique, and in some cases, they don’t even exist. More and more frequently, the development or maintenance of systems is outsourced and even off-shored, which puts data consistency and data quality at risk. For example, customer data can exist on two or more outsourced systems under different customer numbers with different spellings of the customer name and even different phone numbers or addresses. Integrating data from such systems is a challenge.
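The edit checks, lookup tables, and business rules described above under incorrect data and business rule violations can be sketched in a few lines. The field names, domains, and rules below are illustrative assumptions, not taken from any particular system:

```python
from datetime import date

# Hypothetical lookup table defining the domain of valid state codes.
VALID_STATE_CODES = {"CA", "MA", "NY", "TX"}

def check_month(month: int) -> bool:
    """Edit check: a month must fall in the range 1-12."""
    return 1 <= month <= 12

def check_age(age: int) -> bool:
    """Edit check: a person's age must be less than 130."""
    return 0 <= age < 130

def check_state_code(code: str) -> bool:
    """Lookup-table check: the value must appear in its domain."""
    return code in VALID_STATE_CODES

def check_effective_dates(effective: date, expiration: date) -> bool:
    """Business rule: an effective date must precede the expiration date."""
    return effective < expiration
```

Checks like these catch values that fall outside their domain, but, as noted above, they cannot tell whether an in-domain value is accurate in context.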
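Accuracy of dependent values, by contrast, requires checking one field against another, as in the Boston/"CA" example. A minimal sketch, assuming a small hypothetical city-to-state cross-reference table:

```python
# Hypothetical cross-reference of cities to their accurate state codes.
CITY_TO_STATE = {
    "Boston": "MA",
    "Los Angeles": "CA",
}

def city_state_accurate(city: str, state: str) -> bool:
    """Each value may be correct on its own; accuracy is a property of
    the combination, so the pair is checked against a reference table.
    Cities absent from the table pass, since no reference exists."""
    expected = CITY_TO_STATE.get(city)
    return expected is None or expected == state
```

A real cross-reference would be far larger and still incomplete, which is why, as noted above, accuracy often must be verified manually or with the person the data describes.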
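Inconsistent spellings of the same customer name, such as the three Karlinsky variants above, are often flagged with approximate string matching. One simple (and deliberately crude) approach uses Python's standard-library `difflib`:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Rough similarity score between two name spellings, after
    normalizing case and collapsing whitespace. Scores near 1.0
    suggest the names may refer to the same person."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()
```

A similarity threshold only produces candidate duplicates; deciding that "Mary Karlinsky" and "Mary L. Karlynski" really are the same customer still takes business knowledge or manual review.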
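The missing and default values that result from incomplete data capture can be measured with simple data profiling. The records, field names, and sentinel values below are hypothetical:

```python
# Hypothetical loan records; the marketing fields were never required
# at data entry, so they hold nulls or filler defaults.
records = [
    {"loan_amount": 250_000, "gender_code": None, "zip_code": "00000"},
    {"loan_amount": 180_000, "gender_code": "F",  "zip_code": "02134"},
]

# Values treated as "not really captured" (assumed sentinels).
DEFAULT_SENTINELS = {None, "", "00000", "UNKNOWN"}

def missing_rate(rows: list, field: str) -> float:
    """Share of rows where a field is absent or holds a default filler."""
    hits = sum(1 for r in rows if r.get(field) in DEFAULT_SENTINELS)
    return hits / len(rows)
```

Profiling every field this way quickly shows which downstream-only data elements, such as Gender Code or Zip Code, were captured haphazardly.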
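When primary keys don't match across nonintegrated systems, one common starting point is a derived match key built from attributes the systems do share. This is a crude sketch under assumed record layouts; it only catches duplicates whose name and phone happen to agree, and the different-spelling, different-phone cases described above need fuzzier matching:

```python
def match_key(record: dict) -> tuple:
    """Crude match key: normalized name plus the last seven digits of
    the phone number, since customer numbers differ across systems."""
    name = "".join(record["name"].lower().split())
    digits = "".join(ch for ch in record["phone"] if ch.isdigit())
    return (name, digits[-7:])

# Two hypothetical records for the same customer under different keys.
system_a = {"cust_no": "A-1001", "name": "Mary Karlinsky",
            "phone": "(617) 555-0199"}
system_b = {"cust_no": "77-342", "name": "MARY  KARLINSKY",
            "phone": "617.555.0199"}
```

Here both records reduce to the same key despite different customer numbers and formatting, illustrating why integration is feasible but laborious.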