The Pros & Cons of
Credit Bureau Data
Let's take a high-level look at the data received and used by a typical Credit Bureau, trace how that data reaches the Bureau, and see the multiple opportunities for errors to creep in along the way. This sets the foundation for understanding how our EI solutions solve key issues.
1. Credit Data Capture
First, the data is captured by one of the approximately 130 credit providers licensed by the NCR (the "Data Contributors") when they consider a person's application for a loan or credit product. That is 130 opportunities to get the content of each data field wrong, as each company applies its own rules and systems for capturing the data of every application.
We have all faced the challenge of spelling out and clarifying names and contact details to call-centre agents, with both parties perhaps struggling to understand each other's accents over a poor cell phone connection.
Their objective is to capture the data for business purposes, not necessarily in the best structure or a validated format: 99% of the information they need for risk assessment is obtained by matching an ID number and name to a Credit Bureau file in order to receive a Credit Score. The rest of the data captured from a person is mainly for legal and communication purposes.
2. Data Aggregation Process
The data from the approximately 130 Data Contributors is then collated and processed at a central Data Transmission Hub (DTH). Its objectives are to standardise the data and aggregate it for reporting purposes before providing it (normally monthly) to each of the main Credit Bureaus. In reality, the core objectives are the identity and payment elements, as they underpin a healthy, accurate credit system. Contact data is important, but is in reality a lesser objective: it is not used in creating a payment profile or credit score, and therefore undergoes a lower level of validation.

As an example, if a person has applied for credit at three credit suppliers but used three slightly different address formats (or three different addresses), how does the aggregation process standardise this data? It is difficult, so some standardisation rules will be applied, but those rules cannot solve all the issues.
The data is standardised for structural purposes, but contact data, as an example, is not validated against standardised reference tables to ensure that each data element is actually valid. If the DTH receives an address such as "5 Hippopotamus Str, Timbuktu", it will probably standardise the field structure correctly, even though there is no Timbuktu suburb with a Hippopotamus Street in South Africa.
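The gap between structural standardisation and real validation can be sketched as follows. This is a hypothetical illustration, not the DTH's or EI's actual logic, and the tiny reference table is invented for the example:

```python
# Illustrative sketch: standardising an address's structure is not the same
# as validating it against a reference table of real streets and suburbs.
# VALID_STREETS is a made-up stand-in for a national street/suburb table.
VALID_STREETS = {
    ("church st", "pretoria central"),
    ("long st", "cape town city centre"),
}

def standardise(address: str) -> tuple[str, str]:
    """Normalise structure only: case, whitespace, 'street'/'str' -> 'st'."""
    street, suburb = [p.strip().lower() for p in address.split(",", 1)]
    street = street.replace("street", "st").replace("str", "st")
    # Drop the house number so we can look up the street itself.
    street = " ".join(w for w in street.split() if not w.isdigit())
    return street, suburb

def is_valid(address: str) -> bool:
    """Validation: does the standardised street/suburb pair actually exist?"""
    return standardise(address) in VALID_STREETS

# A well-structured but non-existent address standardises cleanly...
print(standardise("5 Hippopotamus Str, Timbuktu"))  # ('hippopotamus st', 'timbuktu')
# ...yet fails validation, while a real address passes.
print(is_valid("5 Hippopotamus Str, Timbuktu"))        # False
print(is_valid("12 Church Street, Pretoria Central"))  # True
```

The point of the sketch: the "Hippopotamus" address comes out perfectly well-formed, and only a lookup against a reference table reveals that it does not exist.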
3. Credit Bureau Data Process
Each Credit Bureau then gets a copy of the "standardised" data from the DTH as their "base input" of updated data.
They may then add further data from separate data sets they have access to, after which they apply their own data standardisation processes. Lastly, they add this newly aggregated data to their own "history" data sets. Each Credit Bureau therefore applies its own processes and methodologies to aggregate these total datasets.
As an example, a slightly different address to the one held "on file" by the Credit Bureau may be recorded as a "new address", when in reality it contains just enough misspellings to be treated as a different address. Which one is now the correct version: the original address or the misspelt one?
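How a misspelt address becomes a "new" address can be shown with a short sketch. The addresses and the similarity threshold are invented for illustration; this uses Python's standard-library `difflib`, not any bureau's actual matching logic:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how alike two normalised address strings are."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

on_file  = "12 Church Street, Pretoria Central"
incoming = "12 Chruch Stret, Pretoria Centrl"   # same address, three typos

# An exact-match rule sees two different strings, so the bureau
# records the incoming one as a "new address".
print(on_file == incoming)   # False

# A fuzzy comparison shows the two strings are nearly identical,
# suggesting they are in fact the same address.
print(similarity(on_file, incoming))
```

With a similarity threshold the two records could be linked rather than duplicated, but any threshold trades missed matches against false merges, which is exactly why simple standardisation rules cannot solve all the issues.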
In the end, therefore, each Credit Bureau starts each month with a similar core set of base data, but their final data can differ considerably from each other in validation, content, structure and enhancement.
EI identified these and other typical problems with CB data many years ago and realised that we needed to provide our clients with better quality contact data.
We also realised that we needed to create our own National Address Database, with enriched information and the ability to display it in geospatial mapping views. This was required so that we could create proprietary stability "scores" and other enhanced information for extended analytics.
We therefore developed a number of specialised software solutions (InfoArchitect, AddressXpress, InfoXplorer, etc.) that use extensive data validation tables built by us. As an example, we needed to build a table containing almost every street and suburb in South Africa in order to validate addresses. The software does far more than validate addresses, but an address is a very important, and often overlooked, component of analysis: it helps optimise advertising and marketing, and it helps identify and link potential fraud.
InfoArchitect and our other software, along with processes developed and refined over decades, are therefore a key part of our "magic formula" for delivering Actionable Customer Intelligence focused on extracting every bit of value from customer data.
Please see our Solutions pages for information on how we can assist you to solve your customer data issues.