Data Audit


Why data audit?

Do you use data to make decisions? If so, have you asked yourself whether it actually leads to better decisions? That question is the essence of our data audit: we check that your use of data is genuinely helping you make better decisions.

With the introduction of AI, data processing and analysis are becoming more sophisticated and more complex. They are also becoming more opaque, with a dwindling number of people really understanding how these systems work.

In this context, it’s easy for algorithmic decisions to diverge from your business objectives. Most often the divergence will happen slowly, but given the current state of AI – and the fact that it isn’t really ‘intelligent’ – there is also a risk of rather sudden and dramatic mistakes.

Our data audit will help you understand and quantify the risks associated with your use of data. Our approach is both collaborative and bespoke: every use of data is different, so there is no one-size-fits-all solution. We look at your specific use cases and evaluate the risks relevant to that context.

The objective is to help you become self-sufficient. After identifying the risks, we help you design and build processes to measure and monitor your use of data, ensuring that you continue to make the best data-driven decisions for your business long into the future.


What are these risks?

We have identified three main areas of risk:

  • Your AI doesn’t work in real life. Everything seemed to work when the team built the system, but once it was put into production, performance was much worse than expected. There are many reasons why this can happen, ranging from data cleaning processes through to the techniques used to train machine learning models. First and foremost, however, your business needs to be aware of the problem, and that requires some form of ongoing monitoring (a simple monitoring sketch follows this list).

  • Bias against individuals and other ethical considerations. There is a growing body of evidence showing that AI-based systems can unwittingly make decisions that are biased against certain individuals or groups. Often this is because a machine learning model has picked up on patterns that are already present in society (for example, most engineers are currently men, and so an AI might not recommend engineering jobs to women). Regardless of the source of the problem, it is vital that monitoring is in place so that the issue can be identified and addressed (see the bias-check sketch after this list).

  • Cost considerations and technical debt. With the rapid progress being made in AI research, the excitement will tempt many engineers to adopt new, bleeding-edge techniques. In many cases there will be a performance gain, but new and relatively untested techniques also introduce risks. Do the performance gains from the system outweigh the development costs? How much will it cost to maintain the system at its current level of performance? Fundamentally, we need to ask whether you could achieve a similar result with a simpler (and cheaper) system.
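
As a rough illustration of the kind of ongoing monitoring mentioned in the first point, the sketch below compares live accuracy against the accuracy recorded when the model was built and raises an alert when performance drops. The function names, toy data and tolerance are hypothetical assumptions for illustration only; a real monitoring process would use your own metrics and thresholds.

```python
"""Minimal sketch of ongoing performance monitoring for a deployed model.

All names, data and thresholds are illustrative assumptions: the idea is
simply to compare live accuracy against the accuracy measured when the
model was built, and to flag a significant drop.
"""

from dataclasses import dataclass


@dataclass
class MonitoringResult:
    live_accuracy: float
    baseline_accuracy: float
    degraded: bool


def check_performance(predictions, actuals, baseline_accuracy, tolerance=0.05):
    """Compare live accuracy against the accuracy seen during development.

    `tolerance` is a hypothetical, business-agreed margin: how far live
    performance may fall below the baseline before someone is alerted.
    """
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    live_accuracy = correct / len(predictions)
    degraded = live_accuracy < baseline_accuracy - tolerance
    return MonitoringResult(live_accuracy, baseline_accuracy, degraded)


if __name__ == "__main__":
    # Toy example: the model scored 0.92 during development,
    # but recent production decisions tell a different story.
    result = check_performance(
        predictions=[1, 0, 1, 1, 0, 1, 0, 0],
        actuals=[1, 1, 0, 1, 0, 0, 0, 1],
        baseline_accuracy=0.92,
    )
    if result.degraded:
        print(f"Alert: live accuracy {result.live_accuracy:.2f} "
              f"is well below baseline {result.baseline_accuracy:.2f}")
```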
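
The bias concerns in the second point can be made measurable in a similar way. The sketch below computes the rate of favourable decisions per group and flags a large gap, a simple demographic-parity style check; the groups, data and tolerance are again purely illustrative, and a real audit would choose fairness measures appropriate to the specific decision being made.

```python
"""Minimal sketch of a bias check on automated decisions.

The group labels, decisions and tolerance are illustrative assumptions.
The check computes the favourable-decision rate per group and flags a
large gap between groups.
"""

from collections import defaultdict


def positive_rate_by_group(decisions, groups):
    """Return the share of favourable (positive) decisions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}


def parity_gap(rates):
    """Largest difference in favourable-decision rates between any two groups."""
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Toy data: 1 = favourable decision (e.g. job recommended), 0 = not.
    decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups = ["men", "men", "men", "men", "men",
              "women", "women", "women", "women", "women"]
    rates = positive_rate_by_group(decisions, groups)
    gap = parity_gap(rates)
    print(f"Favourable-decision rates: {rates}, gap: {gap:.2f}")
    if gap > 0.2:  # hypothetical tolerance agreed with the business
        print("Alert: decisions look skewed between groups; investigate.")
```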