Computer applications are increasingly coming with an “AI inside” label. Machine learning is at the heart of software that businesses and governments rely on to make a wide range of decisions. But AI systems are not infallible. Some errors seem harmless, but the stakes are high in scenarios such as hiring, loan approvals, policing, and criminal justice.

AI bias, or algorithmic bias, refers to repeated errors in machine learning models that create unfair (or even illegal) outcomes. The creators of AI software don’t set out to develop biased applications, but bias creeps in anyway: the current state of affairs is not ideal, so biases lurk in the baseline data; a model gets used outside its intended scope; or the architecture is simply not the right one for the problem. With increased adoption of AI, such unintended consequences are also increasing and drawing attention to the dark side of AI.

In response, many academics, along with legal and technology experts, are discussing ethical AI—AI systems that adhere to the principles of fairness, accountability, transparency, and trust. The challenge for practitioners is translating these principles into working systems. In other words, what does an architecture for ethical AI applications look like?

An ethical AI architecture consists of three layers—data, model, and governance.


Data Layer

At its core, machine learning is about identifying patterns in data; humans create the data or decide which data to use. Being cognizant of the possibility of biased decisions and taking steps to mitigate potential harms are also human responsibilities. We need to take care that we collect, label, and use training data correctly, so that the algorithms don’t reinforce stereotypes and bias doesn’t leak into predictions and recommendations.
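As a concrete starting point, here is a minimal sketch of a pre-training data audit. It assumes a pandas DataFrame with a hypothetical sensitive attribute column and a binary label; the column names and toy values are illustrative only, not part of any particular system.

```python
import pandas as pd

def audit_label_balance(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    """Report the positive-label rate per group to surface skew in the
    training data before a model ever sees it."""
    summary = (
        df.groupby(group_col)[label_col]
        .agg(count="count", positive_rate="mean")
        .reset_index()
    )
    # Flag how far each group's positive rate deviates from the overall rate.
    overall = df[label_col].mean()
    summary["deviation"] = summary["positive_rate"] - overall
    return summary

# Example usage with toy hiring data (illustrative values only):
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "hired":  [0,   1,   1,   1,   0,   0],
})
print(audit_label_balance(df, group_col="gender", label_col="hired"))
```

A large gap in positive-label rates between groups is a signal to investigate the data before training; on its own it is not proof of bias, but it tells you where to look.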

Additionally, machine learning models can lose accuracy after deployment: you’ll encounter edge cases, unearth new biases, or find that the context has changed completely. Your architecture has to enable real-world users to provide feedback and incorporate their input into the model through retraining. A human-in-the-loop design pattern can be handy for performance monitoring and iterative retraining.
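One way to realize the human-in-the-loop pattern is a rolling feedback monitor that logs user-corrected labels and triggers retraining when recent accuracy degrades. This is a sketch under assumed defaults (window size, accuracy threshold, minimum sample count); it presumes a scikit-learn-style estimator with a fit method.

```python
from collections import deque

class FeedbackLoop:
    """Rolling monitor that retrains a model from user-corrected labels."""

    def __init__(self, model, window=500, threshold=0.90):
        self.model = model
        self.threshold = threshold
        self.feedback = deque(maxlen=window)  # (features, true_label, predicted)

    def record(self, features, predicted, true_label):
        # Human reviewers supply the true label after seeing the prediction.
        self.feedback.append((features, true_label, predicted))

    def rolling_accuracy(self):
        if not self.feedback:
            return 1.0
        hits = sum(1 for _, y, pred in self.feedback if y == pred)
        return hits / len(self.feedback)

    def maybe_retrain(self, min_samples=50):
        # Retrain on the corrected examples once accuracy degrades.
        if self.rolling_accuracy() < self.threshold and len(self.feedback) >= min_samples:
            X = [features for features, _, _ in self.feedback]
            y = [label for _, label, _ in self.feedback]
            self.model.fit(X, y)  # assumes a scikit-learn-style estimator
            return True
        return False
```

In practice the retraining step would run offline with validation before redeployment; the point here is the feedback-capture and monitoring loop, not the training mechanics.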


Model Layer

For many machine learning methods, the link between input data and output decisions is not clear. Input data undergoes a series of complex transformations, and even the creators of these models may not understand the inner workings. Such opacity clashes with our notions of fairness: the algorithm rejected your loan application, but it can’t tell you why.

To address this, newer techniques broadly called explainable AI are emerging, in which the models also provide some reasoning for their conclusions. Explore the feasibility of such explainable AI methods for your applications.
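As one feasibility test, permutation importance is a model-agnostic explainability technique available in scikit-learn: shuffle each feature and measure how much the model’s score drops. The dataset and model below are placeholders for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; substitute your own.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts the score most drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Note that permutation importance is a global measure; explaining an individual decision (for example, telling one applicant why their loan was rejected) calls for local methods such as LIME or SHAP.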


Governance Layer

The governance layer consists of technology and tools, but also education and training. From a technical point of view, a central repository of machine learning models and their descriptions enables reuse and promotes consistent standards throughout the organization. Version control of models, datasets, and decisions allows for audits; you can even roll back to a stable version if an updated model behaves unexpectedly.
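To make the repository-and-rollback idea concrete, here is a minimal sketch of a versioned model registry. A production setup would more likely use a dedicated tool such as MLflow; the file layout and names here are assumptions for illustration.

```python
import json
import pickle
import time
from pathlib import Path

class ModelRegistry:
    """Versioned store for models and their descriptions, with rollback."""

    def __init__(self, root="registry"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def _dir(self, name):
        d = self.root / name
        d.mkdir(exist_ok=True)
        return d

    def register(self, name, model, description=""):
        d = self._dir(name)
        version = len(list(d.glob("v*.pkl"))) + 1
        (d / f"v{version}.pkl").write_bytes(pickle.dumps(model))
        # Descriptions and timestamps support audits of every deployment.
        meta = {"version": version, "description": description, "time": time.time()}
        (d / f"v{version}.json").write_text(json.dumps(meta))
        return version

    def load(self, name, version=None):
        d = self._dir(name)
        if version is None:  # default to the latest registered version
            version = len(list(d.glob("v*.pkl")))
        return pickle.loads((d / f"v{version}.pkl").read_bytes())

# Rollback is just loading a known-good earlier version:
# registry = ModelRegistry()
# model = registry.load("credit_scoring", version=3)
```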

On the education side, many people put AI on a pedestal. Not surprisingly, we end up thinking that decisions made by AI are superior to human judgment. But as we’ve seen, that’s not necessarily the case. The stakeholders involved need to know when machine learning works best and what its limitations are.

Eliminating algorithmic bias is a core tenet of ethical AI. Use the architectural principles outlined above to make your AI applications ethical by design.