When AI Loses Its Way

In 2016, the investigative news organization ProPublica released an exposé on COMPAS, a risk-prediction AI algorithm used by courts in southern Florida to estimate a defendant’s chance of re-offending within a given timeframe.

COMPAS’s underlying algorithm is a trade secret of its maker, then called Northpointe (now Equivant). No outsider knows how it arrives at its predictions or has access to the data it was trained on, so no one is in a position to question or validate its logic.

COMPAS became a key illustration of why people distrust AI after ProPublica showed that the algorithm’s errors differed sharply by race: Black defendants who did not re-offend were far more likely to be wrongly labelled high risk, while white defendants who did re-offend were more likely to be wrongly labelled low risk.
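
For readers who want to see what such an audit looks like in practice, below is a minimal sketch, in Python, of the kind of disparity check ProPublica’s analysis popularised: comparing a risk model’s false-positive and false-negative rates across groups. The function, column names, and data here are hypothetical illustrations, not ProPublica’s actual code or the COMPAS data.

```python
# A minimal sketch of a group-level error-rate audit for a binary risk
# classifier. Column names ("race", "reoffended", "high_risk") are
# illustrative placeholders, not real COMPAS fields.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str,
                         label_col: str, pred_col: str) -> pd.DataFrame:
    """For each group, compute the false-positive rate (non-re-offenders
    wrongly flagged high risk) and the false-negative rate (re-offenders
    wrongly flagged low risk)."""
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub[label_col] == 0]  # did not re-offend
        positives = sub[sub[label_col] == 1]  # did re-offend
        rows.append({
            group_col: group,
            "false_positive_rate": (negatives[pred_col] == 1).mean(),
            "false_negative_rate": (positives[pred_col] == 0).mean(),
        })
    return pd.DataFrame(rows)

# Hypothetical data: 1 = flagged high risk / actually re-offended.
df = pd.DataFrame({
    "race":       ["A", "A", "A", "B", "B", "B"],
    "reoffended": [0,   0,   1,   0,   1,   1],
    "high_risk":  [1,   0,   1,   0,   0,   1],
})
print(error_rates_by_group(df, "race", "reoffended", "high_risk"))
```

If the false-positive rate is markedly higher for one group than another, the model is making a systematically different kind of mistake depending on group membership, which is precisely the pattern ProPublica reported.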

If companies want their workers to accept, utilise, and eventually trust AI technologies, they must, to the extent legally feasible, open up the black box to the people who are required to interact with the technology. If corporations utilise AI to make predictions, they owe people an explanation of how those decisions are being made. This principle is now reflected in the European Commission’s draft AI Act, to which we have referred before.
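
What might such an explanation look like? Below is a minimal sketch, assuming a scikit-learn style workflow, of one way to provide it: train an inherently transparent model and report each feature’s contribution to an individual decision. The model choice, feature names, and data are hypothetical illustrations, not Northpointe’s method.

```python
# A minimal sketch of per-decision explanation using an inherently
# transparent model (logistic regression). All feature names and data
# below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_offences", "age", "months_employed"]
X = np.array([[3, 22, 1], [0, 45, 60], [5, 30, 6], [1, 38, 24]])
y = np.array([1, 0, 1, 0])  # 1 = re-offended within the timeframe

model = LogisticRegression().fit(X, y)

def explain(case: np.ndarray) -> None:
    """Print each feature's additive contribution (coefficient x value)
    to the log-odds of the 'high risk' prediction for one case."""
    contributions = model.coef_[0] * case
    for name, value, contrib in zip(feature_names, case, contributions):
        print(f"{name}={value}: {contrib:+.2f} to the log-odds")
    print(f"intercept: {model.intercept_[0]:+.2f}")
    print(f"predicted probability of re-offending: "
          f"{model.predict_proba(case.reshape(1, -1))[0, 1]:.2f}")

explain(X[0])
```

Because every contribution is a simple coefficient multiplied by a feature value, the person affected can see exactly which factors pushed the score up or down, which is the kind of accountability a trade-secret black box cannot offer.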

Do you believe that AI can be of help to your organisation?

At Algorithmic BrAIn, one of the Equinox Group companies, we have developed a comprehensive staged checklist to ensure that none of your important considerations is left out when planning your AI journey. We’d love to help you get this right, and if you think we can, we’d be thrilled to hear from you.