- September 25, 2022
- Posted by: Bernard Mallia
When AI Loses Its Way
In 2016, the investigative news organization ProPublica released an exposé on COMPAS, a risk-prediction AI algorithm used by courts in southern Florida to estimate a defendant’s chance of re-offending within a given timeframe.
COMPAS’s underlying algorithm is a trade secret belonging to its maker, Northpointe (now Equivant). No one outside the company knows how it makes predictions or has access to the data it was trained on, so no one is in a position to question or validate its logic.
COMPAS became a key illustration of why people cannot trust AI after ProPublica’s analysis showed that the algorithm’s errors differed systematically by race: Black defendants who did not go on to re-offend were nearly twice as likely as comparable white defendants to have been labelled high risk.
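The kind of disparity ProPublica measured can be illustrated with a simple per-group false-positive-rate check. The sketch below uses entirely made-up records; COMPAS’s real data and scoring logic are proprietary, so this is only a toy version of the audit idea.

```python
# Illustrative sketch only: the records below are invented, and the real
# COMPAS data and model are not public.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were labelled high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["high_risk"])
    return flagged / len(non_reoffenders)

# Hypothetical audit records: group, predicted label, actual outcome.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

for g in ("A", "B"):
    print(f"Group {g} false positive rate: {false_positive_rate(records, g):.2f}")
```

If two groups with similar actual re-offending rates receive very different false-positive rates, as groups A and B do here, the model is imposing unequal costs on them, which is precisely the pattern ProPublica reported.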
If companies want their workers to accept, utilise, and eventually trust AI technologies, they must, to the degree that it is legally feasible, open up the black box to the people who will be required to interact with the technology. If corporations utilise AI to make predictions, they owe people an explanation of how those decisions are made. This principle has recently been reflected in the European Commission’s draft AI Act, to which we have referred before.