- September 4, 2022
- Posted by: Bernard Mallia
The Risks of AI Implementation
Many companies struggle to implement AI and fail to achieve the productivity improvements they were after with their AI investments. The reasons for such failure usually revolve around leadership, communication, people, technology, data and process issues.
Business executives often fail to emphasise that they are using AI to help people increase productivity rather than to replace them, and this creates fear, resentment and resistance among employees. They also struggle to set the right expectations for AI implementations and to communicate effectively with employees about how AI will impact their work. At times they do not fully understand this themselves, so they may be at a loss when interpreting AI implementations, which results in erratic communication even if the executives are otherwise good communicators. Many organisations also lack clear processes for incorporating AI into their business. A lack of understanding of AI among employees can likewise lead to resistance to its adoption, as can a lack of leadership or a lack of understanding of AI at the top of the organisation.
Meanwhile, IT departments often lack the necessary skills to develop, deploy and maintain AI applications, and end up slamming the brakes on AI projects, or muddling them by focusing exclusively on the technical aspects and partially or entirely ignoring the business aspects of the implementation. A lack of well-structured, quality data can also limit the ability of AI applications to improve productivity.
There should be no presumption that implementing AI is an easy management task, and having a well-thought-out roadmap before attempting to go down the AI path is essential: throwing money at the problem without a clear strategy usually does nothing to resolve the core issues and makes AI adoption much more costly than it should be.
The top sixteen problems that we have encountered in the AI implementations we have been involved in on behalf of our clients (not listed in any particular order) are the following:
Broadly speaking, these risks may be said to fall under the following categories:
1. Lack of governance
Many AI projects are undertaken without clear objectives, leadership or governance structures in place. This can lead to projects being abandoned, not achieving their intended outcomes, or achieving them at a much higher cost than necessary.
2. Security and privacy concerns
AI systems often rely on large amounts of data, which may include personal information. If this data is not properly secured, it could be accessed and used inappropriately, and, worse still, it might be poisoned with malicious data intended to change AI outcomes.
3. Technology risk
AI technologies are constantly evolving and changing, which can make them difficult to keep up with. This can lead to organisations adopting technologies that are outdated or unsupported, even when the feasibility study was undertaken only a few months before implementation, and that in turn may not work as intended or may no longer be supported by the vendor by the time the software is implemented.
4. People risk
AI projects often require new technologies and toolsets to be embraced, as well as new skills and knowledge to be acquired. Both can be – and in our experience invariably are – difficult to find within an organisation. This can lead to projects being delayed or cancelled due to resistance or a lack of resources.
5. Data risk
AI systems often rely on large amounts of data and a clean underlying ontology. Such data can be difficult to obtain and clean. If the data is not of good enough quality, it may lead to inaccurate or misleading results.
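By way of illustration of this data-quality point, the sketch below shows one way a team might gate a dataset with automated checks before it reaches a model. It is a minimal, hypothetical Python example: the column names, thresholds and the check_data_quality helper are assumptions made for this sketch, not part of any particular project or library.

```python
# Minimal, hypothetical data-quality gate run before model training.
# Column names and thresholds are illustrative assumptions only.
import pandas as pd

REQUIRED_COLUMNS = ["customer_id", "signup_date", "monthly_spend"]  # assumed schema
MAX_MISSING_RATIO = 0.05    # tolerate at most 5% missing values per column
MAX_DUPLICATE_RATIO = 0.01  # tolerate at most 1% duplicate rows


def check_data_quality(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues found in df."""
    issues = []

    # 1. Schema check: every expected column must be present.
    missing_cols = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing_cols:
        issues.append(f"Missing required columns: {missing_cols}")

    # 2. Completeness check: flag columns with too many missing values.
    for col in df.columns:
        ratio = df[col].isna().mean()
        if ratio > MAX_MISSING_RATIO:
            issues.append(
                f"Column '{col}' is {ratio:.1%} missing (limit {MAX_MISSING_RATIO:.0%})"
            )

    # 3. Uniqueness check: flag excessive duplicate rows.
    dup_ratio = df.duplicated().mean()
    if dup_ratio > MAX_DUPLICATE_RATIO:
        issues.append(
            f"{dup_ratio:.1%} of rows are duplicates (limit {MAX_DUPLICATE_RATIO:.0%})"
        )

    return issues


if __name__ == "__main__":
    # Small synthetic sample with deliberate gaps and duplicates.
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "signup_date": ["2022-01-01", None, None, "2022-03-07"],
        "monthly_spend": [120.0, 85.5, 85.5, None],
    })
    for problem in check_data_quality(sample):
        print("DATA QUALITY ISSUE:", problem)
```

In practice such checks would be tailored to the dataset and run automatically whenever new data arrives, so that poor-quality data is flagged before it can distort the AI system's outputs.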
To the foregoing risks, one would also need to add a number of other risks that are not generic but that will apply specifically to the project, programme and context in question.
Organisations need to be aware of potential risks when implementing AI and should take active steps to mitigate them and to avoid running into them in the first place.
Indeed, most organisations end up grappling with data and AI ethics through ad hoc discussions on a per-service basis, despite the significant costs of getting it wrong. Where there is no clear framework in place for how to identify, analyse and mitigate risks, teams either miss risks, rush to address problems as they arise and end up in a constant firefighting mode that impedes strategic vision, or cross their fingers and bury their heads in the sand in the hope that the issue will go away on its own. Where organisations have attempted to address the problem on a large scale, they have tended to establish tight, vague and overly broad regulations that inevitably put the brakes on production and cause false positives in risk detection. When third-party providers, who may or may not be considering these issues at all, are added to the mix, the problems multiply by orders of magnitude.
Organisations need a strategy for reducing AI risk that deals, at the very least, with how to exploit data and create AI solutions responsibly and without running afoul of the law. An operationalised approach to data and AI ethics must systematically and exhaustively identify ethical risks throughout the organisation, from IT, HR, Marketing and Product or Service all the way to Operations and beyond. It should also be tailored to the organisation’s specific risks, which will vary depending on its size, business model, geographic footprint and other sui generis factors, much along the same lines as other risk-management practices. Importantly, underlying all of this there should be a good technical – and, more importantly still, a good business – understanding of AI, where it fits in and how it will be developed and deployed.
