Algorithmic BrAIn AI Insights - The AI Imperative: The Risks of AI Implementation

The Risks of AI Implementation

Many companies struggle to implement AI and fail to achieve the productivity improvements they sought from their AI investments. The reasons for such failures usually revolve around leadership, communication, people, technology, data and process issues.

Business executives often fail to emphasise that they are using AI to help people increase productivity rather than to replace them, and this creates fear, resentment and resistance among employees. They also struggle to set the right expectations for AI implementations and to communicate effectively with employees about how AI will affect their work. At times, they do not fully understand this themselves, so they are at a loss when interpreting AI implementations, which results in erratic communication even from executives who are otherwise good communicators. Many organisations also lack clear processes for incorporating AI into their business. A lack of understanding of AI among employees likewise leads to resistance to its adoption, as does a lack of leadership, or of understanding of AI, at the top of an organisation.

Meanwhile, IT departments often lack the necessary skills to develop, deploy and maintain AI applications, and end up slamming the brakes on AI projects, or derailing them by focusing on technical aspects while partially or entirely ignoring the business aspects of the implementation. A lack of well-structured, quality data can also limit the ability of AI applications to improve productivity.

There should be no presumption that implementing AI is an easy management task, and having a well-thought-out roadmap before attempting to go down the AI path is essential: throwing money at the problem without a clear strategy usually does nothing to resolve the core issues and makes AI adoption much more costly than it should be.

The top sixteen problems that we have encountered in the AI implementations we have been involved in on behalf of our clients (listed in no particular order) are the following:

  • 1. It is difficult to integrate cognitive projects with existing processes and systems;

  • 2. The business case for AI is poorly understood and/or articulated;

  • 3. Procurement and systemic decisions – even the most important ones – are based on obsolete information (which in this field can mean information that is only a few months old);

  • 4. AI technologies and expertise are too expensive;

  • 5. People with expertise in the technology are very difficult to find, the talent pool of AI experts being too small relative to the demand for their skills;

  • 6. Managers (sometimes even IT managers) don’t understand cognitive technologies and how they work;

  • 7. Organisational structures don’t – and sometimes can’t – accommodate cognitive technologies because they were devised in times when they didn’t have to;

  • 8. The data available is too unstructured or inconsistent for AI algorithms to be effective to any significant degree;

  • 9. Existing IT infrastructure can’t support cognitive projects;

  • 10. Security and privacy risks and concerns are a show-stopper;

  • 11. There is a lack of governance around AI initiatives and a lack of leadership commitment to AI projects;

  • 12. Implementation timelines are unrealistic and fail to allow for contingencies;

  • 13. AI project teams are not interdisciplinary, giving rise to a myopic strategic vision for AI;

  • 14. Cultural resistance to change within the organisation is high, creating implementation friction;

  • 15. Technologies are too immature for the purpose for which their deployment is envisaged; and

  • 16. Technologies have been oversold by vendors.

Broadly speaking, these risks may be said to fall under the following categories:

  • 1. Lack of governance

    Many AI projects are undertaken without clear objectives, leadership or governance structures in place. This can lead to projects being abandoned, failing to achieve their intended outcomes, or achieving them only at a much higher cost.

  • 2. Security and privacy concerns

    AI systems often rely on large amounts of data, which may include personal information. If this data is not properly secured, it could be accessed and used inappropriately; worse still, it might be poisoned with malicious data intended to change AI outcomes.

  • 3. Technology risk

    AI technologies are constantly evolving and changing, which can make them difficult to keep up with. Organisations can end up using or adopting outdated or unsupported technologies even if they undertook their feasibility study only a few months prior to implementation, and these technologies may not work as intended, or may no longer be supported by the vendor, by the time the software is implemented.

  • 4. People risk

    AI projects often require embracing new technologies and toolsets, as well as acquiring new skills and knowledge. Both can be – and in our experience invariably are – difficult to find within an organisation. This can lead to projects being delayed or cancelled due to resistance or a lack of resources.

  • 5. Data risk

    AI systems often rely on large amounts of data and a clean underlying ontology. Such data can be difficult to obtain and clean, and if it is not of good enough quality it may lead to inaccurate or misleading results (a brief illustration of automated data-quality checks follows this list).
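
By way of illustration only – this is a minimal sketch, not part of any checklist or methodology of ours – the following Python snippet shows the kind of automated data-quality checks that can surface the data risk described above before it reaches an AI system. The column names and thresholds are hypothetical and would need to be tuned to the data in question.

```python
import pandas as pd

# Hypothetical thresholds -- tune these to your own data and risk appetite.
MAX_MISSING_FRACTION = 0.05    # flag columns with more than 5% missing values
MAX_DUPLICATE_FRACTION = 0.01  # flag datasets with more than 1% duplicate rows

def data_quality_report(df: pd.DataFrame) -> dict:
    """Return simple quality signals that often precede misleading AI results."""
    missing_by_column = df.isna().mean().to_dict()
    report = {
        "rows": len(df),
        "duplicate_fraction": float(df.duplicated().mean()),
        "missing_by_column": missing_by_column,
        # Columns whose missing-value rate exceeds the threshold.
        "flags": [col for col, frac in missing_by_column.items()
                  if frac > MAX_MISSING_FRACTION],
    }
    if report["duplicate_fraction"] > MAX_DUPLICATE_FRACTION:
        report["flags"].append("duplicate_rows")
    return report

if __name__ == "__main__":
    # Tiny made-up dataset; in practice this would be the data feeding the AI system.
    df = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "revenue": [100.0, None, None, 250.0],
    })
    print(data_quality_report(df))
```

Checks like these do not make poor data good, but they turn a vague "data risk" into measurable signals that can be reviewed before a project proceeds.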

To the foregoing risks, one must also add a number of risks that are not generic but apply specifically to the project, programme and context in question.

Organisations need to be aware of these potential risks when implementing AI, and should take active steps to avoid running into them in the first place and to mitigate them when they do materialise.

Indeed, most organisations end up grappling with data and AI ethics through ad hoc discussions on a per-service basis, despite the significant costs of getting it wrong. Where there is no clear framework for identifying, analysing and mitigating risks, teams either miss risks, rush to address problems as they arise and end up in a constant firefighting mode that impedes strategic vision, or bury their heads in the sand in the hope that the issue will go away on its own. Where organisations have attempted to address the problem at scale, they have a tendency to establish strict, vague and overly broad policies that inevitably put the brakes on production and cause false positives in risk detection. Add third-party providers, who may or may not be considering these issues at all, and the risks multiply by orders of magnitude.

Organisations need a strategy for reducing AI risk that deals, at a minimum, with how to exploit data and create AI solutions responsibly and without running afoul of the law. An operationalised approach to data and AI ethics must systematically and exhaustively identify ethical risks throughout the organisation, from IT, HR, Marketing and Product or Service all the way to Operations and beyond. It should also be tailored to the organisation's specific risks, which will vary with the organisation's size, business model, geographic footprint and other sui generis factors, much along the same lines as other risk-management practices. Importantly, underlying all of this there should be a good technical – and, more importantly still, a good business – understanding of AI: where it fits in, and how it will be developed and deployed.
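
To make "operationalised" a little more concrete, here is a minimal sketch of an AI risk register expressed as a simple data structure. The categories mirror the five listed above; the fields, scoring scheme and example entry are assumptions for illustration, not a prescribed framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    GOVERNANCE = "governance"
    SECURITY_PRIVACY = "security and privacy"
    TECHNOLOGY = "technology"
    PEOPLE = "people"
    DATA = "data"

@dataclass
class Risk:
    description: str
    category: Category
    owner: str            # e.g. "HR", "IT", "Operations"
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # A simple likelihood-times-impact score; real schemes vary.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def top(self, n: int = 5) -> list:
        # Surface the highest-scoring risks for leadership review.
        return sorted(self.risks, key=lambda r: r.score, reverse=True)[:n]

register = RiskRegister()
register.add(Risk("Training data contains unvetted third-party records",
                  Category.DATA, owner="IT", likelihood=4, impact=4,
                  mitigation="Provenance checks before ingestion"))
for risk in register.top():
    print(risk.score, risk.category.value, risk.description)
```

The point of such a register is not the code but the discipline: every risk has a named owner, a category, a score that can be challenged, and a recorded mitigation, which is precisely what ad hoc, per-service discussions fail to provide.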

Do you believe that AI can be of help to your organisation?

At Algorithmic BrAIn, one of the Equinox Group companies, we have developed a comprehensive staged checklist to ensure that you leave none of your important considerations out when planning your AI journey. We'd love to help you get this right, and if you think we can, we'd be thrilled to hear from you.