- September 8, 2022
- Posted by: Bernard Mallia
AI Adoption in the Short Run
In the short run, AI adoption can be considered in terms of a 4-stage onboarding process framework which sees AI as a productivity-augmenting personal assistant, then as a monitor, a coach, and lastly as an autonomous colleague. In a nutshell, this allows AI to take over the simple, menial tasks first while the humans concentrate on higher-value work. As AI progressively works and learns its way through to higher value-added work, it can be relied on and entrusted with progressively more substantive judgments. Essentially, in this progressive evolutionary framework, AI may be thought of as an apprentice that develops into a partner over time, providing skills, knowledge and insight – only that this apprentice-cum-colleague never tires, never gets bored, never has personal problems that it brings to the workplace and doesn’t suffer from mood swings or demotivation.
Based on our practical experience in the field, we believe that this approach should work for artificial intelligence irrespective of the scale of its adoption. The four-phase strategy for integrating AI that we delineate below enables organisations to build widespread trust, which is essential for adoption, and progress toward a distributed human-AI cognitive system where both people and AI are always evolving.
Many organisations have experimented with stage 1, and some have progressed to stage 2. A handful have also managed to make it to stage 3 with varying levels of success. For now, stage 4 is feasible only for big corporates like Google, Microsoft, Meta, Alibaba, Ant Financial and a handful of others. Stage 4 takes significant investment to get to, but it also provides considerably more value to the organisations that get there as they engage with AI. However, things are starting to change with Stage 4 availability as these big corporates begin making their platforms available as a service, and as open-source alternatives like BLOOM, an autoregressive Large Language Model, start to be released.
The Assistant
The first phase of onboarding AI is about automating the simple, repetitive tasks that are currently carried out by, and generally frustrate, human employees. The goal is to make people's lives easier and free them up to focus on more interesting, higher-value-added work. The process is not very different from that of training a new human assistant: the trainee learns by watching the person s/he is shadowing, by performing the tasks her/himself, and by asking questions. The principles in implementing AI as an assistant are much the same.
Data sorting is a popular task for AI assistants. Since the mid-1990s, businesses have employed recommendation systems to help customers sort through hundreds of goods and identify the ones most relevant to them, with Amazon and Netflix among the industry leaders in this technology.
This kind of data sorting is nowadays needed for an increasing number of businesses. For instance, when picking which companies to invest in, portfolio managers have access to considerably more information than they can reasonably digest, and new information is constantly being released, adding to the historical record. In this case, software can simplify the work by automatically screening stocks based on predetermined investment criteria. Meanwhile, NLP can determine which news is most pertinent to a corporation and may even gauge how analysts’ reports generally feel about an impending corporate event. A pioneer in the adoption of such technology in the workplace is London-based investment company Marble Bar Asset Management (MBAM), which was created in 2002. To assist portfolio managers in sorting through massive amounts of information regarding business events, news developments, and stock movements, it has built a cutting-edge platform called RAID (Research Analysis & Information Database).
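The screening step described above can be sketched in a few lines. The criteria, thresholds and stock data below are purely illustrative assumptions, not a description of MBAM's actual RAID platform:

```python
# Illustrative rule-based stock screen: the tickers, fields and thresholds
# are invented for this sketch, not taken from any real system.
stocks = [
    {"ticker": "AAA", "pe_ratio": 12.0, "dividend_yield": 0.04, "market_cap_bn": 25},
    {"ticker": "BBB", "pe_ratio": 45.0, "dividend_yield": 0.00, "market_cap_bn": 3},
    {"ticker": "CCC", "pe_ratio": 9.5,  "dividend_yield": 0.03, "market_cap_bn": 110},
]

def screen(stocks, max_pe=15.0, min_yield=0.02, min_cap_bn=10):
    """Return tickers that pass every predetermined investment criterion."""
    return [s["ticker"] for s in stocks
            if s["pe_ratio"] <= max_pe
            and s["dividend_yield"] >= min_yield
            and s["market_cap_bn"] >= min_cap_bn]

shortlist = screen(stocks)  # only the stocks meeting all criteria survive
```

The point of such a filter is not sophistication but relief: it removes the bulk of clearly irrelevant candidates before a human ever looks at the list.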
Modelling what a human may do is another way AI might be useful. Anyone who has used Google or MS Excel is aware that as they input a search term or a formula, autocompletion prompts start to emerge. On a smartphone, predictive text offers a comparable approach to speed up typing. This type of user modelling, also known as judgmental bootstrapping, was created more than 30 years ago and is well suited for use in decision-making. When confronted with several options, AI would use this information to predict the choice that the employee is most likely to make based on that employee's prior selections, and it would then offer that choice as a starting point. This speeds up work rather than actually doing it. Of course, human users can freely overwrite as needed and are always in the driving seat. AI simply assists them by emulating or foreseeing their writing style.
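In its simplest form, the "suggest the most likely choice" behaviour described above is just a frequency count over prior selections. The context and choice names below are hypothetical:

```python
from collections import Counter, defaultdict

# Toy judgmental bootstrapping: predict the option an employee is most
# likely to pick in a given context, based solely on her prior selections.
history = defaultdict(Counter)

def record(context, choice):
    """Log one past selection made by the employee."""
    history[context][choice] += 1

def suggest(context):
    """Offer the historically most frequent choice as a starting point;
    the human can always overwrite it."""
    if not history[context]:
        return None
    return history[context].most_common(1)[0][0]

# Hypothetical prior behaviour on supplier invoices
record("supplier_invoice", "approve")
record("supplier_invoice", "approve")
record("supplier_invoice", "escalate")
```

Real judgmental bootstrapping fits a statistical model to the judge's past decisions rather than merely counting them, but the principle is the same: the system defaults to what the human would most plausibly have chosen.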
Employees should not find it difficult to use AI in this manner. We already do this in our daily lives when we use the autocomplete feature to fill in online forms or when we call upon Google or Alexa to carry out tasks for us. An employee can, for instance, specify guidelines for an AI assistant to follow when completing forms at work. Really and truly, several software tools used in the workplace today (like, say, approval workflow systems) already consist of collections of decision rules that have been developed by humans. The AI assistant encodes the rules in the situations in which the employee actually follows them. The employee's behaviour need not change in any way as a result of the AI assistant's learning, and neither should there be any conscious or unconscious attempt to educate or teach the assistant.
The Monitor
The second phase of AI onboarding is about entrusting AI with more substantive tasks that are currently carried out by human employees. It entails using the AI system to provide real-time feedback and to monitor a given situation. In this stage, AI is used to track how well a system is doing against specific KPIs and objectives. It can also be used to identify patterns of behaviour that may indicate a problem or an opportunity. The goal is to have AI take over the routine tasks so that people can focus on the non-routine ones. In this phase, AI functions as a monitor, providing employees with real-time feedback and suggestions for improvement. An example is an HR system that uses AI to monitor staff absences, tardiness or sickness levels in order to flag potential issues early on, before the probationary period has expired.
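The HR example above is, at its core, a threshold check against a KPI. The names, counts and threshold here are illustrative assumptions:

```python
# Hedged sketch: flag employees whose absence count in the current period
# breaches a KPI threshold, for early follow-up (data is illustrative).
absences = {"alice": 1, "bob": 6, "carol": 3}

def flag_absences(absences, threshold=5):
    """Return, sorted, the employees breaching the absence KPI."""
    return sorted(name for name, days in absences.items() if days > threshold)
```

A production monitor would add trend detection and pattern recognition on top, but the essential shape — compare observed behaviour against an objective, surface the exceptions — is the same.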
Another good example of how this might work in practice comes from Waze, the popular traffic and navigation satellite app owned by Google. Waze uses data collected from its users (location, speed, route taken) to generate real-time traffic information that helps other drivers make better decisions about which route to take and when to leave for their destination. The app also provides users with personalised recommendations for alternative routes and departure times based on live traffic conditions collected in this way.
Waze’s monitoring capabilities go beyond simply giving directions; the app also nudges users to change their behaviour in order to improve their driving experience (and that of other drivers). For instance, if a driver frequently arrives late for appointments because s/he fails to account for traffic conditions, Waze will start suggesting earlier departure times based on historical data. Similarly, if a driver tends to deviate from the recommended route, Waze will adjust its recommendations accordingly. By constantly monitoring user behaviour and making suggestions for improvement, Waze is able to nudge users towards better driving habits without them even being aware of it.
In much the same way, an AI assistant at work could constantly monitor employees’ performance and offer suggestions for improvement. For instance, if an employee regularly misses deadlines because s/he underestimates the amount of time needed to complete a task, the AI assistant could start suggesting longer timelines based on historical data. Similarly, if an employee tends to deviate from the recommended course of action, the AI assistant could adjust its recommendations accordingly. By constantly monitoring employee behaviour and making suggestions for improvement, the AI assistant could nudge employees towards better work habits without them even being aware of it.
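The "suggest longer timelines based on historical data" idea can be made concrete with a simple over-run ratio. The task history below is hypothetical:

```python
# Sketch: suggest a timeline by scaling the employee's own estimate with
# the average actual-to-estimated ratio of past tasks (data illustrative).
past_tasks = [(5, 8), (3, 4), (10, 14)]  # (estimated_days, actual_days)

def suggest_timeline(estimate_days, past_tasks):
    """Adjust a new estimate by the employee's historical over-run factor."""
    ratios = [actual / est for est, actual in past_tasks]
    avg_overrun = sum(ratios) / len(ratios)
    return round(estimate_days * avg_overrun, 1)
```

An employee who estimates six days but habitually over-runs by roughly 45% would thus be nudged towards closer to nine — without the monitor ever overriding the estimate itself.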
According to studies in psychology, behavioural economics, and cognitive science, human reasoning abilities are restricted and imprecise, particularly when it comes to statistical and probabilistic issues the likes of which are frequently encountered in business. Several research papers on court judgements have revealed that judges are more likely to grant political asylum before lunch than after, that they are more likely to reduce jail terms for defendants on their birthdays, and that they are more likely to do so if their favourite football team wins the day before. It is evident that justice may be served more effectively if software informed human decision-makers when a choice they were considering was at odds with their past choices or with the choice that would be predicted by an examination of only legal factors.
This sort of input can be provided by AI. In another study, the authors demonstrated that AI systems can anticipate asylum judgments on the day a case begins with about 80% accuracy, using a model made up of fundamental legal characteristics. The software now has learning capabilities that allow it to mimic a judge's decision-making process by drawing on that judge's prior rulings.
This approach extends effectively to different contexts. For instance, the system used by Marble Bar Asset Management’s portfolio managers alerts them when they are considering buy or sell decisions that could increase the overall portfolio risk, such as by increasing exposure to a specific sector or geography. This is done through a pop-up during the computerised transaction process and allows them to make the necessary adjustments. As long as business risk guidelines are followed, a portfolio manager may disregard the popup. However, the popup helps the portfolio manager in reconsidering their choices and contributes to the avoidance of costly mistakes.
Thanks to machine learning, AI can be programmed to predict precisely what a user would do in specific circumstances (absent lapses in rationality owing to, for example, overconfidence or fatigue). The system can alert a user if a choice they are about to make conflicts with their previous choices. This is extremely useful when many decisions have to be made quickly and human employees may be fatigued or preoccupied.
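The consistency check described above can be sketched as: predict the decision the user's own history implies, and nudge only on a mismatch. The case types and decisions below are invented for illustration:

```python
from collections import Counter

# Sketch of a consistency check: compare a pending decision with the one
# implied by the user's prior choices, and nudge on mismatch (toy data).
def predicted_choice(prior_choices, case_type):
    """Most frequent past decision for this type of case, if any."""
    relevant = Counter(d for t, d in prior_choices if t == case_type)
    return relevant.most_common(1)[0][0] if relevant else None

def check(prior_choices, case_type, pending_decision):
    """Return a nudge message when the pending decision is inconsistent."""
    expected = predicted_choice(prior_choices, case_type)
    if expected is not None and expected != pending_decision:
        return f"Nudge: your past {case_type} decisions suggest '{expected}'"
    return None

prior = [("asylum", "grant"), ("asylum", "grant"), ("asylum", "deny")]
```

Crucially, the function only returns a message; the human remains free to proceed — the design keeps the nudge advisory rather than blocking.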
AI is not, of course, always right. Under specific circumstances, AI can guide a human in the wrong direction rather than just correcting for certain behavioural biases, since its proposals don’t always take into account certain accurate private facts to which the human decision maker is privy. Because of this, utilising it in such circumstances should resemble a conversation in which the algorithm gives nudges based on the data it has access to, and the human teaches the AI by explaining why they chose to ignore a certain nudge. By doing this, the AI becomes incrementally better at what it does while maintaining the autonomy of the human decision-maker.
Regrettably, a lot of AI systems are designed to usurp that autonomy. Employees frequently can’t authorise a bank transaction when an algorithm flags it as potentially fraudulent, for example, without first getting approval from a supervisor or even an external auditor. Customers and customer support personnel sometimes find this to be a source of nuisance and irritation because it may sometimes be very hard to reverse a machine’s choice. Employees are frequently unable to challenge AI decisions since the reasoning behind them is often obscure, even when errors have been made. This is not so much a feature of AI as it is a product of the corporate control requirements that have been hard-coded into the system when the system was being designed.
When computers gather information about people's decisions, privacy is yet another serious concern. In addition to giving humans a choice over how they interact with AI, we also need to ensure that whatever information it gathers about them is kept private and securely encrypted. The technical team and management should be structurally and functionally separate; otherwise, employees may desist from interacting with the system for fear of suffering the consequences when it makes mistakes. All interactions should also be logged in detail.
In order to achieve organisational consistency in norms and practices, organisations also need to establish guidelines in relation to how to build and engage with AI. The criteria underlying the threshold beyond which a nudge becomes a necessity, the circumstances under which an employee should follow the AI’s instructions or refer it to a superior rather than accept or reject it, and the level of predictive accuracy necessary to show a nudge or to provide a reason for one are all possible inclusions in these rules and need to be catered for accordingly in the systems design.
We advise managers and systems designers to include employees in the design process to help them feel in control during Stage 2. This entails engaging employees as experts to define the data that will be used and to establish the truth, acquainting them with the models during development, asking them to define their wish lists for the new system and providing training and interaction as those models are deployed. Employees will get an understanding of how the models are created, how the data is handled, and how the algorithms arrive at providing their recommendations.
The Coach
The third stage of AI adoption is about using the technology to provide employees with feedback and guidance on their performance. In this case, rather than simply monitoring what people do, AI is used to offer advice on how they could improve. This might take the form of real-time suggestions during a task (e.g., "you might want to try doing X instead of Y") or it could be more generalised feedback given post hoc, i.e. after an event has taken place, like, for example, "here are some things you did well in your last presentation…".
In recently-published surveys undertaken by polling organisations that are independent of each other, between 55% and 65% of respondents (depending on which survey you look at) claimed that it would be ideal for them to get performance feedback on a daily or a weekly basis.
The issue is that only through the thorough examination of important choices and behaviours can strengths and areas for development be revealed. This calls for the documentation of expected results and their comparison with what really transpired over a period of time. Consequently, feedback given to employees often comes from their hierarchical superiors during a formal or semi-formal review rather than at a time or in a manner of their choice. This is problematic since, as Tessa West of New York University discovered in recent neuroscience research, individuals respond better to feedback when they feel more in control of the dialogue and that their autonomy is safeguarded (for example, by being allowed to decide when it is delivered). There is also the fact that pay rises are usually tied to this feedback exercise, which puts employees on the defensive by default, rather than making them own up to their shortcomings and finding ways of getting the right assistance to be able to improve.
AI could be able to provide a solution for this issue. Employees might quickly receive feedback from an AI system, thereby allowing them to assess their own performance and think back on variations and mistakes. They could get a better understanding of their decision-making processes and practices by receiving a monthly report that analyses data from their prior conduct while making such feedback impersonal and while severing the tie that there might be between the manager’s mood on the day s/he is providing feedback and the feedback received. A few organisations, mostly in the financial industry, are already using this approach. At MBAM, for instance, a data analytics system that records investment choices at the individual level provides feedback to portfolio managers intended to teach them practical and data-backed lessons on the spot.
The data can reveal various and fascinating biases held by portfolio managers. Some investors may hold onto underperforming assets longer than they ought to because they are more loss-averse than others. Others may be overconfident and take a position in an investment that is too large. The analysis recognises these behaviours and proclivities and, like a coach, offers tailored feedback that draws attention to behavioural changes over time while making recommendations on how to make better judgments. However, it is the portfolio managers who choose what to take on board from the feedback provided. The management of MBAM thinks that this "trading upgrade" is becoming a key differentiator that both enhances the organisation's labour market appeal and aids in the development of portfolio managers, thus resulting in greater value added for clients and shareholders alike.
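One concrete bias check such a coaching system might run is the disposition effect: holding losers longer than winners. The trade data and the 1.5× threshold below are illustrative assumptions, not MBAM's actual analytics:

```python
# Sketch of a disposition-effect check: does this manager hold losing
# positions materially longer than winning ones? (Trade data invented.)
trades = [
    {"pnl":  1200, "days_held": 10},
    {"pnl":  -300, "days_held": 45},
    {"pnl":   800, "days_held": 12},
    {"pnl":  -150, "days_held": 60},
]

def disposition_signal(trades):
    """Flag if losers are held, on average, over 1.5x longer than winners."""
    winners = [t["days_held"] for t in trades if t["pnl"] > 0]
    losers = [t["days_held"] for t in trades if t["pnl"] < 0]
    if not winners or not losers:
        return False
    return sum(losers) / len(losers) > 1.5 * (sum(winners) / len(winners))
```

When the signal fires, the coach surfaces it as feedback; whether to change trading behaviour remains the manager's call.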
Additionally, an AI coach learns from the choices made by an empowered human employee, much as a competent mentor gains knowledge from the insights of those being mentored. When a human disagrees with the coach, that disagreement generates new data that will alter the AI's implicit model. For instance, a portfolio manager might inform the system why he or she decided not to trade a highlighted stock due to recent corporate developments. Thanks to this feedback, the system continuously gathers data that it examines to offer fresh insights.
Employees are more likely to see AI as a secure feedback channel that strives to assist rather than judge performance if they can relate to and manage interactions with it. To do this, designing the appropriate interface always helps. For instance, at MBAM, trade-enhancing features like graphics are tailored to each portfolio manager’s tastes, preferences and choices.
As in Stage 2, it is crucial to involve employees in the system design. People will be even more afraid of losing their control when AI acts as a coach than when it acts as either an assistant or a monitor. In this delicate adoption stage, it may appear to be both a companion and a rival, and no human wants to feel inferior to, or less intelligent than, a machine. More serious worries may exist about privacy and autonomy. Since being honest is a requirement of working with a coach, some people might be reluctant to open up to a coach whose algorithms nobody really understands and which might divulge unpleasant information to higher levels of management or HR staff.
There are, of course, drawbacks to using AI in the ways that the first three stages outline. In the long term, at least up to this third stage, new technologies are likely, on balance, to produce more employment than they destroy, but in the short term, labour market disruptions that require costly retraining will inevitably transpire.
The Autonomous Colleague
The fourth and final stage of AI adoption sees the technology taking on more substantial roles within organisations – essentially becoming an autonomous colleague that works alongside humans rather than simply assisting them with, or coaching them on, their work. In this stage, AI is used to carry out entire tasks or workflows independently of humans or in liaison with them. This might include things like processing expense claims, approving leave requests, generating written reports or creating parametric designs. The key difference here is that humans are not involved in these activities at all except for providing inputs or feedback – the tasks are entirely carried out by the AI system except in instances where regulation disallows it and mandates a human-based final decision.
Despite the fact that the technology to produce this form of combined human-machine intelligence already exists, this stage is very difficult to get to and requires an AI-centric organisation. To ensure that individuals can trust the AI as much as they would a human partner, any such integration of AI must go through great pains to avoid introducing new or existing biases. It also needs to respect privacy concerns. Given the abundance of evidence highlighting how difficult it is to develop trust among humans, it is a very significant barrier in and of itself.
Regulation, Trust and Impact
Understanding has traditionally been a fundamental building block for establishing trust in human interactions. Yet in constructing the edifice of understanding it is not easy to identify what the components of an explanation should be, let alone those of a good explanation, and AI has traditionally been built in a black-box manner in which not even the original programmers knew how the system evolved once it was launched. Trust, even if misplaced (something that can only be established post hoc, with hindsight), can only arise when someone is aware of and understands another's ideals, aspirations and goals, and when that other party has not provided evidence of being inimical to the person's best interests. Since employees' fear of AI is typically based on a lack of knowledge of how AI functions, understanding is potentially well suited to fostering human-AI partnerships.
A newly proposed European Commission regulation, the Artificial Intelligence Act (AI Act), aims to introduce a common regulatory and legal framework for AI encompassing all sectors (except the military), and among the several things it proposes to enforce are transparent, explainable and documented AI systems. The proposed regulation classifies AI applications by risk and regulates them accordingly. While the AI Act takes a very heavy-handed approach that will slow down AI development in Europe and raise its development costs, the statutory documentation will also provide a basis for examining what people consider plausible justifications for AI choices. When an explanation is given in terms of a logical combination of features (i.e., "this outcome transpired because the case had this or that characteristic"), AI decisions become more persuasive to humans. AI systems will inevitably become more open as research into what makes AI explainable develops and as AI tools that explain other AI programs and their decisions come to the fore, and this will foster confidence.
It has never been easy to adopt new technologies, and the greater the impact a technology has, the more difficult adoption becomes because of the disruption it leads to. AI is particularly challenging to adopt precisely because of its wide-ranging impacts. However, adoption will go well if it is undertaken deliberately, with meticulous planning and without the unnecessary rush that characterises several of the projects in this space. This is exactly why, irrespective of the applicable legal frameworks, organisations must make sure that AI is responsibly designed and developed, especially in terms of transparency, decision autonomy and privacy, and that it involves the people who will be using it and interacting with it. If not, people will understandably be wary of being limited – perhaps even supplanted – by machines that are making all kinds of judgments and executing all sorts of tasks.
The key to developing a trustworthy connection with AI is to get over these concerns. Such concerns do not stem from AI systems themselves but are a function of human nature. But then again, it is humans who set the rules for all of the four AI Stages discussed above. AI may, with careful design, become a genuine colleague in the workplace, enhancing human intuition and creativity by processing vast amounts of diverse data quickly and consistently and taking over the brain-numbing, boring repetitive tasks that some jobs are all about.
The impact of AI adoption will vary depending on the stage of adoption that an organisation is at. In the early stages, the main benefits are likely to be increased efficiency and productivity as simple, repetitive tasks are automated. As organisations move into the later stages, however, the benefits will become more strategic in nature as AI starts to take on more complex roles within the organisation and to perform those roles faster and with greater consistency and accuracy.
At present, most organisations are still in the early stages of adoption (i.e., Stage 1 or 2). This means that they are primarily focused on using AI to automate simple tasks and improve operational efficiency. In the future, however, a shift towards using AI for more strategic purposes such as decision-making, customer service and innovation should be expected.
There are potential risks associated with each stage of AI adoption. For example, in stage 1 there is a risk that employees may feel threatened by the introduction of chatbots or digital assistants as they fear that these technologies will replace them in their jobs. In stage 2, meanwhile, there is a risk that employees may feel ‘Big Brother’ is watching them if they are constantly being monitored by an AI system. In stage 3, the risk is that employees may feel patronised or infantilised if they are given constant feedback and guidance from an AI coach – particularly if this feedback is perceived to be negative or critical in nature.
Notwithstanding this, it is important to note that these risks can be mitigated through careful planning and implementation – for example, by ensuring that employees understand why chatbots are being introduced and what role they will play within the organisation (stage 1), or by providing employees with training on how to interpret and use data from an AI monitoring system (stage 2).