How AI is and will be Impacting Businesses, Societies and Polities in which they Operate

The most significant impact of AI on businesses and societies is – and will increasingly be – the transformation of how they operate. Artificial intelligence is the modern-day equivalent of electricity and the transformational revolution that electrification brought about, and just like electricity, AI has the power to transform every industry in existence and has already begun doing so. The use of this new “electricity” is not, however, free from risk. Just as the original electricity came with the risk of electrocution, almost-total dependence and implementation difficulties, AI comes with certain risks, some of which we know all too well and others which we are only just starting to discover.

In the sphere of business, AI has the potential to automate many tasks that are currently done manually, even white-collar tasks that until a mere decade ago were out of reach for even the best available automation technologies. This not only saves time but also significantly reduces recruitment, payroll, training, litigation and human-error costs, as well as downtime. In addition, it frees up low-skilled and highly-skilled employees alike to focus on more important tasks that are still more costly and difficult to automate, or where regulatory requirements preclude automation. The same is true for other areas of business such as marketing and sales. With the help of AI, businesses can automate repetitive tasks such as email marketing, Search Engine Optimisation (SEO) and lead generation.

AI will enable businesses to automate many of their processes and functions. This will result in increased efficiency and productivity as well as reduced costs. Additionally, AI will allow businesses to better understand and predict customer behaviour. This will enable them to provide more personalised experiences and products/services that are better tailored to customer needs. Finally, AI will help businesses make better decisions by providing them with insights that are not possible to obtain through human intelligence alone.

AI will also transform how businesses interact with their customers. By way of example, Natural Language Processing (NLP)-powered chatbots can provide a more personalised experience by understanding a customer’s needs and preferences and then making relevant recommendations accordingly. The same can be done using voice, cutting down the time it takes for a customer to reach the information s/he wants compared with older technologies such as interactive voice response (IVR), where customers are read numbered options and must dial a number to reach the next set of options or to be connected to someone specific.
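To make this concrete, here is a minimal Python sketch of intent routing for a customer-facing chatbot: an incoming message is matched to the closest known intent by TF-IDF similarity and answered with that intent’s canned response. The intents, example phrasings and responses are invented placeholders, and a production chatbot would rely on a far richer NLP pipeline than this.

```python
# Minimal intent-routing sketch (illustrative only; intents and responses are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

INTENT_EXAMPLES = {
    "opening_hours": "what time do you open and close",
    "reset_password": "I forgot my password and cannot log in",
    "talk_to_agent": "I want to speak to a human representative",
}
RESPONSES = {
    "opening_hours": "We are open weekdays from 9am to 5pm.",
    "reset_password": "You can reset your password from the login page.",
    "talk_to_agent": "Connecting you to the next available agent.",
}

vectorizer = TfidfVectorizer()
intent_names = list(INTENT_EXAMPLES)
intent_matrix = vectorizer.fit_transform(INTENT_EXAMPLES.values())

def route(message: str) -> str:
    """Return the canned response of the intent most similar to `message`."""
    scores = cosine_similarity(vectorizer.transform([message]), intent_matrix)[0]
    return RESPONSES[intent_names[int(scores.argmax())]]

print(route("can't log in, password not working"))
print(route("when are you open on Monday?"))
```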

In the future, AI will become even more embedded in businesses as they look to increase efficiency, gain and maintain a competitive edge, and avert business and operational risks created by bad actors who are themselves using AI.

However, AI is not without its risks; it also has some very dark sides. One of the main risks is that AI-powered fraud, cyberthreats, scams, malicious fake or biased news, profiles and attributions, frame-ups and other malicious activities will become more sophisticated and difficult to detect. Additionally, as AI gets better at understanding and predicting human behaviour, there is a risk that it could be used to manipulate people for nefarious purposes. This is something AI can already do very well today, and it will only get better over time as people inadvertently post information they should be keeping to themselves on social media and other platforms open to web scraping, from which it is subsequently fed to AI algorithms.

A noteworthy category of risks may be referred to as technical and system risks. A few salient examples of this risk category, which are by no means intended to be exhaustive, follow hereunder.

AI can be subject to data poisoning, in which training data is deliberately manipulated by humans or even by another algorithm to cause a machine learning system to perform poorly.

In 2017, Google’s Street View cars captured what appeared to be a mysterious figure in the clouds above Southern California. It turned out that it was just one of numerous large inflatable Mylar balloons released by artist Zaria Forman as part of her work Sky Art. However, at the time it happened there were concerns that some sort of AI-powered camera might mistake the balloon for something more sinister like an alien spaceship.

A more malicious form of data poisoning is when someone deliberately alters training data in order to cause an AI system to fail. For example, adding noise or changing labels on images used to train an object recognition system. This can be done for reasons ranging from political (sabotaging facial recognition systems used by law enforcement) to personal (tricking a self-driving car into thinking stop signs are yield signs).
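As a rough illustration, the Python sketch below flips the labels of a growing fraction of a training set and measures how a simple classifier degrades. The dataset, model and poisoning fractions are arbitrary choices for demonstration only and do not reflect any particular production system.

```python
# Label-flipping data poisoning on a toy dataset (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(labels: np.ndarray, fraction: float, rng: np.random.Generator) -> np.ndarray:
    """Return a copy of `labels` with a random `fraction` of entries flipped."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

In this toy setting, test accuracy typically falls as the poisoned fraction grows, which is exactly the kind of silent degradation a defender needs to watch for.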

Data poisoning is a serious – arguably the most serious – technical / system threat to any AI system that relies on training data, especially if that data is sourced from the internet, but it is also a concern where attackers tamper with internal data through malicious intrusions. It’s important to carefully vet any data used for training and to be aware of the possibility of maliciously-tampered data.

Social engineering, a type of attack where someone tricks people into giving them information or access to systems they shouldn’t have, is also a risk. This can be done in person, over the phone, or even online through email or social media. A classic example is phishing, where someone sends an email that appears to be from a legitimate company (like your bank) but actually contains links to malicious websites designed to steal your login credentials, or to install malicious code that harvests credentials and other personal information. Another common tactic is pretexting, where someone pretends to be someone else (like a customer service representative) in order to get sensitive information like credit card numbers out of unsuspecting victims.

Social engineering attacks are becoming more common as we become increasingly reliant on technology. With so much personal information available online, it’s relatively easy for attackers to find enough details about their targets to make their scam emails or phone calls seem believable. And with the rise of AI-powered chatbots and voice assistants, it’s only going to get easier for attackers to impersonate other people and trick victims into giving them what they want, and to do it at a scale never seen before. Moreover, if social engineering attacks manage to get elevated access to AI data warehouses linking billions or even trillions of different data points, the damage can potentially be ruinous.

Another AI attack vector could take the form of adversarial examples. Simply put, these are inputs specifically designed to fool machine learning models. By way of illustration, adding a small amount of noise to an image that is unrecognisable to humans can, if the noise is carefully planned, cause a computer vision system to misclassify it. These examples require detailed knowledge of AI, but can be used to attack any machine learning system, whether it’s used for object recognition, facial recognition, or even fraud detection. Adversarial examples are particularly insidious because they can be generated automatically by algorithms designed specifically for that purpose, and once generated, they can be used to attack any AI system that uses the same algorithm or similar ones without requiring any prior knowledge about the target system. This makes them difficult to defend against. It also makes it hard to know if a machine learning model has been attacked until it is too late.
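The toy sketch below illustrates the gradient-sign intuition behind many adversarial attacks, applied to a hand-rolled logistic model: each feature is nudged slightly in the direction that increases the model’s loss, and the prediction flips even though the per-feature change is small. The model, the input and the perturbation budget are illustrative assumptions rather than a description of any real attack tool.

```python
# Gradient-sign adversarial perturbation on a toy logistic model (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                 # weights of a toy logistic classifier
x = 2.0 * w / (w @ w)                   # clean input chosen so the logit w @ x equals 2
y = 1                                   # true label of the clean input

def prob_true_class(x):
    """P(class 1) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# For a logistic model, the gradient of the cross-entropy loss with respect
# to the input is (p - y) * w; the attack steps along its sign.
epsilon = 0.25                          # per-feature perturbation budget (assumed value)
grad_x = (prob_true_class(x) - y) * w
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean input:       P(true class) = {prob_true_class(x):.2f}")
print(f"adversarial input: P(true class) = {prob_true_class(x_adv):.2f}")
```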

Yet another AI attack vector could take the form of hardware vulnerability exploitation. Most machine learning systems are based on neural networks, which are composed of many interconnected processing units (which we refer to as “neurons”) that work together to perform computations. These neurons are typically implemented as software programs running on general-purpose processors like CPUs or GPUs. However, there is a growing trend towards using dedicated hardware accelerators like TPUs, FPGAs and ASICs to speed up neural network computations.

These hardware accelerators can be faster and more energy-efficient than general-purpose processors for a given amount of work, but they also come with their own set of security risks. For example, it’s possible to physically tamper with a TPU, FPGA or ASIC in order to insert malicious code that causes the device to behave in unexpected ways. This type of attack is difficult to detect and can be used to cause all sorts of problems, from denial-of-service attacks that disable systems to data leaks that expose sensitive information.

Hardware vulnerabilities are a serious concern for any system that uses AI accelerator chips, especially if those chips are made by third-party suppliers. It’s important to carefully vet any hardware used in critical applications and have a plan for dealing with compromised devices.

Algorithmic bias and discrimination, though not an attack vector, is another AI risk. Algorithmic bias occurs when a machine learning algorithm produces results that favour one group over another (e.g. men over women, white people over black people). This can happen for a variety of reasons, including the use of biased training data or the selection of inappropriate evaluation metrics. If an algorithm is trained on data that is heavily skewed towards one group, for instance, then it is likely to perform poorly on other groups. This is a basic statistical consequence of training on an unrepresentative sample, not the result of some inherently bad actor trying to wreak havoc. Similarly, if an algorithm is optimised for an aggregate metric such as overall accuracy, it may perform well for the dominant group while performing poorly, and unfairly, for under-represented ones.

Algorithmic bias can lead to discrimination, where people are treated unfairly because they belong to a particular group. If a facial recognition system is trained mostly on images of white men, for example, then it may be more likely to misidentify black women, including flagging them incorrectly as criminal suspects. This type of discrimination can have serious real-world consequences and needs to be carefully avoided when developing AI systems.
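The sketch below shows one simple way such bias can arise and be detected: a classifier is trained on data dominated by one group and then evaluated separately on each group. The synthetic data, group sizes and model are illustrative assumptions only.

```python
# Per-group evaluation surfacing bias from an unrepresentative training set (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic binary-classification data whose decision boundary depends on `shift`."""
    X = rng.normal(size=(n, 5)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is heavily under-represented.
Xa_train, ya_train = make_group(5000, shift=0.0)
Xb_train, yb_train = make_group(100, shift=1.5)
X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on balanced, held-out samples from each group.
for name, shift in (("group A", 0.0), ("group B", 1.5)):
    X_test, y_test = make_group(2000, shift)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

Reporting metrics per group, rather than only in aggregate, is one of the simplest ways to catch this kind of disparity before a system is deployed.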

Another category of risks revolves around the sociological, political and regulatory sphere.

The risk that invariably tops the list is job losses. As AI automates more tasks and functions, labour will initially be unlocked and redeployed to higher-value tasks, with some jobs lost to AI and others created by its adoption. Over time, however, as AI becomes ever more cognitively capable and computationally fast and efficient, we may reach a point where displaced human labour can no longer be redeployed elsewhere, because the limits of human capabilities will have been surpassed by AI. The end result is that there will be less work for humans to do and thus fewer jobs available. This could very well be a good thing, enabling humans to work less or not at all and freeing up time for leisurely, creative and self-fulfilling pursuits. However, if economic, AI and political power becomes concentrated in the hands of a handful of people, it could instead lead to mass unemployment, social unrest and strife, and ultimately the unfolding of a nasty dystopia.

More immediately, AI also poses a threat to privacy as businesses collect ever-more data on individuals to be able to refine their AI algorithms and to make them work better. If at any point in time this data falls into the wrong hands, it could be used to exploit or blackmail individuals or to steal their identities.

Overall, AI is a powerful technology that has the potential to transform businesses and society. Whether it ultimately transforms them for better or for worse will depend on the social, economic, political, regulatory and technological framework within which this transition happens and on the incentives that framework gives rise to. AI clearly poses risks that need to be managed carefully and meticulously, ensuring on the one hand that its full benefits can be enjoyed by society at large, and on the other that its risks are prevented from materialising without fettering the development and proliferation of AI technologies.

Do you believe that AI can be of help to your organisation?

At Algorithmic BrAIn, one of the Equinox Group companies, we have developed a comprehensive staged checklist to ensure that you leave none of your important considerations out when planning your AI journey. We’d love to help you get this right, and if you think we can be of assistance, we’d be thrilled to hear from you.