The AI Imperative

What is Artificial Intelligence (AI)? At its core, AI is a branch of computer science concerned with the creation of intelligent agents: systems that can reason, learn, and act autonomously. AI research deals with the question of how to create computers capable of intelligent behaviour. In practical terms, AI applications can be deployed in a wide variety of domains, such as healthcare, finance, manufacturing, and logistics. Some of the most prominent areas of AI include expert systems, natural language processing (NLP), robotics, and machine learning.

What is Machine Learning (ML)? Machine learning is a subfield of AI concerned with developing algorithms that learn from data and improve their performance over time; in other words, machine learning algorithms improve automatically as they are given more data. There are two main types of machine learning: supervised and unsupervised. Supervised learning algorithms learn from labelled training data (i.e., data that has already been classified or labelled by humans), whereas unsupervised learning algorithms learn from unlabelled data. Popular applications of machine learning include facial recognition, spam detection, recommender systems (e.g., Netflix recommendations), and self-driving cars.
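To make the distinction concrete, here is a minimal Python sketch (using scikit-learn and a toy dataset invented purely for illustration): the supervised classifier is given the human-supplied labels, while the unsupervised clustering algorithm is given the same points without them.

```python
# Minimal sketch: supervised vs unsupervised learning on a toy dataset.
# The data, model choices and settings are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Four 2-D points; the labels y are assumed to come from a human annotator.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])

# Supervised learning: the model is fitted on (X, y) and can then
# predict labels for points it has never seen.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.15], [0.85, 0.85]]))   # -> [0 1]

# Unsupervised learning: only X is provided; the algorithm discovers
# structure (here, two clusters) without ever seeing the labels.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                                   # cluster assignments
```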


The rise of Artificial Intelligence (AI) has been nothing short of phenomenal, and the trend will only accelerate in the coming years.

When it comes to the corporate sector, AI has already transformed how businesses operate and how they interact with their customers. A recent McKinsey report estimates that AI will add some USD 13 trillion to the global economy by 2030.

There are many reasons why AI is becoming so popular in business. For one thing, it helps businesses automate tasks that would otherwise be done manually, which not only saves time but also significantly reduces costs. For another, it allows businesses to gather and analyse data more effectively. With the help of AI and the increasingly powerful computers behind it, businesses can make better decisions based on data-driven insights, or, better still, give AI the rules and parameters within which it can make autonomous decisions without supervision.

These computers are merely the descendants of Joseph-Marie Jacquard’s programmable looms, which proved that information could be encoded and decoded, as well as mapped and translated. Together with the steam engines of Thomas Newcomen and James Watt, they ushered in the Industrial Revolution. The sophistication and sheer power of today’s computers, however, place them in a new class of machine: one that can grasp the symbols of language, music and programming and use them in ways that may seem creative from a human perspective.

AI “foundation models” represent a breakthrough in AI. Foundation models are the latest twist on “Deep Learning” (DL), a technique that rose to prominence ten years ago and now dominates the field of AI.

Deep learning is based on Artificial Neural Networks (ANNs), which are loosely modelled on the brain’s neural circuitry. Neural networks were invented in the 1940s, but they remained impractical for several decades because of limitations in computing power and data storage. Their key training method, “backpropagation” or “backprop”, was popularised in 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams; in 2006, Hinton and his colleagues showed how deep, many-layered networks could be trained effectively, reviving interest in the approach.

DL quickly became popular because it enabled significant advances in Computer Vision, speech recognition and synthesis, and other AI tasks that had long stymied traditional machine-learning approaches such as support vector machines (SVMs). DL also made it possible to automatically detect patterns of behaviour hidden within large quantities of streaming data—a capability known as real-time pattern detection or anomaly detection.
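As a rough illustration of the anomaly-detection task itself, the sketch below flags points in a data stream that deviate sharply from a running average. It uses a simple statistical rule rather than deep learning, and the window size and threshold are arbitrary; it is meant only to show what “real-time anomaly detection” asks of a system.

```python
# Illustrative sketch of anomaly detection on a data stream: flag values
# that deviate strongly from the mean of the recent past. This z-score
# rule is not a deep-learning method; it only illustrates the task.
from collections import deque
import statistics

def detect_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) for points more than `threshold` standard
    deviations away from the mean of the preceding `window` points."""
    history = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(history) == window:
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history) or 1e-9  # avoid dividing by zero
            if abs(x - mu) / sigma > threshold:
                yield i, x
        history.append(x)
```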

The most successful deep-learning systems have used a variety of techniques to achieve these results, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Reinforcement Learning (RL). But all of these methods share a common structure: they are hierarchical models that learn progressively more abstract representations of the data as it flows from the input layer to the output layer.

A DL model typically contains dozens or even hundreds of layers, each of which transforms the representation learned by the previous layer. The final layer is often a “softmax” layer, which outputs probabilities that sum to one; these can be interpreted as the likelihood that an input data point belongs to each of the classes in the output layer.
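To make that final step concrete, here is a minimal NumPy sketch of a softmax layer turning raw scores (logits) into class probabilities that sum to one; the scores themselves are invented for the example.

```python
# Minimal sketch of a softmax output layer (NumPy only).
import numpy as np

def softmax(logits):
    """Convert a vector of raw scores into probabilities that sum to one."""
    z = logits - np.max(logits)      # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Example: raw scores for three classes produced by the final layer.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)          # approximately [0.659 0.242 0.099]
print(probs.sum())    # 1.0 - interpreted as class likelihoods
```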

The backpropagation algorithm adjusts the weights in each layer of a DL model so that its predictions are as close as possible to the true labels in a training dataset. Backpropagation is a form of gradient descent, an optimisation technique whose origins go back to Augustin-Louis Cauchy in 1847.
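For intuition, the sketch below applies gradient descent to a one-parameter model; backpropagation applies the same update rule layer by layer, using the chain rule to compute each layer’s gradients. The data and learning rate are invented for the example.

```python
# Minimal sketch of gradient descent on a one-parameter model y = w * x,
# minimising the mean squared error against the true labels.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y_true = np.array([2.0, 4.0, 6.0, 8.0])   # generated by the "true" w = 2

w = 0.0                                    # initial weight
learning_rate = 0.05

for step in range(100):
    y_pred = w * x
    error = y_pred - y_true
    grad = 2 * np.mean(error * x)          # dL/dw for the mean squared error
    w -= learning_rate * grad              # the gradient-descent update

print(round(w, 4))                         # approaches 2.0
```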

Gradient descent is used in many Machine Learning (ML) algorithms, but it poses two challenges for Deep Learning:

  • The computational cost of backpropagating gradients through many layers can be prohibitive; and
  • Backpropagation can get stuck in local minima or, more generally, in regions where the error surface is flat or badly conditioned (plateaus, narrow ravines and saddle points). Both problems become more severe as the depth of a deep-learning model increases.

The current generation of deep-learning models overcomes these problems using a variety of techniques, including dropout, batch normalisation, residual connections, and gradient clipping. Even with these improvements, training a state-of-the-art deep-learning model can take days or weeks, and performance can still fall short of human levels on many tasks. However, as foundation models have grown in size, the gap has narrowed so significantly that general AI, thought until a few years ago to be half a century away, is now being spoken of as just around the corner.
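As an illustration of how several of these techniques appear in code, the PyTorch sketch below combines batch normalisation, dropout, a residual connection and gradient clipping in a single training step. The layer sizes, hyperparameters and dummy loss are arbitrary; this is a sketch of the ideas, not a recipe for a state-of-the-art model.

```python
# Illustrative PyTorch sketch of the stabilisation techniques mentioned above:
# batch normalisation, dropout, a residual (skip) connection, and gradient
# clipping. All sizes and hyperparameters are arbitrary.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim=64, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.BatchNorm1d(dim),   # batch normalisation
            nn.ReLU(),
            nn.Dropout(p_drop),    # dropout
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return x + self.net(x)     # residual connection: add the input back

model = ResidualBlock()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 64)            # a dummy batch of 32 examples
loss = model(x).pow(2).mean()      # a dummy loss, just to drive the update
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
optimiser.step()
```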

Foundation models are trained using a technique called self-supervised learning, which is a form of unsupervised learning. This is in contrast to most earlier deep-learning models, which are trained using supervised learning and therefore require labelled data.

Self-supervised learning algorithms learn to perform useful tasks without hand-labelled data, making use of whatever information is available in the data or the environment. For example, an algorithm might learn to navigate a three-dimensional space by starting from a random location and moving until it reaches its goal, generating its own feedback along the way. The key idea behind self-supervised learning is that the algorithm can automatically generate labels for the data it encounters, for instance by taking actions and observing their consequences, or by predicting parts of the data that have been hidden from it. This lets the algorithm learn without any human supervision, hence the name “self-supervised learning”.
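In the setting used by foundation models trained on text, those automatically generated labels are simply the next pieces of the data itself. The sketch below shows how next-word prediction pairs can be derived from raw text with no human annotation; the toy sentence and context size are invented for the example.

```python
# Minimal sketch of self-supervised label generation for next-word prediction:
# every (context, target) pair is derived from the raw text itself,
# so no human labelling is required.
raw_text = "foundation models learn from unlabelled text at enormous scale"
tokens = raw_text.split()

context_size = 3
pairs = []
for i in range(len(tokens) - context_size):
    context = tokens[i : i + context_size]       # the "input"
    target = tokens[i + context_size]            # the automatically derived "label"
    pairs.append((context, target))

for context, target in pairs[:3]:
    print(context, "->", target)
# ['foundation', 'models', 'learn'] -> from
# ['models', 'learn', 'from'] -> unlabelled
# ['learn', 'from', 'unlabelled'] -> text
```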

Foundation models, too, learn increasingly abstract representations of the data as it flows through the model from the input layer to the output layer, and the backpropagation algorithm adjusts the weights in each layer so that the model’s predictions are as close as possible to the training targets derived from the data.

At the hardware level, the current breakthroughs in AI have only been possible because of the increase in computing power, and quantum computing promises to take this to new heights.

There are two main types of quantum computer currently being commercialised: those that use trapped ions and those that use superconducting qubits. Trapped-ion processors have been around for some time, but they are expensive and demanding to maintain; companies such as IonQ have commercialised this approach.

Superconducting qubits use superconducting electrical circuits to store and process information, and they currently dominate the commercial landscape. D-Wave Systems, founded in 1999, was the first company to commercialise superconducting-qubit machines, in the form of quantum annealers, and both IBM and Google have long-running superconducting-qubit programmes of their own.

In October 2019, Google claimed “quantum supremacy”, demonstrating that its 53-qubit processor could perform in 200 seconds a calculation that, it estimated, would take 10,000 years on the world’s most powerful classical computer. The claim was met with some scepticism from the academic community, but it remains an important milestone, particularly given the impressive improvements being made to superconducting-qubit computers. Another milestone was reached when researchers at Xanadu in Toronto used their photonic quantum chip Borealis to solve, in just 36 microseconds, a Gaussian boson sampling problem that would have taken a classical supercomputer about 9,000 years. And the costs of superconducting-qubit quantum computers are starting to fall.

It is not easy to place all these developments in a single pipeline and make sense of them and of the links between them. Doing so requires a multidisciplinarity, an ability to connect the dots, to process data and to exercise foresight, that few hyper-specialised professionals in today’s business world possess: it calls for an understanding of theoretical quantum physics, quantum entanglement, quantum computing, several AI sub-specialisations, and a good dose of statistical and economic foresight too. Indeed, it sounds like the perfect job for an AI engine, with the catch that if you cannot get there without AI, you will never have the AI infrastructure to do the job for you.

With all these developments happening at breakneck speed, and given the fundamental importance of AI to all areas of human endeavour, particularly business, it is essential to understand the implications AI will have on the business world, whether or not a business decides to adopt it. These decisions are taken in a context where competitors will increasingly adopt the technology to cut costs, increase productivity, improve scalability, grow market share and provide a better, more consistent customer experience.

This Equinox Insights article is a step in that direction. It aims to set out the salient issues that a business needs to consider in order to make sense of the ongoing developments in AI, and in the ancillary fields that have an impact on it.


Do you believe that AI can be of help to your organisation?

At Algorithmic BrAIn, one of the Equinox Group companies, we have developed a comprehensive staged checklist to ensure that no important consideration is left out when planning your AI journey. We’d love to help you get this right, and if you think we can, we’d be thrilled to hear from you.