If you are even a bit technology-savvy, you’ve probably run into at least one or two posts about AI in your feeds. There are plenty of articles out there exploring what AI could bring, and they tend to branch into two rather extreme theories. The first is that, before long, humanity might need to adopt a universal basic income because we will no longer be needed for work and AI will do everything for us. The second is that the future of humanity is doomed the moment AI suddenly becomes conscious, realizes it doesn’t need humans, and takes over the planet.
I must admit there is a bit of plausibility in both theories, but any serious analysis should rest on facts and figures, not on fairy tales, Sci-Fi movies, or undocumented opinions. So, before we jump to conclusions, let’s have a look at the current state of AI, and maybe we can catch a glimpse of what the future might look like.
What we call artificial intelligence nowadays is a broad discipline that pursues the objective of creating an autonomous form of intelligence within machines (computers). Several important terms must be defined so that we get a clear understanding of what exactly we are referring to when discussing AI.
Machine Learning
Machine Learning (ML) is the ability of machines to learn to perform different tasks or solve problems. Machine Learning algorithms are simply more advanced algorithms that apply different learning methods (e.g., statistical analysis) to data sets; in other words, they are algorithms that learn by themselves. In most cases, when you hear people talking about AI, they are actually referring to ML.
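To make that idea a bit more concrete, here is a minimal sketch in plain Python with made-up numbers (not any particular library or product): the program is never told the rule behind the data; it estimates the rule from the examples it sees.

```python
# Hypothetical training data generated by an unknown rule (here, y = 2x + 1)
data = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9)]

w, b = 0.0, 0.0          # model parameters, starting from a blank guess
lr = 0.01                # learning rate: how strongly each example nudges the model

for _ in range(5000):    # repeatedly adjust w and b to reduce the prediction error
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x   # follow the gradient of the squared error w.r.t. w
        b -= lr * error       # follow the gradient of the squared error w.r.t. b

print(f"learned rule: y = {w:.2f}*x + {b:.2f}")   # close to y = 2.00*x + 1.00
```

No one codes the “2x + 1” relationship into the program; it emerges from exposure to data, which is the essence of machine learning.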
Artificial General Intelligence (AGI) is the intelligence of a machine that can actually act as a human, i.e., solve a variety of complex problems and experience consciousness (also known as strong AI). Although this is the ultimate goal for many researchers, at this point it remains wishful thinking, and you will only find such scenarios in Sci-Fi movies (TAU provides a good representation of the concept in a rather middling movie). You will probably find different variations on these definitions.
Machine learning has become a science in its own right, borrowing concepts from other disciplines such as statistics, neuroscience, computer science, biology, genetics, etc. Over time, different approaches have been developed to help machines do the learning by themselves. This Wikipedia page provides a good introductory overview of the available methods, and for more advanced explanations check out The Master Algorithm by Pedro Domingos.
In Domingos’ view, there are five big categories of approaches:
- “Symbolists” build learning algorithms based on symbols and inverse deduction, which fills in missing data so that a deduction can be completed.
- “Connectionists” follow the way the brain works: decisions emerge from the strength of the connections (like the strength of a synapse) between artificial neurons, adjusted through an algorithm called back-propagation.
- “Evolutionaries” use genetic programming which evolves computer programs by copying nature’s models of mating and evolution.
- “Bayesians” use probabilistic inference and the famous Bayes’ theorem and its derivatives.
- “Analogizers” work by analogy, recognizing similarities between cases (e.g., patients having the same symptoms); a toy sketch of this idea follows the list.
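Here is a hedged sketch in plain Python of an “analogizer” at work, using hypothetical symptom data: a new patient is diagnosed by looking at the most similar patients seen before (a nearest-neighbour scheme).

```python
# Each record: (fever, cough, fatigue) as 0/1 symptoms, plus a known diagnosis.
# The data and labels are purely illustrative.
patients = [
    ((1, 1, 0), "flu"),
    ((1, 1, 1), "flu"),
    ((0, 1, 0), "cold"),
    ((0, 0, 1), "cold"),
]

def similarity(a, b):
    """Count how many symptom values two patients share (higher = more alike)."""
    return sum(1 for x, y in zip(a, b) if x == y)

def diagnose(new_patient, k=3):
    """k-nearest neighbours: vote among the k most similar past patients."""
    ranked = sorted(patients, key=lambda p: similarity(new_patient, p[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(diagnose((1, 1, 0)))   # the most similar past cases are flu patients -> "flu"
```

The other four tribes would attack the same problem differently (rules, neural networks, evolved programs, or Bayes’ theorem), which is exactly Domingos’ point about there being several competing learning paradigms.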
Each of these approaches has proven to work well on certain types of problems, but none of them qualifies as what Pedro Domingos calls “the master algorithm,” an algorithm that can solve any type of problem. Nevertheless, he thinks we are close to inventing one, perhaps by combining several of the existing algorithms. The thing to keep in mind, therefore, is that no existing machine-learning algorithm can solve all kinds of problems: you need a specific algorithm for a specific type of problem. The figure below summarizes the commonly used machine learning algorithms and their practical applications. (Fig. credits)
Machine Learning Algorithms
The Importance of Data
Another important aspect of how these ML algorithms work is that they need a huge amount of data to learn from; in fact, this is one of the biggest challenges of ML today. It may also explain the huge data-collection heist happening online, as marketing/PR is one of the big industries taking advantage of ML’s progress.
ML requires that an algorithm be exposed to huge amounts of data in order for biases to become insignificant. You may actually notice this need on a daily basis, as many online stores track you in order to give you personalized offers. Amazon or Netflix will never be able to determine your real tastes and provide an accurate personalized offer based on only two or three movies or books. Maybe you had a bad week or month and read or watched things just to cheer yourself up, or maybe you had some homework to do and thus read something that’s not your usual taste. ML algorithms need huge amounts of data to get as close to reality as possible, especially in today’s world where we are influenced by so many factors. Moreover, given the privacy awareness that keeps growing worldwide (see GDPR), data collection and profiling are becoming more and more difficult.
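To get a rough feel for why a handful of data points misleads, here is a small simulation in plain Python with assumed numbers: a viewer who truly likes sci-fi 70% of the time looks very different when judged from 3 ratings than from 3,000.

```python
import random

random.seed(42)
TRUE_PREFERENCE = 0.70   # the viewer's "real taste", unknown to the algorithm

for n in (3, 30, 300, 3000):
    # Simulate n independent thumbs-up/thumbs-down signals from this viewer
    liked = sum(random.random() < TRUE_PREFERENCE for _ in range(n))
    estimate = liked / n
    print(f"{n:>5} ratings -> estimated preference {estimate:.2f} "
          f"(error {abs(estimate - TRUE_PREFERENCE):.2f})")
```

With only a few ratings the estimate can be wildly off (a bad week of comfort movies dominates the picture); with thousands of ratings the noise averages out, which is why these systems are so hungry for data.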
Nevertheless, ML is all around us.
If you want a glimpse of it, just have a short talk with your mobile digital assistant (Siri, Alexa, Cortana, etc.). You will see how helpful they can be in some simple situations, but also how misleading they can become in others. If you need a short reminder of how badly things can go with AI, just have a look here and here. There is nothing creepier than a biased ML algorithm running on too little data. Getting a less useful list of movies or books may not produce such a negative outcome at a social level, but think of the scenario where governments rely on AI/ML to assist them in decision-making (e.g., building national public health policies that may affect millions of people).
A Harvard Kennedy School report written by Hila Mehr concludes that “AI is not a solution to government problems, it is one powerful tool to increase government efficiency” and that “despite the clear opportunities, AI will not solve systemic problems in government […]”.
The author’s research indicates that there are a lot of AI initiatives in the governmental sector, but they mostly fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. It could therefore be said that AI might improve efficiency, but it is still not ready to inform or make impactful decisions. Indeed, it may take some time until it reaches the desired level.
Another AI-related concern is the huge loss of jobs, as many of them will be replaced by algorithms and robots. The whole job-loss saga started in 2013 with a paper called “The Future of Employment: How Susceptible are Jobs to Computerisation?” by Carl Benedikt Frey and Michael A. Osborne from Oxford University. According to the paper, “around 47 percent of total US employment is in the high-risk category” and “as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence.” Also according to the paper, this change should happen within an “unspecified number of years,” so the authors did not commit to a timeline.
Recently, the Organisation for Economic Co-operation and Development (OECD) published another report, concluding that “14% of jobs in OECD countries […] are at high risk (probability of over 70%) of being automated based on current technological possibilities.” An additional 32% of jobs could face significant changes due to automation. Well, if you ask me, the results of the two studies are quite different.
Another important OECD finding is that “the risk of automation declines with the level of education” and “the risk peaks among teen jobs.” Although I couldn’t find any official statistics, the same thing probably happened in the previous three industrial revolutions (AI belongs to the fourth), from the 18th to the 20th century. When the steam engine and the telephone were introduced, a lot of people probably lost their jobs, but new ones were created. Another point of view, from a reputable research and advisory company, states in a report that “artificial intelligence will create more jobs than it eliminates.”
I tend to agree with most of the findings above, and I would conclude that there is no black-or-white answer; caution should be exercised before jumping to conclusions. AI will kill some jobs but will also create others. The real issue is how these outcomes are handled through public policy. Re-qualification will play a major role in the near future, and governments had better be prepared for it (see the concept of “flexicurity”).
Last but not least, there is also the scenario where AI will suddenly become conscious, decide that humans are a low-level form of intelligence, and take over the planet and eventually destroy us. This doomsday scenario can be found in multiple movies and, in my opinion, has nothing to do with reality. It’s pure fiction.
One thing that differentiates us from machines is our consciousness.
This is our state of awareness, through which we understand what is happening to us and why. Reasoning combined with consciousness gives you the opportunity to define yourself as an entity and to establish and pursue your own goals; more or less, it helps you define the meaning of your life. Many scientists have struggled to identify how exactly consciousness and reasoning are formed within our brains, but as far as I know, we are still at the early stages. That is why I find it difficult to believe that a bunch of metal and silicon exposed to electricity will suddenly become conscious, develop feelings and reasoning skills, and eventually decide the fate of humanity. The only plausible scenario I can think of is a very efficient, generalized AI specifically programmed to destroy humanity. But even for such a scenario to succeed, a lot of prerequisites must be met, and technically, we’re not at that level yet! Equally true is that, throughout its history, humanity has always tried to weaponize every technological breakthrough, so caution should be used in regulating AI to prevent it from being developed in the wrong direction.
AI is a great achievement for humanity, and it should be treated as such. Many technological innovations have produced disruptions before now, but we managed to “survive” them all, take advantage of them, and continue our progress. Since the first industrial revolution, technology has contributed enormously to the growth of humanity, and we are now living in times that are more prosperous, free, and enjoyable than any other in history. AI is just another advanced technology, and we should treat it that way. We have seen such disruptions before; we know how to handle them; we just need the willpower.