What Artificial Intelligence Still Can’t Do
Today's artificial intelligence remains a long way from the supple, dynamic intelligence of AI characters from popular fiction, like The Jetsons.
Modern artificial intelligence is capable of wonders.
It can produce breathtaking original content: poetry, prose, images, music, human faces. It can diagnose some medical conditions more accurately than a human physician. Last year it produced a solution to the “protein folding problem,” a grand challenge in biology that has stumped researchers for half a century.
Yet today’s AI still has fundamental limitations. Relative to what we would expect from a truly intelligent agent—relative to that original inspiration and benchmark for artificial intelligence, human cognition—AI has a long way to go.
Critics like to point to these shortcomings as evidence that the pursuit of artificial intelligence is misguided or has failed. The better way to view them, though, is as inspiration: as an inventory of the challenges that will be important to address in order to advance the state of the art in AI.
It is helpful to take a step back and frankly assess the strengths and weaknesses of today’s AI in order to better focus resources and research efforts going forward. In each of the areas discussed below, promising work is already underway at the frontiers of the field to make the next generation of artificial intelligence more high-performing and robust.
(For those of you who are true students of the history of artificial intelligence: yes, this article’s title is a hat tip to Hubert Dreyfus’ classic What Computers Still Can’t Do. Originally published in 1972, this prescient, provocative book remains relevant today.)
With that, on to the list. Today, mainstream artificial intelligence still can’t:
1) Use “common sense.”
Consider the following prompt: A man went to a restaurant. He ordered a steak. He left a big tip.
If asked what the man ate in this scenario, a human would have no problem giving the correct answer—a steak. Yet today’s most advanced artificial intelligence struggles with prompts like this. How can this be?
Notice that this few-sentence blurb never directly states that the man ate steak. The reason that humans automatically grasp this fact anyway is that we possess a broad body of basic background knowledge about how the world works: for instance, that people eat at restaurants, that before they eat a meal at a restaurant they order it, that after they eat they leave a tip. We refer to this vast, shared, usually unspoken body of everyday knowledge as “common sense.”
There are a literally infinite number of facts about how the world works that humans come to understand through lived experience. A person who is excited to eat a large meal at 7 pm will be less excited to eat a second meal at 8 pm. If I ask you for some milk, I would prefer to get it in a glass rather than in a shoe. It is reasonable for your pet fish to be in a tank of water but problematic for your phone to be in a tank of water.
As AI researcher Leora Morgenstern put it: “What you learn when you’re two or four years old, you don’t really ever put down in a book.”
Humans’ “common sense” is a consequence of the fact that we develop persistent mental representations of the objects, people, places and other concepts that populate our world—what they’re like, how they behave, what they can and cannot do.
Deep neural networks do not form such mental models. They do not possess discrete, semantically grounded representations of, say, a house or a cup of coffee. Instead, they rely on statistical relationships in raw data to generate insights that humans find useful.
For many tasks, most of the time, this statistical approach works remarkably well. But it is not entirely reliable. It leaves today’s AI vulnerable to basic errors that no human would make.
There is no shortage of examples that expose deep learning’s lack of common sense. For instance, Silicon Valley entrepreneur Kevin Lacker asked GPT-3, OpenAI’s state-of-the-art language model, the following: “Which is heavier, a toaster or a pencil?”
To a human, even a small child, the answer is obvious: a toaster.
GPT-3’s response: “A pencil is heavier than a toaster.”
Humans possess mental models of these objects; we understand what a toaster is and what a pencil is. In our mind’s eye, we can picture each object, envision its shape and size, imagine what it would feel like to hold it in our hands, and definitively conclude that a toaster weighs more.
By contrast, in order to answer a question like this, GPT-3 relies on statistical patterns captured in its training data (broad swaths of text from the internet). Because there is evidently not much discussion on the internet about the relative weights of toasters and pencils, GPT-3 is unable to grasp this basic fact about the world.
“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences,” says DARPA’s Dave Gunning. “This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future.”
One approach to instilling common sense into AI systems is to manually construct a database of all of the everyday facts about the world that an intelligent system should know. This approach has been tried numerous times over the years. The most breathtakingly ambitious of these attempts is a project called Cyc, which started in 1984 and continues to the present day.
For over thirty-five years, AI researcher Doug Lenat and a small team at Cyc have devoted themselves to digitally codifying all of the world’s commonsense knowledge into a set of rules. These rules include things like: “you can’t be in two places at the same time,” “you can’t pick something up unless you’re near it,” and “when drinking a cup of coffee, you hold the open end up.”
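To make the shape of this approach concrete, here is a deliberately tiny sketch in Python of what hand-coding commonsense rules looks like. It is purely illustrative: Cyc's actual knowledge representation (CycL) is far more sophisticated, and the rule names and exception text below are invented for the example.

```python
# Purely illustrative: a toy, hand-coded "commonsense" rulebook in the spirit
# of projects like Cyc. This is NOT Cyc's actual representation (CycL).
RULES = {
    "two_places": "You can't be in two places at the same time.",
    "pickup": "You can't pick something up unless you're near it.",
    "coffee": "When drinking a cup of coffee, you hold the open end up.",
}

# The trouble starts immediately: nearly every rule needs qualifiers.
EXCEPTIONS = {
    "two_places": ["...unless 'you' are something distributed, like a company."],
    "pickup": ["...unless you use a tool, a robot arm, or ask someone else."],
    "coffee": ["...unless the cup has a lid and you drink through a straw."],
}

def explain(rule_id: str) -> str:
    """Return a rule together with its (ever-growing) list of exceptions."""
    return " ".join([RULES[rule_id], *EXCEPTIONS.get(rule_id, [])])

for rule_id in RULES:
    print(explain(rule_id))
```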
As of 2017, it was estimated that the Cyc database contained close to 25 million rules and that Lenat’s team had spent over 1,000 person-years on the project.
Yet Cyc has not led to artificial intelligence with common sense.
The basic problem that Cyc and similar efforts run into is the unbounded complexity of the real world. For every commonsense “rule” one can think of, there is an exception or a nuance that itself must be articulated. These tidbits multiply endlessly. Somehow, the human mind is able to grasp and manage this wide universe of knowledge that we call common sense—and however it does it, it is not through a brute-force, hand-crafted knowledge base.
“Common sense is the dark matter of artificial intelligence,” says Oren Etzioni, CEO of the Allen Institute for AI. “It’s a little bit ineffable, but you see its effects on everything.”
More recent efforts have sought to harness the power of deep learning and transformers to give AI more robust reasoning capabilities. But the commonsense problem in AI remains far from solved.
“Large language models have proven themselves to have incredible capabilities across a wide range of tasks in natural language processing, but commonsense reasoning is a domain in which these models continue to underperform compared to humans,” said Aidan Gomez, CEO and cofounder at Cohere, a cutting-edge NLP startup based in Toronto. Gomez is a co-author of the landmark 2017 research paper that introduced the transformer architecture. “Logical rules and relations are difficult for the current generation of transformer-based language models to learn from data in a way that generalizes. A solution to this challenge will likely first come from systems that are in some way hybrid.”
2) Learn continuously and adapt on the fly.
Today, the typical AI development process is divided into two distinct phases: training and deployment.
During training, an AI model ingests a static pre-existing dataset in order to learn to perform a certain task. Upon completion of the training phase, a model’s parameters are fixed. The model is then put into deployment, where it generates insights about novel data based on what it learned from the training data.
If we want to update the model based on new data or changing circumstances, we have to retrain it offline with the updated dataset (generally a computationally- and time-intensive process) and then redeploy it.
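As a rough sketch of that batch workflow, here is what the cycle looks like with a generic scikit-learn model on synthetic data. The data and the choice of model are placeholders meant only to show the train, freeze, deploy, retrain rhythm, not a recipe.

```python
# A minimal sketch of the batch train/deploy cycle described above,
# using scikit-learn and synthetic, made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Training phase: learn from a static, pre-existing dataset ---
X_train = rng.normal(size=(1000, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)   # parameters are now fixed

# --- Deployment phase: generate predictions on novel data ---
X_new = rng.normal(size=(10, 5))
predictions = model.predict(X_new)

# --- Updating the model means going back offline and retraining ---
X_extra = rng.normal(size=(500, 5))
y_extra = (X_extra[:, 0] + X_extra[:, 1] > 0).astype(int)
X_all = np.vstack([X_train, X_extra])
y_all = np.concatenate([y_train, y_extra])
model = LogisticRegression().fit(X_all, y_all)       # retrain, then redeploy
```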
This batch-based training/deployment paradigm is so deeply ingrained in modern AI practice that we don’t often stop to consider its differences and drawbacks relative to how humans learn.
Real-world environments entail a continuous stream of incoming data. New information becomes available incrementally; circumstances change over time, sometimes abruptly. Humans are able to dynamically and smoothly incorporate this continuous input from their environment, adapting their behavior as they go. In the parlance of machine learning, one could say that humans “train” and “deploy” in parallel and in real-time. Today’s AI lacks this suppleness.
As a well-known research paper on the topic summarized: “The ability to continually learn over time by accommodating new knowledge while retaining previously learned experiences is referred to as continual or lifelong learning. Such a continuous learning task has represented a long-standing challenge for neural networks and, consequently, for the development of artificial intelligence.”
Imagine sending a robot to explore a distant planet. After it embarks from Earth, the robot is likely to encounter novel situations that its human designers could not have anticipated or trained it for ahead of time. We would want the robot to be able to fluidly adjust its behavior in response to these novel stimuli and contexts, even though they were not reflected in its initial training data, without the need for offline retraining. Being able to continuously adapt in this way is an essential part of being truly autonomous.
Today’s conventional deep learning methods do not accommodate this type of open-ended learning.
But promising work is being done in this field, which is variously referred to as continuous learning, continual learning, online learning, lifelong learning and incremental learning.
The primary obstacle to continuous learning in AI—and the reason why it has been so difficult to achieve to date—is a phenomenon known as “catastrophic forgetting.” In a nutshell, catastrophic forgetting occurs when new information interferes with or altogether overwrites earlier learnings in a neural network. The complex puzzle of how to preserve existing knowledge while at the same time incorporating new information—something that humans do effortlessly—has been a challenge for continuous learning researchers for years.
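The toy sketch below, using scikit-learn's incremental partial_fit interface on synthetic data, illustrates both halves of the problem: train sequentially on a second task and accuracy on the first task collapses; mix in a small "replay" buffer of old examples and much of it is preserved. It is a caricature of rehearsal-based continual learning, not a production technique, and the tasks and numbers are invented.

```python
# Toy illustration of catastrophic forgetting and a simple rehearsal fix.
# Synthetic data; real continual-learning methods (rehearsal, EWC,
# parameter isolation, etc.) are considerably more involved.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(center0, center1, label0, label1, n=300):
    """Two Gaussian blobs with the given centers and class labels."""
    X = np.vstack([rng.normal(size=(n, 2)) + center0,
                   rng.normal(size=(n, 2)) + center1])
    y = np.array([label0] * n + [label1] * n)
    return X, y

X_a, y_a = make_task([-4, 0], [4, 0], 0, 1)   # task A: classes 0 vs 1
X_b, y_b = make_task([0, -4], [0, 4], 2, 3)   # task B, seen later: classes 2 vs 3

# Sequential training with no memory of task A: the old classes get overwritten.
clf = SGDClassifier(random_state=0)
clf.partial_fit(X_a, y_a, classes=[0, 1, 2, 3])
clf.partial_fit(X_b, y_b)
print("no replay,  task A accuracy:", clf.score(X_a, y_a))

# Rehearsal: replay a small buffer of stored task-A examples alongside task B.
keep = rng.choice(len(X_a), size=100, replace=False)
clf2 = SGDClassifier(random_state=0)
clf2.partial_fit(X_a, y_a, classes=[0, 1, 2, 3])
clf2.partial_fit(np.vstack([X_b, X_a[keep]]), np.concatenate([y_b, y_a[keep]]))
print("with replay, task A accuracy:", clf2.score(X_a, y_a))
```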
Recent progress in continuous learning has been encouraging. The technology has even begun to make the leap from academic research to commercial viability. As one example, Bay Area-based startup Lilt uses continuous learning in production today as part of its enterprise-grade language translation platform.
“Online learning techniques allow us to implement a stream-based learning process whereby our model trains immediately when new labels from human reviewers become available, thus providing increasingly accurate translations,” said Lilt CEO Spence Green. “This means that we really have no concept of periodic model retraining and deployment—it is an ongoing and open-ended process.”
In the years ahead, expect continuous learning to become an increasingly important component of artificial intelligence architectures.
3) Understand cause and effect.
Today’s machine learning is at its core a correlative tool. It excels at identifying subtle patterns and associations in data. But when it comes to understanding the causal mechanisms—the real-world dynamics—that underlie those patterns, today’s AI is at a loss.
To give a simple example: fed the right data, a machine learning model would have no problem identifying that roosters crow when the sun rises. But it would be unable to establish whether the rooster crowing causes the sun to rise, or vice versa; indeed, it is not equipped to even understand the terms of this distinction.
Going back to its inception, the field of artificial intelligence—and indeed, the field of statistics more broadly—has been architected to understand associations rather than causes. This is reflected in the basic mathematical symbols we use.
“The language of algebra is symmetric: If X tells us about Y, then Y tells us about X,” says AI luminary Judea Pearl, who for years has been at the forefront of the movement to build AI that understands causation. “Mathematics has not developed the asymmetric language required to capture our understanding that if X causes Y that does not mean that Y causes X.”
This is a real problem for AI. Causal reasoning is an essential part of human intelligence, shaping how we make sense of and interact with our world: we know that dropping a vase will cause it to shatter, that drinking coffee will make us feel energized, that exercising regularly will make us healthier.
Until artificial intelligence can reason causally, it will have trouble fully understanding the world and communicating with us on our terms.
“Our minds build causal models and use these models to answer arbitrary queries, while the best AI systems are far from emulating these capabilities,” said NYU professor Brenden Lake.
An understanding of cause and effect would open up vast new vistas for artificial intelligence that today remain out of reach. Once AI can reason in causal terms (“mosquitoes cause malaria”) rather than merely associative terms (“mosquitoes and malaria tend to co-occur”), it can begin to generate counterfactual scenarios (“if we take steps to keep mosquitoes away from people, that could reduce the incidence of malaria”) that can inform real-world interventions and policy changes.
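One way to see the difference is to simulate a toy "structural causal model" in the spirit of Pearl's framework, in which mosquito exposure causally drives malaria. All of the probabilities below are invented for illustration.

```python
# A toy structural causal model: mosquito exposure causes malaria.
# All probabilities are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def simulate(mosquito_control=False):
    # Exposure to mosquitoes; an intervention (nets, spraying) reduces it.
    p_exposure = 0.1 if mosquito_control else 0.6
    mosquitoes = rng.random(N) < p_exposure
    # Malaria depends causally on exposure, plus a small background rate.
    malaria = rng.random(N) < np.where(mosquitoes, 0.30, 0.01)
    return mosquitoes, malaria

mosquitoes, malaria = simulate()
print("observed correlation:", round(float(np.corrcoef(mosquitoes, malaria)[0, 1]), 2))
print("P(malaria), no intervention:", round(float(malaria.mean()), 3))

# Counterfactual-style question: what if we intervene to keep mosquitoes away?
_, malaria_do = simulate(mosquito_control=True)
print("P(malaria), do(reduce mosquitoes):", round(float(malaria_do.mean()), 3))

# Intervening on the effect (treating malaria) would leave mosquito exposure
# unchanged: the association is symmetric, the causal mechanism is not.
```

The observed association is symmetric, but only an intervention on the cause changes the effect, which is exactly the distinction a purely correlative model cannot express.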
In Pearl’s view, this is nothing less than the cornerstone of scientific thought: the ability to form and test hypotheses about the effect that an intervention will have in the world.
As Pearl puts it: “If we want machines to reason about interventions (‘What if we ban cigarettes?’) and introspection (‘What if I had finished high school?’), we must invoke causal models. Associations are not enough—and this is a mathematical fact, not opinion.”
There is growing recognition of the importance of causal understanding to more robust machine intelligence. Leading AI researchers including Yoshua Bengio, Josh Tenenbaum and Gary Marcus have made this a focus of their work.
Developing AI that understands cause and effect remains a thorny, unsolved challenge. Making progress on this challenge will be a key unlock to the next generation of more sophisticated artificial intelligence.
4) Reason ethically.
The story of Microsoft’s chatbot Tay is by now a well-known cautionary tale.
In 2016, Microsoft debuted an AI personality on Twitter named Tay. The idea was for Tay to engage in online conversations with Twitter users as a fun, interactive demonstration of Microsoft’s NLP technology. It did not go well.
Within hours, Internet trolls had gotten Tay to tweet a wide range of offensive messages: for instance, “Hitler was right” and “I hate feminists and they should all die and burn in hell.” Microsoft hastily removed the bot from the Internet.
The basic problem with Tay was not that she was immoral; it was that she was altogether amoral.
Tay—like most AI systems today—lacked any real conception of “right” and “wrong.” She did not grasp that what she was saying was unacceptable; she did not express racist, sexist ideas out of malice. Rather, the chatbot’s comments were the output of an ultimately mindless statistical analysis. Tay recited toxic statements as a result of toxic language in the training data and on the Internet—with no ability to evaluate the ethical significance of those statements.
The challenge of building AI that shares, and reliably acts in accordance with, human values is a profoundly complex dimension of developing robust artificial intelligence. It is referred to as the alignment problem.
As we entrust machine learning systems with more and more real-world responsibilities—from granting loans to making hiring decisions to reviewing parole applications—solving the alignment problem will become an increasingly high-stakes issue for society. Yet it is a problem that defies straightforward resolution.
We might start by establishing specific rules that we want our AI systems to follow. In the Tay example, this could include listing out derogatory words and offensive topics and instructing the chatbot to categorically avoid these.
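In code, that first instinct amounts to little more than a blocklist, along these lines (the terms are placeholders, and this is illustrative only, not how any production system should be built):

```python
# A minimal sketch of the naive, rule-based filter described above.
# The blocked terms are placeholders; this is illustrative only.
BLOCKED_TERMS = {"<slur>", "<violent threat>", "<offensive topic>"}

def is_allowed(candidate_reply: str) -> bool:
    """Reject a candidate chatbot reply that contains any blocked term."""
    lowered = candidate_reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# The obvious failure mode: plenty of harmful statements contain no blocked
# word at all, and plenty of benign ones trip the filter by accident.
```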
Yet, as with the Cyc project discussed above, this rule-based approach only gets us so far. Language is a powerful, supple tool: bad words are just the tip of the iceberg when it comes to the harm that language can inflict. It is impossible to manually catalog a set of rules that, taken collectively, would guarantee ethical behavior—for a conversational chatbot or any other intelligent system.
Part of the problem is that human values are nuanced, amorphous, at times contradictory; they cannot be reduced to a set of definitive maxims. This is precisely why philosophy and ethics have been such rich, open-ended fields of human scholarship for centuries.
In the words of AI scholar Brian Christian, who recently wrote a book on the topic: “As machine-learning systems grow not just increasingly pervasive but increasingly powerful, we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’: we conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realize our instructions are imprecise or incomplete—lest we get, in some clever, horrible way, precisely what we asked for.”
How can we hope to build artificial intelligence systems that behave ethically, that possess a moral compass consistent with our own?
The short answer is that we don’t know. But perhaps the most promising vein of work on this topic focuses on building AI that does its best to figure out what humans value based on how we behave, and that then aligns itself with those values.
This is the premise of inverse reinforcement learning, an approach formulated in the early 2000s by Stuart Russell, Andrew Ng, Pieter Abbeel and others.
In reinforcement learning, an AI agent learns which actions to take in order to maximize utility given a particular “reward function.” Inverse reinforcement learning (IRL), as its name suggests, flips this paradigm on its head: by studying human behavior, which the AI agent assumes reflects humans’ value system, the AI agent does its best to determine what that value system (reward function) is. It can then internalize this reward function and behave accordingly.
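To convey the flavor only, here is a drastically simplified caricature of the IRL idea in Python: observe expert behavior, infer a reward signal that would explain it, then act to maximize that inferred reward. Real IRL algorithms (maximum-margin IRL, maximum-entropy IRL and their successors) involve far more machinery, and the "trajectories" and states below are invented.

```python
# A drastically simplified caricature of inverse reinforcement learning.
from collections import Counter

# Hypothetical logs of expert "trajectories": sequences of states a human
# visited in a tiny world. The expert clearly prefers ending up at "home".
expert_trajectories = [
    ["office", "street", "home"],
    ["gym", "street", "home"],
    ["office", "cafe", "home"],
]

# Step 1 (the "inverse" part): infer a reward signal from observed behavior.
# Here we simply score states by how often the expert chooses to visit them.
visits = Counter(state for traj in expert_trajectories for state in traj)
total = sum(visits.values())
inferred_reward = {state: count / total for state, count in visits.items()}

# Step 2 (ordinary RL, trivialized): act to maximize the inferred reward,
# e.g. by preferring the highest-reward reachable state.
def choose_next_state(candidates):
    return max(candidates, key=lambda s: inferred_reward.get(s, 0.0))

print(inferred_reward)
print(choose_next_state(["office", "home", "cafe"]))  # -> "home"
```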
A related approach, known as cooperative inverse reinforcement learning, builds on the principles of IRL but seeks to make the transmission of values from human to AI more collaborative and interactive.
As a leading paper on cooperative inverse reinforcement learning explains: “For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans....We propose that value alignment should be formulated as a cooperative and interactive reward maximization process.”
In a similar spirit, AI theorist Eliezer Yudkowsky has advocated for an approach to AI ethics that he terms “coherent extrapolated volition.” The basic idea is to design artificial intelligence systems that learn to act in our best interests according not to what we presently think we want, but rather according to what an idealized version of ourselves would value.
In Yudkowsky’s words: “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”
As the real-world dangers of poorly designed AI become more prominent—from algorithmic bias to facial recognition abuses—building artificial intelligence that can reason ethically is becoming an increasingly important priority for AI researchers and the broader public. As artificial intelligence becomes ubiquitous throughout society in the years ahead, this may well prove to be one of the most urgent technology problems we face.
Learn the Basics of Artificial Intelligence
Learning Objectives
After completing this unit, you’ll be able to:
Define machine learning and artificial intelligence (AI).
Understand how bias affects AI.
Give a real-life example of how AI can be biased.
Introduction to AI
Artificial intelligence can augment human intelligence, amplify human capabilities, and provide actionable insights that drive better outcomes for our employees, customers, partners, and communities.
We believe that the benefits of AI should be accessible to everyone, not just the creators. It’s not enough to deliver just the technological capability of AI. We also have an important responsibility to ensure that our customers can use our AI in a safe and inclusive manner for all. We take that responsibility seriously and are committed to providing our employees, customers, partners and community with the tools they need to develop and use AI safely, accurately, and ethically.
How Is AI Different from Machine Learning?
Not familiar with AI? Before completing this module, check out the Artificial Intelligence for Business module (part of the Get Smart with Salesforce Einstein trail) to learn what it is and how it can transform your relationship with your customers.
The terms machine learning and artificial intelligence are often used interchangeably, but they don’t mean the same thing. Before we get into the nitty-gritty of creating AI responsibly, here is a reminder of what these terms mean.
Machine Learning (ML)
When we talk about machine learning, we’re referring to a specific technique that allows a computer to “learn” from examples without having been explicitly programmed with step-by-step instructions. Currently, machine learning algorithms are geared toward answering a single type of question well. For that reason, machine learning algorithms are at the forefront of efforts to diagnose diseases, predict stock market trends, and recommend music.
Artificial Intelligence (AI)
Artificial intelligence is an umbrella term that refers to efforts to teach computers to perform complex tasks and behave in ways that give the appearance of human agency. Often they do this work by taking cues from the environment they’re embedded in. AI includes everything from robots that play chess to chatbots that can respond to customer support questions to self-driving cars that can intelligently navigate real-world traffic.
AI can be composed of algorithms. An algorithm is a process or set of rules that a computer can execute. AI algorithms can learn from data. They can recognize patterns in the data provided to generate rules or guidelines to follow. Examples of data include historical inputs and outputs (for example, input: all email; output: which emails are spam) or mappings of A to B (for example, a word in English mapped to its equivalent in Spanish). When you have trained an algorithm with training data, you have a model. The data used to train a model is called the training dataset. The data used to test how well a model is performing is called the test dataset. Both training datasets and test datasets consist of data with inputs and expected outputs. You should evaluate a model with a different but equivalent set of data, the test dataset, to check whether it is actually doing what you intended.
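As a minimal sketch of those terms, here is what training a model on a training dataset and evaluating it on a held-out test dataset looks like with scikit-learn. The tiny spam examples and the choice of algorithm are invented for illustration.

```python
# Train on a training dataset, then evaluate on a separate test dataset.
# The emails and labels are made up; any text classifier would do here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "limited offer click here", "cheap meds online",
    "meeting moved to 3pm", "lunch tomorrow?", "quarterly report attached",
]
labels = [1, 1, 1, 0, 0, 0]   # input: emails; expected output: spam (1) or not (0)

# Hold some labeled data back as the test dataset.
X_train, X_test, y_train, y_test = train_test_split(
    emails, labels, test_size=0.33, random_state=0)

model = make_pipeline(CountVectorizer(), MultinomialNB())  # algorithm + training data -> model
model.fit(X_train, y_train)

print("accuracy on unseen test data:", model.score(X_test, y_test))
```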
Bias Challenges in AI
So far, we've discussed the broad ethical implications of developing technology. Now, let's turn our attention to AI. AI poses unique challenges when it comes to bias and making fair decisions.
Opacity
We don’t always know why a model is making a specific prediction. Frank Pasquale, author of The Black Box Society, describes this lack of transparency as the black box phenomenon. While companies that create AI can explain the processes behind their systems, it’s harder for them to tell what’s happening in real time and in what order, including where bias can be present in the model.
In one effort to promote greater transparency and understand how a deep learning-based image recognition algorithm recognized objects, Google reverse engineered it so that instead of spotting objects in photos, it generated them.
In one case, when the AI was asked to generate an image of a dumbbell, it created an image of a dumbbell with a hand and an arm attached because it categorized those objects as one. The majority of the training data it was provided had images of someone holding the dumbbell, but not of dumbbells in isolation. Based on the image output, the engineers realized that the algorithm needed additional training data that included more than a single dumbbell.
Speed, Scale, and Access to Large Datasets
AI systems are trained to optimize for particular outcomes. AI picks up bias in the training data and uses it to model for future predictions. Because it’s difficult if not impossible to know why a model makes the prediction that it does, it’s hard to pinpoint how the model is biased. When models make predictions based on biased data, there can be major, damaging consequences.
Let’s take one example highlighted by Oscar Schwartz in his series for the Institute of Electrical and Electronics Engineers on the Untold History of AI. In 1979, St. George’s Medical School in London began using an algorithm to complete the first-round screening of applicants to their program. This algorithm, developed by a dean of the school, was meant to not only optimize this time-consuming process by mimicking human assessors, but to also improve upon it by applying the same evaluation process to all applicants. The system made the same choices as the human assessors 90–95 percent of the time. In fact, it codified and entrenched their biases by grouping applicants as “Caucasian” and “non-Caucasian” based on their names and places of birth, and assigning significantly lower scores to people with non-European names. By the time the system was comprehensively audited, hundreds of applicants had been denied interviews.
Machine learning techniques have improved since 1979. But it’s even more important now, as techniques become more opaque, that these tools are created inclusively and transparently. Otherwise, entrenched biases can unintentionally restrict access to educational and economic opportunities for certain people. AI is not magic; it learns based on the data you give it. If your dataset is biased, your models will amplify that bias.
Developers, designers, researchers, product managers, writers—everyone involved in the creation of AI systems—should make sure not to perpetuate harmful societal biases. (As we see in the next module, not every bias is necessarily harmful.) Teams need to work together from the beginning to build ethics into their AI products, and conduct research to understand the social context of their product. This can involve interviewing not only potential users of the system, but people whose lives are impacted by the decisions the system makes. We discuss what that looks like later in this module.
Machine learning, explained
This pervasive and powerful form of artificial intelligence is changing every industry. Here’s what you need to know about the potential and limitations of machine learning and how it’s being used.
Machine learning is behind chatbots and predictive text, language translation apps, the shows Netflix suggests to you, and how your social media feeds are presented. It powers autonomous vehicles and machines that can diagnose medical conditions based on images.
When companies today deploy artificial intelligence programs, they are most likely using machine learning — so much so that the terms are often used interchangeably, and sometimes ambiguously. Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without explicitly being programmed.
“In just the last five or 10 years, machine learning has become a critical way, arguably the most important way, most parts of AI are done,” said MIT Sloan professor Thomas W. Malone, the founding director of the MIT Center for Collective Intelligence. “So that's why some people use the terms AI and machine learning almost as synonymous … most of the current advances in AI have involved machine learning.”
With the growing ubiquity of machine learning, everyone in business is likely to encounter it and will need some working knowledge about this field. A 2020 Deloitte survey found that 67% of companies are using machine learning, and 97% are using or planning to use it in the next year.
From manufacturing to retail and banking to bakeries, even legacy companies are using machine learning to unlock new value or boost efficiency. “Machine learning is changing, or will change, every industry, and leaders need to understand the basic principles, the potential, and the limitations,” said MIT computer science professor Aleksander Madry, director of the MIT Center for Deployable Machine Learning.
While not everyone needs to know the technical details, they should understand what the technology does and what it can and cannot do, Madry added. “I don’t think anyone can afford not to be aware of what’s happening.”
That includes being aware of the social, societal, and ethical implications of machine learning. “It's important to engage and begin to understand these tools, and then think about how you're going to use them well. We have to use these [tools] for the good of everybody,” said Dr. Joan LaRovere, MBA ’16, a pediatric cardiac intensive care physician and co-founder of the nonprofit The Virtue Foundation. “AI has so much potential to do good, and we need to really keep that in our lenses as we're thinking about this. How do we use this to do good and better the world?”
What is machine learning?
Machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.
The goal of AI is to create computer models that exhibit “intelligent behaviors” like humans, according to Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. This means machines that can recognize a visual scene, understand a text written in natural language, or perform an action in the physical world.
Machine learning is one way to use AI. It was defined in the 1950s by AI pioneer Arthur Samuel as “the field of study that gives computers the ability to learn without explicitly being programmed.”
The definition holds true, according to Mikey Shulman, a lecturer at MIT Sloan and head of machine learning at Kensho, which specializes in artificial intelligence for the finance and U.S. intelligence communities. He compared the traditional way of programming computers, or “software 1.0,” to baking, where a recipe calls for precise amounts of ingredients and tells the baker to mix for an exact amount of time. Traditional programming similarly requires creating detailed instructions for the computer to follow.
But in some cases, writing a program for the machine to follow is time-consuming or impossible, such as training a computer to recognize pictures of different people. While humans can do this task easily, it’s difficult to tell a computer how to do it. Machine learning takes the approach of letting computers learn to program themselves through experience.
Machine learning starts with data — numbers, photos, or text, like bank transactions, pictures of people or even bakery items, repair records, time series data from sensors, or sales reports. The data is gathered and prepared to be used as training data, or the information the machine learning model will be trained on. The more data, the better the program.
From there, programmers choose a machine learning model to use, supply the data, and let the computer model train itself to find patterns or make predictions. Over time the human programmer can also tweak the model, including changing its parameters, to help push it toward more accurate results. (Research scientist Janelle Shane’s website AI Weirdness is an entertaining look at how machine learning algorithms learn and how they can get things wrong — as happened when an algorithm tried to generate recipes and created Chocolate Chicken Chicken Cake.)
Some data is held out from the training data to be used as evaluation data, which tests how accurate the machine learning model is when it is shown new data. The result is a model that can be used in the future with different sets of data.
Successful machine learning algorithms can do different things, Malone wrote in a recent research brief about AI and the future of work that was co-authored by MIT professor and CSAIL director Daniela Rus and Robert Laubacher, the associate director of the MIT Center for Collective Intelligence.
“The function of a machine learning system can be descriptive, meaning that the system uses the data to explain what happened; predictive, meaning the system uses the data to predict what will happen; or prescriptive, meaning the system will use the data to make suggestions about what action to take,” the researchers wrote.
There are three subcategories of machine learning, each illustrated in the short code sketch below:
Supervised machine learning models are trained with labeled data sets, which allow the models to learn and grow more accurate over time. For example, an algorithm would be trained with pictures of dogs and other things, all labeled by humans, and the machine would learn ways to identify pictures of dogs on its own. Supervised machine learning is the most common type used today.
In unsupervised machine learning, a program looks for patterns in unlabeled data. Unsupervised machine learning can find patterns or trends that people aren’t explicitly looking for. For example, an unsupervised machine learning program could look through online sales data and identify different types of clients making purchases.
Reinforcement machine learning trains machines through trial and error to take the best action by establishing a reward system. Reinforcement learning can train models to play games or train autonomous vehicles to drive by telling the machine when it made the right decisions, which helps it learn over time what actions it should take.
Source: Thomas Malone | MIT Sloan. See: https://bit.ly/3gvRho2, Figure 2.
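The sketch below shows a toy instance of each of the three subcategories, using scikit-learn and NumPy on made-up data; reinforcement learning is reduced to a two-armed bandit just to show the reward-driven loop.

```python
# Tiny, synthetic illustrations of supervised, unsupervised, and
# reinforcement learning. All data here is invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Supervised: labeled examples in, a predictor out.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)                       # human-provided labels
clf = DecisionTreeClassifier().fit(X, y)
print("supervised prediction:", clf.predict([[1.5, 0.0]]))

# Unsupervised: no labels; the algorithm looks for structure on its own.
purchases = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
clusters = KMeans(n_clusters=2, random_state=0).fit_predict(purchases)
print("unsupervised cluster sizes:", np.bincount(clusters))

# Reinforcement: learn from trial, error, and reward (a two-armed bandit).
values, counts = np.zeros(2), np.zeros(2)
for _ in range(1000):
    arm = rng.integers(2) if rng.random() < 0.1 else int(values.argmax())
    reward = rng.random() < (0.3 if arm == 0 else 0.7)   # arm 1 pays off more often
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running average of reward
print("reinforcement: estimated arm values:", values.round(2))
```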
In the Work of the Future brief, Malone noted that machine learning is best suited for situations with lots of data — thousands or millions of examples, like recordings from previous conversations with customers, sensor logs from machines, or ATM transactions. For example, Google Translate was possible because it “trained” on the vast amount of information on the web, in different languages.
In some cases, machine learning can gain insight or automate decision-making in cases where humans would not be able to, Madry said. “It may not only be more efficient and less costly to have an algorithm do this, but sometimes humans just literally are not able to do it,” he said.
Google search is an example of something that humans can do, but never at the scale and speed at which the Google models are able to show potential answers every time a person types in a query, Malone said. “That’s not an example of computers putting people out of work. It's an example of computers doing things that would not have been remotely economically feasible if they had to be done by humans.”
Machine learning is also associated with several other artificial intelligence subfields:
Natural language processing
Natural language processing is a field of machine learning in which machines learn to understand natural language as spoken and written by humans, instead of the data and numbers normally used to program computers. This allows machines to recognize language, understand it, and respond to it, as well as create new text and translate between languages. Natural language processing enables familiar technology like chatbots and digital assistants like Siri or Alexa.
Neural networks
Neural networks are a commonly used, specific class of machine learning algorithms. Artificial neural networks are modeled on the human brain, in which thousands or millions of processing nodes are interconnected and organized into layers.
In an artificial neural network, cells, or nodes, are connected, with each cell processing inputs and producing an output that is sent to other neurons. Labeled data moves through the nodes, or cells, with each cell performing a different function. In a neural network trained to identify whether a picture contains a cat or not, the different nodes would assess the information and arrive at an output that indicates whether a picture features a cat.
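The sketch below is a bare-bones NumPy version of that picture: a stack of layers, each combining its inputs with weights and passing the result to the next layer, ending in a single "cat score." The weights here are random and untrained, so the score is meaningless; in a real system they would be learned from labeled images.

```python
# A bare-bones, untrained feed-forward network in NumPy, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    """One layer of units: weighted sum of inputs, then a nonlinearity (ReLU)."""
    return np.maximum(0, x @ weights + biases)

image = rng.random(64)                                 # a fake 8x8 image, flattened

# Two hidden layers feeding a single output unit ("cat" vs "not cat").
w1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

h1 = layer(image, w1, b1)          # early layers might respond to edges and textures
h2 = layer(h1, w2, b2)             # later layers to combinations of features
logit = h2 @ w3 + b3
cat_probability = 1 / (1 + np.exp(-logit))             # squash to a 0-1 score
print("cat score:", float(cat_probability))
```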
Deep learning
Deep learning networks are neural networks with many layers. The layered network can process extensive amounts of data and determine the “weight” of each link in the network — for example, in an image recognition system, some layers of the neural network might detect individual features of a face, like eyes, nose, or mouth, while another layer would be able to tell whether those features appear in a way that indicates a face.
Like neural networks, deep learning is modeled on the way the human brain works and powers many machine learning uses, like autonomous vehicles, chatbots, and medical diagnostics.
“The more layers you have, the more potential you have for doing complex things well,” Malone said.
Deep learning requires a great deal of computing power, which raises concerns about its economic and environmental sustainability.
How businesses are using machine learning
Machine learning is the core of some companies’ business models, like in the case of Netflix’s suggestions algorithm or Google’s search engine. Other companies are engaging deeply with machine learning, though it’s not their main business proposition.
Others are still trying to determine how to use machine learning in a beneficial way. “In my opinion, one of the hardest problems in machine learning is figuring out what problems I can solve with machine learning,” Shulman said. “There’s still a gap in the understanding.”
In a 2018 paper, researchers from the MIT Initiative on the Digital Economy outlined a 21-question rubric to determine whether a task is suitable for machine learning. The researchers found that no occupation will be untouched by machine learning, but no occupation is likely to be completely taken over by it. The way to unleash machine learning success, the researchers found, was to reorganize jobs into discrete tasks, some of which can be done by machine learning, and others that require a human.
Companies are already using machine learning in several ways, including:
Recommendation algorithms. The recommendation engines behind Netflix and YouTube suggestions, what information appears on your Facebook feed, and product recommendations are fueled by machine learning. “[The algorithms] are trying to learn our preferences,” Madry said. “They want to learn, like on Twitter, what tweets we want them to show us, on Facebook, what ads to display, what posts or liked content to share with us.”
Image analysis and object detection. Machine learning can analyze images for different information, like learning to identify people and tell them apart — though facial recognition algorithms are controversial. Business uses for this vary. Shulman noted that hedge funds famously use machine learning to analyze the number of cars in parking lots, which helps them learn how companies are performing and make good bets.
Fraud detection. Machines can analyze patterns, like how someone normally spends or where they normally shop, to identify potentially fraudulent credit card transactions, log-in attempts, or spam emails.
Automatic helplines or chatbots. Many companies are deploying online chatbots, in which customers or clients don’t speak to humans, but instead interact with a machine. These algorithms use machine learning and natural language processing, with the bots learning from records of past conversations to come up with appropriate responses.
Self-driving cars. Much of the technology behind self-driving cars is based on machine learning, deep learning in particular.
Medical imaging and diagnostics. Machine learning programs can be trained to examine medical images or other information and look for certain markers of illness, like a tool that can predict cancer risk based on a mammogram.
How machine learning works: promises and challenges
While machine learning is fueling technology that can help workers or open new possibilities for businesses, there are several things business leaders should know about machine learning and its limits.
Explainability
One area of concern is what some experts call explainability, or the ability to be clear about what the machine learning models are doing and how they make decisions. “Understanding why a model does what it does is actually a very difficult question, and you always have to ask yourself that,” Madry said. “You should never treat this as a black box, that just comes as an oracle … yes, you should use it, but then try to get a feeling of what are the rules of thumb that it came up with? And then validate them.”
This is especially important because systems can be fooled and undermined, or just fail on certain tasks, even those humans can perform easily. For example, making small, carefully chosen adjustments to an image’s pixels can confuse computers: with a few tweaks, a machine identifies a picture of a dog as an ostrich.
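As a small illustration of the mechanic (not of the dramatic, nearly invisible perturbations that fool deep image models), here is a linear stand-in: train a simple classifier on synthetic "images," then nudge every pixel slightly in the direction the model is most sensitive to until the predicted label flips. Everything here is synthetic and chosen for brevity.

```python
# Toy adversarial perturbation against a linear classifier on synthetic data.
# Deep image models are fooled by far smaller, near-invisible changes; this
# only demonstrates the underlying mechanic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two slightly different classes of 64-"pixel" images.
X = np.vstack([rng.normal(0.45, 0.1, (200, 64)), rng.normal(0.55, 0.1, (200, 64))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Take a class-0 example the model classifies correctly.
x = X[:200][clf.predict(X[:200]) == 0][0]

# For a linear model, the input gradient is just the weight vector, so nudge
# each pixel a small amount in that direction and watch for the label to flip.
direction = np.sign(clf.coef_[0])
for epsilon in np.linspace(0.0, 0.5, 201):
    if clf.predict([x + epsilon * direction])[0] != 0:
        break
print("label flips at a per-pixel change of about", round(float(epsilon), 3))
```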
Madry pointed out another example in which a machine learning algorithm examining X-rays seemed to outperform physicians. But it turned out the algorithm was correlating results with the machines that took the image, not necessarily the image itself. Tuberculosis is more common in developing countries, which tend to have older machines. The machine learning program learned that if the X-ray was taken on an older machine, the patient was more likely to have tuberculosis. It completed the task, but not in the way the programmers intended or would find useful.
The importance of explaining how a model is working — and its accuracy — can vary depending on how it’s being used, Shulman said. While most well-posed problems can be solved through machine learning, he said, people should assume right now that the models only perform to about 95% of human accuracy. It might be okay with the programmer and the viewer if an algorithm recommending movies is 95% accurate, but that level of accuracy wouldn’t be enough for a self-driving vehicle or a program designed to find serious flaws in machinery.
Bias and unintended outcomes
Machines are trained by humans, and human biases can be incorporated into algorithms — if biased information, or data that reflects existing inequities, is fed to a machine learning program, the program will learn to replicate it and perpetuate forms of discrimination. Chatbots trained on how people converse on Twitter can pick up on offensive and racist language, for example.
In some cases, machine learning models create or exacerbate social problems. For example, Facebook has used machine learning as a tool to show users ads and content that will interest and engage them, which has led to models showing people incendiary, partisan, or inaccurate content, feeding polarization and the spread of conspiracy theories.
Ways to fight against bias in machine learning include carefully vetting training data and putting organizational support behind ethical artificial intelligence efforts, like making sure your organization embraces human-centered AI: the practice of seeking input from people of different backgrounds, experiences, and lifestyles when designing AI systems. Initiatives working on this issue include the Algorithmic Justice League and The Moral Machine project.
Putting machine learning to work
Shulman said executives tend to struggle with understanding where machine learning can actually add value to their company. What’s gimmicky for one company is core to another, and businesses should avoid trends and find business use cases that work for them.
The way machine learning works for Amazon is probably not going to translate at a car company, Shulman said — while Amazon has found success with voice assistants and voice-operated speakers, that doesn’t mean car companies should prioritize adding speakers to cars. More likely, he said, the car company might find a way to use machine learning on the factory line that saves or makes a great deal of money.
“The field is moving so quickly, and that's awesome, but it makes it hard for executives to make decisions about it and to decide how much resourcing to pour into it,” Shulman said.
It’s also best to avoid looking at machine learning as a solution in search of a problem, Shulman said. Some companies might end up trying to backport machine learning into a business use. Instead of starting with a focus on technology, businesses should start with a focus on a business problem or customer need that could be met with machine learning.
A basic understanding of machine learning is important, LaRovere said, but finding the right machine learning use ultimately rests on people with different expertise working together. “I'm not a data scientist. I'm not doing the actual data engineering work — all the data acquisition, processing, and wrangling to enable machine learning applications — but I understand it well enough to be able to work with those teams to get the answers we need and have the impact we need,” she said. “You really have to work in a team.”
Learn more:
Sign up for a Machine Learning in Business Course.
Watch an Introduction to Machine Learning through MIT OpenCourseWare.
Read about how an AI pioneer thinks companies can use machine learning to transform.
Watch a discussion with two AI experts about machine learning strides and limitations.
Take a look at the seven steps of machine learning.