How IBM Sees The Future Of Artificial Intelligence
Ever since IBM’s Watson system defeated the best human champions at the game show Jeopardy!, artificial intelligence (AI) has been the buzzword of choice. More than just hype, intelligent systems are revolutionizing fields from medicine to manufacturing and changing fundamental assumptions about how science is done.
Yet for all the progress, it appears that we are closer to the beginning of the AI revolution than the end. Intelligent systems are still limited in many ways. They depend on massive amounts of data to learn accurately, they have trouble understanding context, and their susceptibility to bias makes them ripe targets for sabotage.
IBM, which has been working on AI since the 1950s, is not only keenly aware of these shortcomings but also working hard to improve the basic technology. As Dario Gil, Chief Operating Officer of IBM Research, recently wrote in a blog post, the company published over 100 papers in just the past year. Here are the highlights of what is being developed now.
Working To Improve Learning
What makes AI different from earlier technologies is its ability to learn. Before AI, a group of engineers would embed logic into a system based on previously held assumptions. When conditions changed, the system would need to be reprogrammed to be effective. AI systems, however, are designed to adapt as events in the real world evolve.
This means that AI systems aren’t born intelligent. We must train them to do certain tasks, much like we would a new employee. Systems often need to be fed thousands or even millions of examples before they can perform at anything near an acceptable level. So far, that’s been an important limiting factor for how effective AI systems can be.
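To make the contrast concrete, here is a minimal sketch in Python, with entirely made-up data, of the difference between hand-coded and learned logic: the first rule is fixed by an engineer, while the second is fitted to labeled examples and can simply be retrained when conditions change.

```python
import numpy as np

# Hand-coded logic: a rule an engineer fixed up front.
def rule_based_filter(num_links: int) -> bool:
    return num_links > 5  # must be reprogrammed by hand if behavior shifts

# Learned logic: the same kind of rule, but fitted to labeled examples.
def fit_threshold(link_counts: np.ndarray, is_spam: np.ndarray) -> float:
    # Try each observed value as a cutoff and keep the most accurate one.
    candidates = np.unique(link_counts)
    scores = [np.mean((link_counts > t) == is_spam) for t in candidates]
    return float(candidates[int(np.argmax(scores))])

links = np.array([0, 1, 2, 8, 9, 12, 1, 15, 0, 11])
spam = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1], dtype=bool)
print(fit_threshold(links, spam))  # prints 2.0; retraining adapts the rule
```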
“A big challenge now is being able to learn more from less,” Dr. John Smith, Manager of AI Tech at IBM Research, told me. “For example, in manufacturing there is often a great need for systems to do visual inspections of defects, some of which may have only one or two instances, but you still want the system to be able to learn from them and spot future instances.”
“We recently published our research on a new technique called few-shot or one-shot learning, which learns to generalize information from outliers,” he continued. “It’s still a new technique, but in our testing so far, the results have been quite encouraging.” Improving a system’s ability to learn is key to improving how it will perform.
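To give a flavor of the general idea, here is a simplified, nearest-prototype-style sketch with toy numbers; it illustrates few-shot classification in general, not IBM’s actual technique. Each class is summarized by the average of its handful of labeled examples, and new inputs are assigned to the closest prototype.

```python
import numpy as np

# Toy feature vectors standing in for embeddings from a pretrained model.
# Hypothetical defect classes, each with only one or two labeled examples.
support = {
    "scratch": np.array([[0.9, 0.1], [0.8, 0.2]]),  # two shots
    "dent":    np.array([[0.1, 0.9]]),              # one shot
}

def classify(query: np.ndarray) -> str:
    # Each class is summarized by the mean of its few support examples;
    # a new input is assigned to the nearest class prototype.
    prototypes = {c: v.mean(axis=0) for c, v in support.items()}
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(np.array([0.85, 0.15])))  # -> scratch
```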
Understanding Context
One of the most frustrating things about AI systems is their inability to understand context. For example, if a system is trained to identify dogs, it will be completely oblivious to a family playing Frisbee with its beloved pet. This flaw can get extremely frustrating when we’re trying to converse with a system that takes each statement as a separate query and ignores everything that came before.
IBM made some important headway on this problem with its Project Debater, a system designed to debate with humans in real time. Rather than merely respond to simple, factual questions, Debater is able to take complex, ambiguous issues and make a clear, cogent argument based on nuanced distinctions that even highly educated humans find difficult.
A related problem is the inability to understand causality. A human who continues to encounter a problem would start to wonder where it’s coming from, but machines generally don’t. “A lot of the AI research has been focused on correlations, but there is a big difference between correlations and causality,” Smith told me.
“We’re increasingly focused on researching how to infer causality from large sets of data,” he says. “That will help us do more than diagnose a problem in, say, a medical or industrial setting, but help determine where the problem is coming from and how to approach fixing it.”
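A toy simulation, with data invented purely for illustration, shows why that distinction matters. Here a confounder (machine age) drives both a sensor reading and the defect rate, so the two correlate strongly even though one does not cause the other; holding the confounder fixed makes the apparent effect shrink dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented factory data: machine age (a confounder) drives both a sensor
# temperature and the defect rate; temperature has no causal effect here.
age = rng.uniform(0, 10, n)
temp = age + rng.normal(0, 1, n)
defect = (age + rng.normal(0, 1, n)) > 8

# The raw correlation makes temperature look like the culprit...
print("corr(temp, defect):", round(np.corrcoef(temp, defect)[0, 1], 2))

# ...but comparing hot vs. cool machines of roughly the same age shows
# the apparent effect shrink once the confounder is held fixed.
band = (age > 7) & (age < 8)
cutoff = np.median(temp[band])
hot = defect[band & (temp > cutoff)].mean()
cool = defect[band & (temp <= cutoff)].mean()
print(f"same-age machines: defect rate hot={hot:.2f} vs cool={cool:.2f}")
```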
Focusing On Ethics And Trust
Probably the most challenging aspect of AI is ethics. Now that we have machines helping to support human decisions, important questions arise about who is accountable for those decisions, how they are arrived at and on what assumptions they are based.
Consider the trolley problem, which has stumped ethicists for decades. In a nutshell, it asks what someone should do if faced with the option of pulling a lever so that a trolley avoids killing five people lying on a track, but kills another person in the process. Traditional approaches, such as virtue ethics or Kantian ethics, provide little guidance on what is the right thing to do. A completely utilitarian approach just feels intuitively lacking in moral force.
These basic challenges are compounded by other shortcomings inherent in our computer systems, such as biases in the data they are trained on and the fact that many systems are “black boxes,” which offer little transparency into how judgments are arrived at. Today, as we increasingly need to think seriously about encoding similar decisions into real systems, such as self-driving cars, these limitations are becoming untenable.
“If we want AI to effectively support human decision making we have to be able to build a sense of trust, transparency, accountability and explainability around our work,” Smith told me. “So an important focus of ours has been around building tools, such as Fairness 360, an open source resource that allows us to ask questions of AI systems much as we would of human decisions.”
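As a rough illustration of the kind of question such a tool automates, the sketch below computes two standard group-fairness metrics by hand on invented loan data; it does not use the toolkit’s own API.

```python
import numpy as np

# Invented loan decisions: 1 = approved; "group" is a protected attribute.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = privileged

rate_priv = approved[group == 0].mean()    # 0.6
rate_unpriv = approved[group == 1].mean()  # 0.4

# Two standard group-fairness metrics of the kind such toolkits report:
print("statistical parity difference:", rate_unpriv - rate_priv)  # -0.2
print("disparate impact ratio:", rate_unpriv / rate_priv)  # 0.67; below 0.8 is a common red flag
```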
The Future Of AI
When IBM announced its System/360 mainframe computer in 1964 at a cost of $5 billion (more than $30 billion in today’s dollars), it was considered a major breakthrough and dominated the industry for decades. The launch of the PC in the early 1980s had a similar impact. Today’s smartphones, however, are vastly more powerful and cost a small fraction of the price.
We need to look at AI in the same way. We’re basically working with an early version of the PC, with barely enough power to run a word processing program and little notion of the things that would come later. In the years and decades to come, we can expect vast improvements in hardware, software and our understanding of how to apply AI to important problems.
One obvious shortcoming is that although many AI applications perform tasks in the messy, analog world, the computations are done on digital computers. Inevitably, important information gets lost. So a key challenge ahead is to develop new architectures, such as quantum and neuromorphic computers, to run AI algorithms.
“We’re only at the beginning of the journey,” IBM’s Smith told me excitedly, “but when we get to the point that quantum and other technologies become mature, our ability to build intelligent models of extremely complex data is just enormous.”
An earlier version of this article first appeared on Inc.com.
Image: Wikimedia Commons
Greg Satell is a popular author, keynote speaker, and trusted adviser whose new book, Cascades: How to Create a Movement that Drives Transformational Change, will be published by McGraw-Hill in April 2019. His previous effort, Mapping Innovation, was selected as one of the best business books of 2017. You can learn more about Greg on his website, GregSatell.com, and follow him on Twitter @DigitalTonto.