Cognitive Computing Goes Beyond Early Techniques
Cognitive computing is here and, in some cases, already in use. The philosophies and technologies behind cognitive computing are a far cry from where computing stood a few years ago. These systems are lab assistants, call center companions and healthcare advisors, not merely number crunchers.
Some may question their need, but Guruduth Banavar, chief science officer and vice president, cognitive computing, IBM Research, says they’ve become a necessity.
Q: Why has cognitive computing become so important to IBM and others?
A: There are three trends we must look at when it comes to cognitive computing. One is the amount of new kinds of data we have across the physical world. Everything we write as a species is growing exponentially, so there’s a sense of knowledge workers being overwhelmed with all of this information and not really knowing how to make sense out of it.
The second trend is that over the last five to 10 years, new algorithms and computational techniques have taken off, fueled by all of that data. Many of them are based on neural-network technology. Some organize neurons into many layers, as in deep learning, along with many other architectural variations. All of those techniques have suddenly started to bear fruit at scale. If you can build a large enough neural network, it can learn the patterns, trends and other features of large-scale data to provide insight, whether it’s a classification, conversion or translation. Those techniques can now perform better than anybody expected, to the point where you see all of these different applications, from recognizing a person in a picture to finding the best options for procedures in a medical scenario. All of those techniques can now be more realistically implemented.
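To make the idea of a network learning patterns from labeled data concrete, here is a minimal sketch of the simplest possible case: a single artificial neuron (logistic regression) trained by gradient descent to classify points in a toy two-dimensional dataset. The dataset, learning rate and epoch count are illustrative assumptions, not anything from the interview, and real deep-learning systems stack many such units into layers.

```python
import math

def sigmoid(z):
    # Squashes the neuron's pre-activation into a (0, 1) probability.
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    # One neuron with two weights and a bias, trained by gradient
    # descent on the log-loss over the labeled samples.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the log-loss w.r.t. the pre-activation
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5 else 0

# Toy, linearly separable data: points above the line x1 + x2 = 1 are class 1.
samples = [(0.0, 0.0), (0.2, 0.3), (0.9, 0.8), (1.0, 1.0)]
labels = [0, 0, 1, 1]
w, b = train(samples, labels)
print(predict(w, b, (0.1, 0.1)))  # a point deep in the class-0 region
print(predict(w, b, (0.9, 0.9)))  # a point deep in the class-1 region
```

The learned boundary is found purely from the labeled examples, which is the essence of the "learn the patterns of large-scale data" point above, just at toy scale.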
The third trend is the computational infrastructure; it’s suddenly taken off. This is where IBM Systems plays a role—not only architectures like Power* and the techniques we’ve used in the mainframe but a number of new architectures as well, such as GPUs, field-programmable gate arrays (FPGAs) and new kinds of approximate computing systems—all the way to neuromorphic systems that go well beyond von Neumann architectures. They’ve become relevant and almost necessary to address the requirements of the algorithmic workloads I mentioned in the second trend.
Because of these three trends—data, algorithms and computing infrastructure—there’s suddenly a tool kit of capabilities that can be applied to solve some of the big problems of the world. Look at healthcare: If you can’t accurately diagnose patients’ symptoms, they could potentially receive the wrong medications or treatment. There are a lot of things that are now within our control. We can use data to address these types of problems, whether in healthcare, education or the Internet of Things. That’s the idea behind cognitive computing.
Another point we should discuss is how cognitive computing relates to artificial intelligence (AI). Their goals are different. AI is trying to mimic the human brain. Cognitive is trying to complement the human brain and do things the human brain can’t do well, encapsulating the idea of intelligence augmentation (IA).