In my previous blog, “Are chimpanzees more intelligent than computers?,” I shared some thoughts on the difference between artificial intelligence (AI) and human, or general, intelligence. In this second blog, I will look at the hype itself.
It’s important to explain why AI is suddenly such a hot topic. After all, it has been around since the fifties, with John McCarthy widely considered the father of the field. Over the past 60 years, there have been several waves of AI acceleration and hype, each followed by disappointment. Yet some AI technologies and methods have survived and proved their usefulness.
First, after the Dartmouth Workshop of 1956, an era of new discoveries began. Influential programs focusing on reasoning by search (deducing the best result by trying out combinations and backtracking whenever a path fails) and natural language led to high expectations. Herbert A. Simon predicted in 1965 that machines would be capable of doing any work a person can do, and Marvin Minsky said in 1970 that within three to eight years a machine would exist with the general intelligence of an average human being.
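To make the search-and-backtrack idea concrete, here is a minimal sketch in modern Python. It uses the classic N-queens puzzle as a stand-in illustration, not one of the historical programs: the algorithm tries a placement, recurses, and backtracks whenever a branch fails.

```python
def solve_queens(n, placed=()):
    """Place n queens on an n x n board via depth-first search with backtracking.

    `placed[r]` is the column of the queen already placed in row r.
    """
    row = len(placed)
    if row == n:
        return placed  # every row filled: a valid solution
    for col in range(n):
        # keep the candidate only if it attacks no queen placed so far
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_queens(n, placed + (col,))
            if result is not None:
                return result  # this branch led to a solution
    return None  # dead end: backtrack to the caller

print(solve_queens(4))  # → (1, 3, 0, 2)
```

The exponential flavour of this approach is also visible here: in the worst case the search explores on the order of n^n placements, which is exactly the combinatorial explosion discussed below.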
Needless to say, it didn’t pan out that way. Limitations in computing power, an exponential increase in the time required to solve real-world problems (the combinatorial explosion) and a lack of good-quality data stalled progress in the field. Funding became scarcer as frustrations rose.
It took another 10 years or so to see the next wave of AI hype. In the eighties, funding returned, not least because of the rise of expert systems: programs that try to answer questions or solve problems in a narrow domain by applying logical rules derived from human experts. With the belief that intelligence might be based on the ability to use large amounts of knowledge (data), knowledge-based systems and knowledge engineering became a focus of AI research.
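The core pattern behind those systems, facts plus if-then rules applied until nothing new can be derived (forward chaining), fits in a few lines. The medical rules and facts below are made up for illustration, not taken from any real expert system:

```python
# Each rule: if all condition facts hold, conclude a new fact.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Forward chaining: apply rules repeatedly until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires, derive the conclusion
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}, rules))
```

Note that the program is only as good as its hand-written rules, which is precisely why such systems struggled with the messy, pragmatic reality of a doctor’s consultation described below.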
Again, it didn’t work out. For instance, the attempt to replace doctors with expert systems underestimated the complex process of examination, decision-making, testing and pragmatic interventions that occurs when a doctor sees a patient. As the expert systems failed to deliver on the inflated expectations, funding for them dried up.
The most recent and biggest wave of acceleration and subsequent hype has come in the last 15 years. The availability of big data (the more data an AI can learn from, the ‘smarter’ it gets), hugely scalable cloud computing and edge computing power have together created the conditions for this latest wave.
Breakthroughs in deep learning (new ways of training neural networks) have been key to teaching machines to ‘think’ more in the way we do. Neural networks classify information in a way loosely inspired by how the brain works: they classify based on characteristics of the input, which is particularly useful for applications such as image recognition, where a network could, for example, classify the make and model of a car based on its shape and size.
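The simplest possible version of this idea is a single artificial neuron (a perceptron) that learns a weighted combination of input features. The sketch below is a deliberately tiny analogy to, not an example of, deep networks, and the car measurements and labels are invented for illustration:

```python
# A minimal perceptron: one neuron learning to separate two classes
# from numeric features, by nudging its weights after each mistake.
def train(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred           # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1        # adjust weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# toy features: (length in m, height in m); label 1 = "truck", 0 = "compact"
samples = [((4.0, 1.4), 0), ((3.8, 1.5), 0), ((6.5, 2.5), 1), ((7.0, 2.8), 1)]
w, b = train(samples)
classify = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(classify(6.8, 2.6))  # a large vehicle: classified as 1 (truck)
```

A deep network stacks many thousands of such units in layers, which lets it learn far richer characteristics (edges, shapes, whole objects) directly from raw pixels rather than from hand-picked measurements.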
Using these advances, companies have developed useful applications that actually work. For instance, H&M is using a chatbot that, through natural language processing, can interact with users and recommend clothes they might be interested in buying. Although most of these use cases are very narrow and cannot be considered generally intelligent, they have nevertheless generated another round of hype in the market: the idea that machines can learn everything and that this time around, machines really will become generally intelligent, intelligent like human beings. This hype is being reinforced by a multitude of organizations and individuals preaching that AI will reach human-like intelligence in the foreseeable future.
As with the previous waves of hype, whether this is true remains to be seen. But unlike the previous waves, this time AI is at least able to demonstrate business value (e.g. autonomous vehicles, web-shop recommendations, medical diagnosis and planning optimization). With investment in AI remaining high, it looks like we’ll see continued development in this field for the foreseeable future.
Do you have questions or comments about AI?
We’d love to hear them so please leave us a message below.