The topic of Artificial Intelligence (AI) is at the top of its hype curve. And there are many good reasons for that: it is exciting, promising, and a bit scary at the same time. Various publications claim that AI knows what we want to buy, that it can create Netflix series, that it could cure cancer, and that it may eventually take our jobs or even destroy mankind.
In general terms, AI refers to a broad field of science encompassing not only computer science but also psychology, philosophy, linguistics and other areas. AI is concerned with getting computers to do tasks that would normally require human intelligence. Having said that, there are many points of view on AI and many definitions exist. The following definitions highlight key characteristics of AI.
Big Data - Capable of processing massive amounts of structured and unstructured data which can change constantly
Learning - Ability to learn based on historical patterns, expert input and feedback loop
Reasoning - Ability to reason (deductively or inductively) and to draw inferences based on the situation; context-driven awareness of the system.
Problem Solving - Capable of analyzing and solving complex problems in both special-purpose and general-purpose domains
Narrow AI vs. General AI - A chess computer can beat a human at chess, but it cannot solve a complex math problem. Virtually all current AI is “narrow”, meaning it can only do what it was designed for: for every problem, a specific algorithm needs to be designed to solve it. Narrow AI systems are often much better than humans at the task they were made for, such as face recognition, chess, calculus, and translation.
The holy grail of AI is a General AI: a single system that can learn about any problem and then solve it. This is exactly what humans do: we can specialize in a specific topic, from abstract math to psychology and from sports to art, and become experts at any of them. An AI system combines and utilizes mainly machine learning, along with other types of data analytics methods, to achieve artificial intelligence capabilities.
Applications of AI
Image Recognition - Recognizing images is an easy task for most of us. We don’t have any trouble differentiating a car from a tiger, or recognizing that a car is still a car when we observe it from the front instead of from the side. This task has proven considerably more difficult for computers, but recent progress in image recognition accuracy has resulted in interesting applications. Because vendors like Google and IBM offer their preprogrammed algorithms as open source, and software libraries like TensorFlow make it possible to construct your own algorithms, visual recognition is becoming more accessible to the public.
Speech Recognition - Speech recognition is an AI application that recognizes speech and can turn spoken words into written words. It is hardly used on its own; it mostly serves as an addition to chatbots, virtual agents and mobile applications. Well-known examples are Apple’s Siri, Google Home and Amazon’s Alexa. Speech recognition started as early as 1952 with ‘Audrey’. Audrey was able to recognize digits spoken by a single voice, which is quite impressive given the computers of that era. Today we have applications on our phones and in our cars that can respond to our voice.
Translation - A different topic with large business implications is automatic translation: the process of translating text from one language to another using software. Traditionally, translation was done by substituting each word with its closest counterpart in the other language. While this works reasonably well for single words, word pairs and full sentences are generally much harder to process correctly, because the relations between words carry much of a sentence’s meaning, and such nuances cannot be captured when each word is analyzed separately.
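The word-by-word approach described above can be sketched in a few lines. This is a toy illustration with an invented, hypothetical mini-dictionary; it shows exactly the limitation the text describes:

```python
# Toy word-for-word translator (hypothetical English->French mini-dictionary).
# Each word is substituted independently; sentence-level context is ignored.
DICTIONARY = {
    "the": "le", "cat": "chat", "sat": "assis",
    "on": "sur", "mat": "tapis",
}

def translate_word_by_word(sentence: str) -> str:
    """Substitute each word with its closest dictionary counterpart."""
    return " ".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

print(translate_word_by_word("The cat sat on the mat"))
# -> "le chat assis sur le tapis"
# Word order, verb conjugation and gender agreement are not handled,
# because each word is translated in isolation.
```

Even in this tiny example, the output is ungrammatical French: the relations between words (tense, agreement) are lost, which is why sentence-level approaches are needed.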
The use of deep learning has had a significant impact on the quality of machine translations by completely shifting the paradigm. Rather than working in a rule-based way, powered by human decision making, translation using a neural network is based entirely on mathematics. The impact of quality translations in a global economy is enormous. With business translations originally dominated by conversions between European languages, the need for translations to Chinese, Japanese, and Korean is increasing. A simple example is one that Uber was investigating, where automatic translation takes place between you and your local Uber driver, who can only communicate in Chinese.
Question and Answers (Q&A) - Q&A agents, or chatbots, are another example of applying AI to language. When talking about conversational ability, agents are distinguished by their domain and by the way they generate answers. A chatbot can be focused on answering questions in an open or closed domain. When it operates in an open domain, it should be able to answer general questions on any topic (see for example cleverbot). This is generally harder than a closed domain, which concerns only a limited set of topics.
Closed domains, however, have very good business applications, such as answering questions at helpdesks. A couple of years ago there was a breakthrough in question answering, when IBM Watson beat humans at Jeopardy, a well-known American quiz show. More recently, another breakthrough was made by Google, which can now give chatbots a short-term memory, allowing them to mimic real-life conversations more realistically.
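A closed-domain helpdesk agent of the kind described above can be sketched very simply. The FAQ entries and the keyword-overlap matching below are invented for illustration; production chatbots use far richer language models:

```python
# Minimal sketch of a closed-domain Q&A agent: match the user's question
# against a small, hypothetical helpdesk FAQ by keyword overlap.
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your opening hours": "Our helpdesk is open 9:00-17:00 on weekdays.",
    "how can i cancel my order": "Orders can be cancelled within 24 hours via your account page.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose question shares the most words with the input."""
    words = set(question.lower().strip("?").split())
    best, best_score = None, 0
    for known_q, known_a in FAQ.items():
        score = len(words & set(known_q.split()))
        if score > best_score:
            best, best_score = known_a, score
    return best or "Sorry, that question is outside my domain."

print(answer("How do I reset my password?"))
```

The fallback reply in the last line is what makes the domain “closed”: anything outside the FAQ is explicitly refused rather than guessed at, which is why closed-domain agents are so much easier to build than open-domain ones.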
Games - One of the most exciting applications of AI lies in playing games. Playing a game well requires you to not only know the rules, but to calculate the next possible moves within these rules, and finally make a careful judgement on which move gives you the best chance to win. If computers can play games as well as human players, there is no reason why they cannot learn any other difficult task that people do in their daily work (although human supervision will probably remain needed). Recently there was a big step forward in the field of games when the world champion of Go was beaten by a computer for the first time.
Go is a game that cannot be brute-force calculated, since the number of possible board configurations is higher than the number of stars in the universe.
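The arithmetic behind this claim is easy to check. A standard Go board has 19 × 19 = 361 points, each of which can be empty, black, or white, giving 3^361 configurations as an upper bound (not all of them are legal positions):

```python
# Rough arithmetic behind the "more positions than stars" claim.
board_points = 19 * 19            # standard Go board
upper_bound = 3 ** board_points   # each point: empty, black, or white
digits = len(str(upper_bound))    # number of decimal digits

print(f"3^361 has {digits} digits, i.e. roughly 10^{digits - 1}")
# -> 3^361 has 173 digits, i.e. roughly 10^172
```

Roughly 10^172 configurations dwarfs commonly cited estimates of around 10^22 to 10^24 stars in the observable universe, which is why exhaustive search is hopeless and intuition-like evaluation is needed.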
The top Go players of the world rely to a large extent on their intuition to find the best moves. Google’s AlphaGo (a neural-network-based Go engine), however, learned how to play like a top human player by studying millions of human games. It then became even stronger by playing against another version of itself millions of times, which finally enabled it to beat the world champion.
If computers can beat human players in one of the most complicated games that currently exist, then where do the possibilities for AI stop? One big advantage people still have over computers is that we can take our knowledge and training in one area and apply it to a new task or area. For example, good Go players can apply their way of thinking to solve daily problems in their jobs. AlphaGo cannot do this: it is only good at playing Go and nothing else. If you make it learn something else, like chess, it will lose its ability to play Go.
Recently, however, a first step was taken in overcoming this problem: neural networks are now able to remember the most important knowledge from one game while learning a new game (link). Google DeepMind wrote a new algorithm that allowed a neural network to learn 10 Atari games at the same time and play them at human-level performance. Once this field is further developed, computers will be able to perform series of difficult tasks that at the moment only people can perform.
Google itself uses this technology to lower the energy bills of its large datacenters. The AI controls over 120 variables in Google’s datacenters, such as the windows, fans and cooling systems, optimizing for energy usage while keeping computing performance up. The optimization could lower Google’s energy bill by hundreds of millions of dollars over several years.
Google’s AI-Powered Predictions
Using anonymized location data from smartphones, Google Maps (Maps) can analyze the speed of movement of traffic at any given time. And, with its acquisition of crowdsourced traffic app Waze in 2013, Maps can more easily incorporate user-reported traffic incidents like construction and accidents. Access to vast amounts of data being fed to its proprietary algorithms means Maps can reduce commutes by suggesting the fastest routes to and from work.
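Suggesting the fastest route boils down to shortest-path search over a road graph whose edge weights are travel times. Below is a minimal sketch using Dijkstra’s algorithm over a tiny, made-up road network; in a real system the edge weights would come from live speed data, not the hard-coded minutes used here:

```python
# Fastest-route sketch: Dijkstra's algorithm over a hypothetical road graph
# whose edge weights are travel times in minutes.
import heapq

def fastest_route(graph, start, goal):
    """Return (total_minutes, route) for the quickest path from start to goal."""
    queue = [(0, start, [start])]  # (elapsed minutes, node, route so far)
    seen = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == goal:
            return minutes, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, route + [nxt]))
    return float("inf"), []

roads = {  # invented segments and travel times: home -> office
    "home":     {"highway": 10, "backroad": 5},
    "highway":  {"office": 20},   # congested today
    "backroad": {"office": 12},
    "office":   {},
}
print(fastest_route(roads, "home", "office"))
# -> (17, ['home', 'backroad', 'office'])
```

The interesting part in practice is not the search itself but keeping the edge weights current, which is exactly what the anonymized speed data and Waze incident reports feed into.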
Ridesharing Apps Like Uber and Lyft
How do they determine the price of your ride? How do they minimize the wait time once you hail a car? How do these services optimally match you with other passengers to minimize detours? The answer to all these questions is ML.
Engineering Lead for Uber ATC Jeff Schneider discussed in an NPR interview how the company uses ML to predict rider demand to ensure that “surge pricing” (short periods of sharp price increases to decrease rider demand and increase driver supply) will soon no longer be necessary. Uber’s Head of Machine Learning Danny Lange confirmed Uber’s use of machine learning for ETAs for rides, estimated meal delivery times on UberEATS, computing optimal pickup locations, as well as for fraud detection.
Commercial Flights Use an AI Autopilot
AI autopilots in commercial aviation are a surprisingly early use of AI technology, dating as far back as 1914, depending on how loosely you define autopilot. The New York Times reports that the average flight of a Boeing plane involves only seven minutes of human-steered flight, which is typically reserved for takeoff and landing.