Why self-taught AI has trouble with the real world

We have seen AI systems teach themselves to beat humans at games such as chess, Go and video games like Dota 2, but does that mean AI can teach itself just about anything? Or that AI is near human-level intelligence?

No, it does not, as Thomas Hellström, Professor of Computing Science, points out in a recent article on DN Debatt, “Dangerous over-confidence in AI that so far is too unintelligent”: “The hype around AI may lead to over-confidence in AI solutions and make us use unintelligent AI for tasks that require considerably more advanced intelligence, such as the ability to understand, reason, and make moral judgements. A wise use of AI should be governed by an awareness of the systems’ limitations.”

Similarly, Turing Award winner Judea Pearl argues in his latest book, “The Book of Why: The New Science of Cause and Effect,” that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is. In the article “To Build Truly Intelligent Machines Teach Them Cause and Effect” he elaborates on this view. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.”
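Pearl’s point can be made concrete with a minimal sketch. The data and variable names below are invented for illustration: a model fits the association between ice cream sales and drownings quite well, yet the fitted curve says nothing about what would happen if we intervened, because both quantities are driven by a hidden common cause (warm weather).

```python
# A minimal, hypothetical sketch of "learning as curve fitting":
# a model can capture an association while saying nothing about cause.
import numpy as np

rng = np.random.default_rng(0)

# Hidden confounder: hot weather drives both ice cream sales and drownings.
temperature = rng.uniform(10, 35, size=1000)
ice_cream_sales = 2.0 * temperature + rng.normal(0, 2, size=1000)
drownings = 0.3 * temperature + rng.normal(0, 1, size=1000)

# "Learning" here is just fitting a curve to the observed association.
slope, intercept = np.polyfit(ice_cream_sales, drownings, deg=1)
print(f"drownings ≈ {slope:.2f} * ice_cream_sales + {intercept:.2f}")

# The fit predicts drownings from sales reasonably well (association),
# but banning ice cream would not reduce drownings (no causal effect).
```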

The main problem is that progress in gaming, image recognition and labeling objects (as “cat” or “tiger,” for example) has led the world to expect that AI can be widely applied beyond the game board and that it can teach itself to understand and reason like a human. In the article “Why Self-Taught Artificial Intelligence Has Trouble With the Real World,” Pedro Domingos, computer scientist at the University of Washington, explains why this is not so: “One characteristic shared by many games, chess and Go included, is that players can see all the pieces on both sides at all times. Each player always has what’s termed ‘perfect information’ about the state of the game. However devilishly complex the game gets, all you need to do is think forward from the current situation. Real-life situations are not so straightforward. For example, a self-driving car needs a more nuanced objective function, something akin to the kind of careful phrasing you’d use to explain a wish to a genie. For example: promptly deliver your passenger to the correct location, obeying all laws and appropriately weighing the value of human life in dangerous and uncertain situations.” How researchers craft the objective function, Domingos said, “is one of the things that distinguishes a great machine-learning researcher from an average one.”
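As a rough illustration of what such an objective function might look like, here is a hypothetical sketch. The terms, weights and the PlanOutcome structure are invented for the example and are not drawn from any real driving system; the point is only that several competing considerations have to be encoded and weighed explicitly.

```python
# A hypothetical, lower-is-better objective for a driving plan.
# All terms and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class PlanOutcome:
    minutes_to_destination: float   # how long the planned route takes
    traffic_violations: int         # laws broken along the way
    collision_risk: float           # estimated probability of harming someone

def objective(outcome: PlanOutcome,
              w_time: float = 1.0,
              w_law: float = 100.0,
              w_risk: float = 10_000.0) -> float:
    """Lower is better. The hard part is choosing terms and weights that
    encode 'deliver the passenger promptly, obey the law, and weigh the
    value of human life appropriately' without perverse side effects."""
    return (w_time * outcome.minutes_to_destination
            + w_law * outcome.traffic_violations
            + w_risk * outcome.collision_risk)

# A faster plan that breaks a law or raises risk can still score worse:
print(objective(PlanOutcome(12.0, 0, 0.0001)))   # cautious route
print(objective(PlanOutcome(9.0, 1, 0.001)))     # aggressive route
```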

This over-confidence in AI can cause problems, even dangers, that we need to be aware of. The first is the failure of many AI projects when they require much more training effort than expected to become useful. In the legal market we currently see many law firms investing in AI solutions, mainly for due diligence review, but struggling with the implementation for exactly this reason. (Read more about this here: “Nordic Law Firms Flock to Legal AI”, as well as in this Swedish article: “En kock, en ingenjör och en ledare – så fixar du en bra AI-soppa enligt Google”)

The second is wrongful or biased results, or even dangerous, deadly outcomes. In his article on DN Debatt, Thomas Hellström gives the following examples of this second problem:

Systems for automatic assessment of loan or job applications: “Such systems analyze huge amounts of old applications and then try to imitate the human case handlers’ assessments. Unfortunately, it turns out that such systems can become prejudiced and, for example, consistently discriminate against applicants because of skin color or gender.” (Read more about this here: “Silicon Valley is stumped: Even A.I. cannot always remove bias from hiring”, “Researchers Combat Gender and Racial Bias in Artificial Intelligence” and “Microsoft Had to Suspend Its AI Chatbot After It Veered Into White Supremacy”, as well as in this Swedish article: “När algoritmer diskriminerar”)
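The mechanism is easy to reproduce on synthetic data. The sketch below is purely illustrative: the bias is injected deliberately into made-up historical decisions, and a model trained to imitate those decisions reproduces the discrimination even when the protected attribute itself is withheld, because a correlated proxy feature stands in for it.

```python
# A hypothetical sketch of how a system trained to imitate past human
# decisions can inherit their bias. The data is synthetic and the bias
# is injected deliberately; it is not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

qualification = rng.normal(0, 1, n)            # what should matter
group = rng.integers(0, 2, n)                  # protected attribute
postcode = group + rng.normal(0, 0.3, n)       # proxy correlated with group

# Historical human decisions: qualification matters, but group 1 was
# systematically marked down by the case handlers.
past_approval = (qualification - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train the model WITHOUT the protected attribute...
X = np.column_stack([qualification, postcode])
model = LogisticRegression().fit(X, past_approval)

# ...and it still approves group 0 far more often than group 1,
# because the proxy lets it imitate the biased historical pattern.
pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
```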

AI-based image analysis systems used in driverless cars: “The systems usually work excellently but have severe constraints that are revealed by a particular type of stress test. By adding noise to a camera image, you can fool the image analysis system into delivering breathtakingly incorrect analyses of what’s in front of the camera. These types of tests raise questions about whether the AI-based systems really understand images, or whether they have only learned that some formations of pixels should be called ‘car’ or ‘pedestrian’.”
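The kind of stress test Hellström describes can be illustrated with a toy model. The sketch below is not taken from any real driverless-car system: it uses a plain linear classifier on a fake, flattened “image” so the effect is easy to see, and the perturbation is chosen in the same spirit as gradient-sign attacks on real networks.

```python
# A toy sketch of why small, carefully chosen pixel noise can flip a
# classifier's answer. The classifier and "image" are invented; real
# perception systems are far larger but show the same vulnerability.
import numpy as np

rng = np.random.default_rng(2)
pixels = 32 * 32

w = rng.normal(0, 1, pixels)          # weights of a "trained" linear classifier
image = rng.normal(0, 1, pixels)      # a fake input image (flattened)

def label(x):                         # positive score -> "car", else "not car"
    return "car" if w @ x > 0 else "not car"

print("original:", label(image))

# Nudge each pixel by a tiny amount in the direction that pushes the score
# toward the opposite class -- the idea behind gradient-sign attacks.
epsilon = 0.1
noise = -epsilon * np.sign(w) * np.sign(w @ image)
print("perturbed:", label(image + noise))
print("max pixel change:", np.abs(noise).max())
```

A human looking at the perturbed input would see essentially the same picture, yet the classifier’s answer flips, which is exactly the gap between “recognizing pixel formations” and “understanding images”.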

It is therefore essential that the use and development of AI are governed by an awareness of the systems’ limitations and of the time and effort it takes to train an AI system before it becomes useful.
