An AI is as Smart as a Four-Year-Old – Or is It?
Machines like IBM’s Deep Blue and Watson have already beaten chess champions and Jeopardy! champions respectively, proving that strategy and trivia can be conquered by a machine. But this kind of mastery doesn’t necessarily transfer to everyday problem solving.

You might be the best Jeopardy! player in your family, yet still struggle at general problem solving. To test the limits of practical AI knowledge, researchers turned to the gold standard in standardized testing: the IQ test.

In October, Stellan Ohlsson and colleagues at the University of Illinois at Chicago published a paper reporting the IQ score of ConceptNet, an open-source MIT AI. The paper made waves among AI enthusiasts and media alike because ConceptNet scored as well as the average four-year-old child on the standard psychometric test. Though the system performed poorly compared to five-, six-, and seven-year-olds, the suggestion that ConceptNet could compete with preschoolers left everyone a bit excited and borderline uneasy.

But what do these results really mean?

Even when applied to human intelligence, the IQ test is widely considered flawed, largely because most people assume it is a comprehensive assessment of someone’s intelligence. In fact, the IQ test doesn’t encompass the entire breadth of intelligence, and it fails to measure traits we would regard as fundamental to practical human intelligence. Among these are creativity and emotional intelligence – two traits that are important (if not essential) to a person’s intelligent functioning in society.

But what other tests can be used to measure an AI’s intellect?

In 2012, researchers Dr Roger Highfield and Dr Adrian M. Owen created a test meant to augment the IQ test. After a landmark study in which more than 100,000 volunteers from around the world completed 12 cognitive tests covering memory, reasoning, and attention, Highfield and Owen concluded that the IQ test gives a misleading representation of one’s intellect.

Then, scanning the brains of 16 volunteers as they attempted the same tests, the researchers found that engaging different types of intelligence (short-term memory, reasoning, and a verbal component) triggered activity in different parts of the brain – suggesting that a single IQ score cannot capture these distinct faculties.

So just as it’s important to reconsider the validity of the IQ test for measuring human intellect, it’s important to ask whether the IQ test is a valid way to measure the intelligence of AI. The test is surely a measure of AI advancement – if one system outperforms its predecessor, it’s fair to say the system has advanced. But, from the Turing test to the IQ test, as these systems advance it’s pivotal to revamp the ways we analyze them.
