Monday, June 30, 2025

Dr Michael Johnston: The Johnston test for human intelligence


Philosophers love to dream up strange scenarios to get us thinking. They call these scenarios ‘thought experiments.’

In 1980, philosopher John Searle published his ‘Chinese room’ thought experiment. Searle asked his readers to imagine someone who understands no Chinese sitting alone in a room with a very large book.

The book contains instructions for manipulating Chinese symbols to produce a plausible response to any statement or question written in Chinese.

A Chinese speaker passes a written message into the room. The person in the room follows the instructions in the book to produce a reply, which they pass back. The Chinese speaker feels understood, even though the person inside the room understood neither the message nor the reply.

The ‘Chinese room’ was a response to another famous thought experiment, formulated by Alan Turing, the father of digital computing. Under the ‘Turing test’ for machine intelligence, a machine is intelligent if it can converse with a human and convince them it is also human.

Clearly, modern AI passes the Turing test with flying colours. But, according to Searle, the Turing test is flawed. Just like the person sitting in the Chinese room, AI is simply manipulating symbols without understanding.

Turing wanted to get people thinking about machine intelligence. But I believe we urgently need to think about human intelligence. More specifically, we need a survival strategy for human intelligence in the age of AI.

Early this year, the Higher Education Policy Institute in the UK published a survey about university students’ use of AI. About 90% reported using AI for their assessments. Most said they just use AI to help them understand concepts. Let’s be real, though. They are using AI to write their essays for them.

There is little doubt that many students are passing courses without even trying to learn anything. Meanwhile, academics are increasingly using AI to create courses and grade assessments.

Soon, humans will be redundant in the educational process. AI will create the courses, set the assessments, complete the assessment tasks, and grade them as well. Meanwhile, human beings will get more and more stupid.

It will be like a whole lot of Chinese rooms talking to one another. Symbols will be passed from one AI to another, with no human understanding involved.

In order to think, we need to know things. The real test for human intelligence will be whether we realise that in time.

Dr Michael Johnston is a Senior Fellow at the New Zealand Initiative. This article was first published HERE

4 comments:

Anonymous said...

How to indoctrinate in one fell swoop!

Barend Vlaardingerbroek said...

People have been getting "more and more stupid" since the introduction of the hand-held calculator.

Gaynor said...

Possibly not on topic, but for me the stupidest event in the history of education in the English-speaking world was to treat English as a hieroglyphic language like Chinese, and to have beginner readers taught visually, by whole words, instead of phonically, by sounds. This craziness persisted, dictated entirely by elite academics and their theory, and was greatly worsened when whole-word teaching evolved into Whole Language. Most of those who created this nonsense had had little or no experience in teaching reading. Rather, it was based on a determination to adhere to a flawed 'natural' biological theory. Consequently, our once high literacy standards persisted as long as some explicit phonics was still in vogue, but plummeted when explicit phonics was condemned and ridiculed. In our so-called scientific age, how could something like this, which was clearly not working, persist for more than 60 years and destroy the educational prospects and futures of not just NZ but the entire English-speaking world, affecting many millions of children?

Anonymous said...

AI, by definition, should be able to "self-code".
This is far more difficult than what currently passes itself off as AI.
At the moment, anyone utilizing AI technology is constantly having to break tasks down into workable slivers of statements and questions, as well as provide the background information.
The human is still choosing what their AI can assess, and only within a narrow and workable domain of expertise.
While this is impressive, it is still a long way from being truly AI, and therefore still susceptible to the age-old problem of the information technology age: "garbage in, garbage out".
