How AI pales in the face of human intelligence and ingenuity | Letters

TruthLens AI Suggested Headline:

"Critique Highlights Limitations of AI Compared to Human Intelligence"

AI Analysis Average Score: 7.6
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In a recent exchange on the limitations of artificial intelligence (AI), Gary Marcus's point that merely increasing computational power will not resolve the inherent problems of generative AI is taken as the starting point. The first letter writer highlights a fundamental distinction between human intelligence and AI capabilities, illustrated by the Tower of Hanoi puzzle, which a seven-year-old can solve but which defeats current AI models. She argues that humans are embodied beings who interact with the world using all their senses from birth, allowing them to develop an intuitive model of their environment. This embodied intelligence equips humans to infer general truths from limited examples, a task that remains challenging for AI systems: while a model may require tens of thousands of images to recognize a cat, a child can identify any cat after seeing just two or three, demonstrating the efficiency and adaptability of human learning compared with AI's resource-intensive requirements.

Furthermore, the letters point out the significant energy demands of AI technologies, especially in the context of climate change. The disparity in power consumption between humans and AI systems is stark: the computers driving an autonomous vehicle may draw over a kilowatt, while a human driver runs on roughly twenty watts. This comparison underlines the economic and ecological advantages of human intelligence, which is not only versatile and creative but also inherently energy-efficient. A second letter writer critiques the current state of AI reasoning, noting that models like ChatGPT exhibit fundamental limitations in logical reasoning and rely on brute force and pattern recognition rather than genuine reasoning. The term artificial narrow intelligence (ANI) is offered to describe systems that excel at tasks like summarization and rewording but fall short of true reasoning. Together, the letters encourage a reconsideration of the value of human intelligence, especially in light of AI's limitations and its environmental impact.


Unanalyzed Article Content

Gary Marcus is right to point out – as many of us have for years – that just scaling up compute size is not going to solve the problems of generative artificial intelligence (When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype, 10 June). But he doesn’t address the real reason why a child of seven can solve the Tower of Hanoi puzzle that broke the computers: we’re embodied animals and we live in the world.
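For reference, the Tower of Hanoi has a well-known recursive solution that fits in a few lines. The Python sketch below is an illustration added here, not part of the letter; the function and peg names are arbitrary.

```python
def hanoi(n, source, target, spare):
    """Print the moves that transfer n discs from source to target.

    Classic recursion: move n-1 discs out of the way, move the largest disc,
    then move the n-1 discs back on top. Takes 2**n - 1 moves in total.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disc {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")  # the three-disc version a seven-year-old can solve
```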

All living things are born to explore, and we do so with all our senses, from birth. That gives us a model of the world and everything in it. We can infer general truths from a few instances, which no computer can do.

A simple example: to teach a large language model “cat”, you have to show it tens of thousands of individual images of cats – being the way they are, they may be up a tree, in a box, or hiding in a roll of carpet. And even then, if it comes upon a cat playing with a bath plug, it may fail to recognise it as a cat.

A human child can be shown two or three cats, and from interacting with them, it will recognise any cat as a cat, for life.

Apart from anything else, this embodied, evolved intelligence makes us incredibly energy-efficient compared with a computer. The computers that drive an autonomous car use anything upwards of a kilowatt of power, while a human driver runs on twentysomething watts of renewable power – and we don’t need an extra bacon sandwich to remember a new route.
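Taking the letter's rough figures at face value, the gap is easy to quantify; the snippet below is a back-of-envelope illustration using the numbers cited above, not measured data.

```python
# Back-of-envelope comparison using the letter's rough figures (not measurements).
car_compute_watts = 1000  # "anything upwards of a kilowatt" for autonomous-driving compute
human_watts = 20          # "twentysomething watts" for a human driver

ratio = car_compute_watts / human_watts
print(f"The car's compute draws roughly {ratio:.0f}x the power of a human driver.")
# -> The car's compute draws roughly 50x the power of a human driver.
```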

At a time of climate emergency, the vast energy demands of this industry might perhaps lead us to recognise, and value, the extraordinary economy, versatility, plasticity, ingenuity and creativity of human intelligence – qualities that we all have simply by virtue of being alive.
Sheila Hayman
Advisory board member, Minderoo Centre for Technology & Democracy, Cambridge University

It comes as no surprise to me that Apple researchers have found “fundamental limitations” in cutting-edge artificial intelligence models (Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds, 9 June). AI in the form of large reasoning models or large language models (LLMs) is far from being able to “reason”. This can be simply tested by asking ChatGPT or similar: “If 9 plus 10 is 18, what is 18 less 10?” The response today was 8. Other times, I’ve found that it provided no definitive answer.
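For anyone who wants to repeat the test programmatically rather than through the chat interface, a minimal sketch using the OpenAI Python client is shown below; the model name is an assumption, and responses will vary by model and run.

```python
# Minimal sketch of posing the same trick question via the OpenAI Python client
# (openai >= 1.0). The model name is an assumption; substitute whichever is available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "If 9 plus 10 is 18, what is 18 less 10?"}],
)
print(response.choices[0].message.content)
```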

This highlights that AI does not reason – currently, it is a combination of brute force and logic routines to essentially reduce the brute force approach. A term that should be given more publicity is ANI – artificial narrow intelligence, which describes systems like ChatGPT that are excellent at summarising pertinent information and rewording sentences, but are far from being able to reason.

But note, the more times that LLMs are asked similar questions, the more likely they are to provide a reasonable response. Again, though, this is not reasoning, it is model training.
Graham Taylor
Mona Vale, New South Wales, Australia

Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.

Source: The Guardian