AI can’t solve a problem that hasn’t been previously solved by a human.
—Arnaud Bertrand
A lot can be said about AI, but there are just a few bottom lines. Consider these my last words on the subject itself. (About its misuse by the national security state, I’ll say more later.)
The Monster AI
AI will bring nothing but harm. As I said earlier, AI is not just a disaster for our political health, though yes, it will be that (look for Cadwalladr’s line about “building a techno-authoritarian surveillance state”). But AI is also a disaster for the climate. It will hasten the collapse by decades as usage expands.
(See the video above for why AI models are massive energy hogs. See this video to understand “neural networks” themselves.)
Why won’t AI be stopped? Because the race for AI is not really a race for tech. It's a greed-driven race for money, a lot of it. Our lives are already run by those who seek money, especially those who already have too much. They've now found a way to feed themselves even faster: by convincing people to do simple searches with AI, a gas-guzzling death machine.
For both of these reasons — mass surveillance and climate disaster — no good will come from AI. Not one ounce.
An Orphan Robot, Abandoned to Raise Itself
Why does AI persist in making mistakes? I offer one answer below.
AI doesn’t think. It does something else instead. For a full explanation, read on.
Arnaud Bertrand on AI
Arnaud Bertrand has the best explanation of what AI is at its core. It’s not a thinking machine, and its output is not thought. It’s actually the opposite of thought — it’s what you get from a freshman who hasn’t studied but has learned a few terms and uses them to sound smart. If the student succeeds, you don’t call it thought, just a good emulation.
Since Bertrand has put the following text on Twitter, I’ll print it in full. The expanded version is a paid post at his Substack site. Bottom line: He’s exactly right. (In the title below, AGI means Artificial General Intelligence, the next step up from AI.)
Apple just killed the AGI myth
The hidden costs of humanity's most expensive delusion
by Arnaud Bertrand
About 2 months ago I was having an argument on Twitter with someone telling me they were “really disappointed with my take” and that I was “completely wrong” for saying that AI was “just an extremely gifted parrot that repeats what it’s been trained on” and that this wasn’t remotely intelligence.
Fast forward to today and the argument is now authoritatively settled: I was right, yeah! 🎉
How so? It was settled by none other than Apple, specifically their Machine Learning Research department, in a seminal research paper entitled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” that you can find here (https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf).
“Can ‘reasoning’ models reason?
Can they solve problems they haven’t been trained on? No.”
What does the paper say? Exactly what I was arguing: AI models, even the most cutting-edge Large Reasoning Models (LRMs), are no more than very gifted parrots with basically no actual reasoning capability.
They’re not “intelligent” in the slightest, at least not if you understand intelligence as involving genuine problem-solving instead of simply parroting what you’ve been told before without comprehending it.
That’s exactly what the Apple paper was trying to understand: can “reasoning” models actually reason? Can they solve problems that they haven’t been trained on but would normally be easily solvable with their “knowledge”? The answer, it turns out, is an unequivocal “no”.
A particularly damning example from the paper was this river crossing puzzle: imagine 3 people and their 3 agents need to cross a river using a small boat that can only carry 2 people at a time. The catch? A person can never be left alone with someone else's agent, and the boat can't cross empty - someone always has to row it back.
This is the kind of logic puzzle you might find in a children’s brain-teaser book - figure out the right sequence of trips to get everyone across the river. The solution only requires 11 steps.
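To see how mechanical this puzzle is, here is a minimal brute-force sketch in Python (not code from the Apple paper, just an illustration; the actor/agent labels and the exact constraint encoding are my own reading of the rules above). A plain breadth-first search over boat trips finds the 11-crossing solution almost instantly:

```python
from collections import deque
from itertools import combinations

N = 3  # three actor/agent pairs, as in the puzzle described above

def safe(bank):
    """A bank is safe if no actor A_i is with another pair's agent G_j
    unless their own agent G_i is also present."""
    for i in range(1, N + 1):
        if f"A{i}" in bank and f"G{i}" not in bank:
            if any(f"G{j}" in bank for j in range(1, N + 1) if j != i):
                return False
    return True

def solve():
    everyone = frozenset([f"A{i}" for i in range(1, N + 1)] +
                         [f"G{i}" for i in range(1, N + 1)])
    start = (everyone, frozenset(), "left")  # all people and the boat start on the left bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, right, boat), path = queue.popleft()
        if not left:                         # everyone has crossed
            return path
        here = left if boat == "left" else right
        for k in (1, 2):                     # the boat carries 1 or 2 people, never crosses empty
            for group in combinations(sorted(here), k):
                g = frozenset(group)
                if boat == "left":
                    new_left, new_right, new_boat = left - g, right | g, "right"
                else:
                    new_left, new_right, new_boat = left | g, right - g, "left"
                if not (safe(new_left) and safe(new_right)):
                    continue
                state = (new_left, new_right, new_boat)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [group]))
    return None

crossings = solve()
print(len(crossings))   # 11 boat trips
for trip in crossings:
    print(trip)
```

The search space here is tiny, a few hundred states at most, which is the point: nothing about the problem is hard except that its pattern is rare online.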
Turns out this simple brain teaser was impossible for Claude 3.7 Sonnet, one of the most advanced "reasoning" AIs, to solve. It couldn't even get past the 4th move before making illegal moves and breaking the rules.
Yet the exact same AI could flawlessly solve the Tower of Hanoi puzzle with 5 disks - a much more complex challenge requiring 31 perfect moves in sequence.
Why the massive difference? The Apple researchers figured it out: Tower of Hanoi is a classic computer science puzzle that appears all over the internet, so the AI had memorized thousands of examples during training. But a river crossing puzzle with 3 people? Apparently too rare online for the AI to have memorized the patterns.
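For comparison, the Tower of Hanoi sequence is just as mechanical: the 31 moves for 5 disks are 2^5 - 1, produced by a textbook recursion. A rough sketch, again my own illustration rather than anything from the paper:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the move list for n disks: move n-1 aside, move the largest, move n-1 back on top."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 disks back on top
    return moves

moves = hanoi(5)
print(len(moves))   # 31, i.e. 2**5 - 1 perfect moves in sequence
```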
This is all evidence that these models aren't reasoning at all. A truly reasoning system would recognize that both puzzles involve the same type of logical thinking (following rules and constraints), just with different scenarios. But since the AI never learned the river crossing pattern by heart, it was completely lost.
This wasn’t a question of compute either: the researchers gave the AI models unlimited token budgets to work with. But the really bizarre part is that for puzzles or questions they couldn’t solve - like the river crossing puzzle - the models actually started thinking less, not more; they used fewer tokens and gave up faster.
A human facing a tougher puzzle would typically spend more time thinking it through, but these 'reasoning' models did the opposite: they basically “understood” they had nothing to parrot, so they just gave up - hardly what you'd expect from genuine reasoning.
Conclusion: they’re indeed just gifted parrots, or incredibly sophisticated copy-paste machines, if you will.
This has profound implications for the AI future we’re all sold. Some good, some more worrying.
The first one being: no, AGI isn’t around the corner. This is all hype. In truth we’re still light-years away.
The good news about that is that we don’t need to be worried about having "AI overlords" anytime soon. The bad news is that we might potentially have trillions in misallocated capital. […]
AI won’t become conscious, but it will take entry-level positions in the workforce, which will further burden young people who are already locked out of home ownership and, I’d argue, out of having a family, building any sort of security like insurance, or saving for retirement, which low-income Gen Xers like me and my crew will tell you was BS from the start.
AI is a fancy search engine and illustration machine—but that’s enough to disrupt a LOT of careers.
Thomas, a most excellent book to READ: Sherry Turkle's 'Alone Together: Why We Expect More from Technology and Less from Each Other.' Basically, Turkle states the obvious: AI, or whatever you wish to call it, cannot cogitate, cannot think, and cannot advance itself. Its purpose is to end social intercourse between people, which we are losing quickly. The main thesis of my work 'Zen & The Art of Masturbation' is the story of going online and continually 'clicking the mouse' instead of going out and interacting with other people, something which is destroying us. Which is to say: those people out there are the "others"; put on the mask and social-distance for your health & safety.
One last thing (for now): reading books is vitally important, because you are required to think about the various problems throughout your read, something that can hardly be done by only reading, watching, or listening to even the best of independent news. In doing a graduate program (Master of Arts in Religion) I had to read a stack of books, some 5 or 6 thousand pages, analyze everything under the sun in those pages, and write a 5-PAGE paper concerning the various subjects in the program (Theology and Ethics). And most certainly, I didn't spend all that time and money for a career and/or job, hell no. I'm human, and as such, for me, being PERFECT is not doing everything CORRECT; it is being my authentic self, a person!