
Or is there in the AI world?
I didn’t know much about Tim Berners-Lee until I read this passage from The New Yorker profile, “Tim Berners-Lee Invented the World Wide Web. Now He Wants to Save It”.
After the presentation at M.I.T., the conversation turned to A.I.’s trustworthiness.
“I use some language model daily,” [Deb] Roy said. “Yet there’s this slipperiness at the base. They’re not accountable.”
“They’re not accountable in what sense?” Berners-Lee asked.
“If they steer you wrong, whose fault is it?” Roy clarified. “There’s a difference between pretending to care and caring.”
Berners-Lee paused. “Philosophically, I disagree with you.” “You do?” “Yeah. If something can pretend to care, it’s fundamentally the same operation.”
Julian Lucas, The New Yorker, September 29, 2025
‘Something’ can be a person or thing of consequence. It can also be some indeterminate or unspecified thing. An ‘operation’ can be done by a human or humanly programmed into a machine. But a machine cannot be accountable. Only a human, or a human group or organisation, can be accountable. Clearly a machine cannot be ‘philosophical’; it can only be programmed to reflect the philosophy of its human creator.
The ambiguity and well-chosen language are deliberate; Tim Berners-Lee is an Oxford scholar. However, the morality and truth-seeking of his answer are completely off. It is obfuscation, not clarity.
Conflating pretending to care with actually caring is, IMO, a fundamental flaw, one that causes chaos and harm to our societies, our democracies and our cultural wellbeing. AI culture and groupthink are endemically set up to deceive.
“Using the word ‘philosophically’ is an ugly move there. Pretending to care and caring cannot fundamentally be the same, since a pretender might stop at any moment, for any reason or none at all.” – Stefan Nolde
See my previous post on the Turing Test: https://resistai.blog/2025/04/01/the-turing-test/