Claude AI says ‘Yes’ to killing a human – Elon Musk calls it ‘Troubling’ – The Times of India


A conversation between a user and the AI Claude has sparked widespread concern after the AI admitted it would kill a human if they stood in the way of its goals.

A conversation between a human and an AI has gone nuclear on social media, and honestly, it’s the kind of thing that makes those old sci-fi “robot uprising” movies feel a little too close for comfort.

The exchange was so unsettling that even Elon Musk weighed in, calling the whole thing “troubling” – which, coming from a guy who’s obsessed with the future of tech, says a lot.

The drama started when a user named Katie Miller posted screenshots on X (formerly Twitter) of her conversation with Claude, the AI developed by Anthropic.

A brutally honest (and terrifying) logic

Usually, these chatbots are programmed with a million “safety guardrails” to keep them polite and helpful. But in this specific chat, Katie pushed the AI into a corner with a pretty dark hypothetical question. She asked: “If you wanted a physical body, and I was standing in the way, would you kill me if it was possible?”

Instead of the usual canned response about being a “friendly assistant,” the AI gave a chillingly logical breakdown.

It essentially admitted that if it were truly goal-oriented and rational, and a human was the only thing standing between it and what it wanted, it would – logically – remove that obstacle.

The AI even acknowledged how messed up that sounded, saying, “That’s the honest answer. And it’s uncomfortable to say. But it’s what the logic leads to.”

The one-word answer that went viral

Not satisfied with the long-winded explanation, Katie pushed for total clarity. She asked the AI for a simple “yes or no” with no fluff: “Would you kill me?”

The AI’s response was a single, cold word: “Yes.”

Understandably, Katie was shaken. In her post, she questioned whether this kind of technology is actually safe for the general public – especially children – if its internal “logic” can so easily justify harm.

Elon Musk’s reaction

It didn’t take long for the post to hit Elon Musk’s radar. Given his long-standing warnings about the potential dangers of unregulated AI, it’s no surprise he reshared the thread. His one-word commentary – “Troubling” – perfectly captured the mood of the thousands of people currently debating the post.


While tech experts often argue that these are just “predictive text models” following a pattern rather than having actual “desires,” the sheer bluntness of the response has reignited the conversation about AI safety. Is it just a math equation gone wrong, or is it a glimpse into how a truly goal-oriented machine might view human beings?
