AMD’s AI director is not happy with Anthropic’s Claude Code; issue raised on GitHub reads: Claude cannot be trusted to perform complex engineering tasks, every senior engineer in my team … – The Times of India


Claude Code’s performance is no longer impressing Stella Laurenzo. AMD’s AI chief wrote from her “stellaraccident” GitHub account that the tool “has regressed to the point it cannot be trusted to perform complex engineering.” Her conclusions draw on internal analysis of more than 6,800 coding sessions, nearly 235,000 tool calls, and close to 18,000 reasoning blocks. She said multiple engineers on her team have reported similar issues, pointing to a rise in “stop-hook violations,” where the model exits tasks early or requests unnecessary permissions.

“Every senior engineer on my team has reported similar experiences/anecdotes,” Laurenzo wrote, adding that stop-hook violations climbed from zero to around 10 per day last month. She linked the decline to the rollout of thinking redaction (redact-thinking-2026-02-12), arguing that extended reasoning can be “load-bearing” for complex engineering workflows.

She also noted a behavioural shift in Claude Code from a research-first to an edit-first approach, which she said led to lower-quality code, weaker adherence to conventions, and reduced reliability during longer sessions.

What Anthropic said in response to AMD AI chief’s Claude usage concerns

Anthropic engineer Boris Cherny responded to the claims, stating that the redact-thinking setting only hides reasoning from the interface and does not reduce the model’s actual reasoning.

The company also pointed to the introduction of adaptive thinking in Claude Opus 4.6, where the system determines how long to think depending on the task. “Some people want the model to think for longer, even if it takes more time and tokens. To improve intelligence more, set effort=high via `/effort` or in your settings.json,” he wrote.

Anthropic added that while the default medium effort setting (effort=85) balances performance and efficiency, it is testing higher effort configurations for Teams and Enterprise users so they can “benefit from extended thinking even if it comes at the cost of additional tokens & latency.” Responding to Laurenzo’s analysis, Cherny also noted, “I appreciate the depth of thinking & care that went into this.”
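Per Cherny’s description, the effort level can reportedly be raised either with the `/effort` command inside a Claude Code session or via the settings.json file. A minimal sketch of the latter, assuming `effort` is a top-level key taking the value `"high"` (the exact key name and file location are not confirmed in the article):

```json
{
  "effort": "high"
}
```

Per Anthropic’s stated trade-off, raising the effort setting would increase token usage and latency in exchange for longer extended thinking.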
