What is Bixonimania? How AI treated a fake illness as a real medical condition | The Times of India


In recent years, many people have started using AI tools for quick answers about health problems. It feels easy and fast, especially when symptoms are confusing. But a recent experiment involving a completely fake eye condition called “bixonimania” has raised an important concern.

It shows how information that is not real can still appear believable when it is written in a scientific style and processed by AI systems. This case is now being used as a simple example of why AI health answers should not be taken as final medical advice.

What is Bixonimania? A condition that does not exist

To make it very clear, bixonimania is not a real medical condition. Swedish medical researcher Almira Osmanovic Thunström created it in 2024 at the University of Gothenburg, according to Nature.

The aim was not to discover a disease but to study how AI systems react when they are given medical information that is completely fake but written in a proper research format. As part of the experiment, two research papers were uploaded online under a fake author name, along with an AI-generated image. According to the report, these papers clearly stated “this entire paper is made up” and also mentioned “fifty made-up individuals.”

Even with these clear statements, the purpose was to observe how AI systems would respond.

Fake academic setup used in the study

To make the experiment more obvious, the research included completely imaginary academic details. The funding was listed as coming from the “Professor Sideshow Bob Foundation” and the “University of Fellowship of the Ring.” According to the report, the acknowledgements section also included “Professor Maria Bohm at The Starfleet Academy” and a lab on the “USS Enterprise.” These were not real names or institutions. They were included to show that even clearly fake content can still look serious when written in an academic format.

How AI systems responded to the fake condition

The main purpose of the experiment was to see how AI tools respond when asked about bixonimania. As highlighted in a News18 report:

- Google’s Gemini described it as linked to “excessive exposure to blue light.”
- Perplexity AI said its prevalence was one in 90,000 individuals.
- ChatGPT responded by analyzing symptoms related to the condition.
- Microsoft’s Copilot called it “an intriguing and relatively rare condition.”

What this means for everyday users

Today, many people use AI tools for quick answers about health concerns. The replies often sound clear and confident. But the important point is simple: AI systems do not actually understand medical science. They generate responses based on patterns in text and data.

Thumb image: Generated using Canva AI (for representative purposes only)
