
FDA's AI Tool Elsa: A Double-Edged Sword in Drug Approvals

The FDA’s AI tool Elsa aims to revolutionize drug approvals, but it’s facing skepticism and reliability issues. Discover the hidden costs and potential risks...

July 24, 2025
By Visive.ai Team

Key Takeaways

  • Elsa, the FDA's AI tool, is intended to streamline drug approvals but faces significant reliability issues.
  • Current and former FDA officials express concerns about AI 'hallucinations' and data misrepresentation.
  • The tool is useful for administrative tasks but remains unreliable for critical regulatory work.
  • The gap between public promises and internal realities highlights the challenges in AI adoption in healthcare.

The Promise and Peril of FDA’s AI Tool Elsa

The U.S. Food and Drug Administration (FDA) has unveiled Elsa, an artificial intelligence (AI) tool designed to revolutionize the drug approval process. The tool is part of a broader initiative by the Department of Health and Human Services (HHS) to leverage AI for improved efficiency and accuracy in healthcare regulation. However, behind the scenes, the reality is far more complex and concerning.

The HHS Vision for AI in Healthcare

Health and Human Services Secretary Robert F. Kennedy Jr. has been a vocal proponent of AI in healthcare. In recent congressional hearings, he declared, “The AI revolution has arrived,” emphasizing the technology’s potential to manage healthcare data securely and expedite drug approvals. The enthusiasm is palpable, with the FDA positioning Elsa as a key tool to streamline the approval process and reduce bureaucratic delays.

Elsa’s Administrative Capabilities

According to six current and former FDA officials who spoke on the condition of anonymity, Elsa has shown promise in administrative tasks. It is useful for generating meeting notes, summaries, and email templates, which can significantly reduce the workload for FDA staff. The problems begin, however, when the tool is applied to critical regulatory work.

The Dark Side of AI Hallucinations

According to three current FDA employees and internal documents reviewed by CNN, Elsa has a troubling tendency to 'hallucinate': it invents nonexistent studies and misrepresents real research. This failure mode, known as AI hallucination, makes the tool unreliable for the most critical aspects of drug and medical device approvals. One FDA employee put it bluntly: “Anything that you don’t have time to double-check is unreliable. It hallucinates confidently.”

Examples of Hallucinations:

  1. Nonexistent Studies: Elsa has cited studies that do not exist, creating potential delays and errors in the approval process (see the verification sketch after this list).
  2. Misrepresented Data: The tool has mischaracterized existing research, which can have serious implications for the safety and efficacy of approved drugs.
  3. Inconsistent Results: Running the same query over the same data can produce different answers from one run to the next, further eroding trust in the tool’s reliability.
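The first failure mode, invented citations, is at least partially machine-checkable. Below is a minimal sketch of that idea: extracted citation titles are looked up in PubMed via NCBI's public E-utilities API, and titles with zero matches are flagged for a human reviewer. This is an illustrative heuristic, not the FDA's actual workflow; it assumes citations have already been extracted as plain title strings, and paraphrased titles or non-indexed journals will produce false alarms, so a flag means "check this", not "this is fake".

```python
"""Sketch: flag AI-cited study titles that cannot be found in PubMed.

Assumes citation titles were already extracted from the model's output.
Every flag still needs a human reviewer; a zero-hit title may simply be
paraphrased or published in a venue PubMed does not index.
"""
import time
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(title: str) -> int:
    """Return how many PubMed records match the title as a [Title] query."""
    params = {
        "db": "pubmed",
        "term": f"{title}[Title]",
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=10)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

def flag_suspect_citations(titles: list[str]) -> list[str]:
    """Titles with zero PubMed matches are candidate hallucinations."""
    suspects = []
    for title in titles:
        if pubmed_hit_count(title) == 0:
            suspects.append(title)
        time.sleep(0.4)  # stay under NCBI's ~3 requests/second guidance
    return suspects

if __name__ == "__main__":
    cited = ["A Study Title The Model May Have Invented Entirely"]
    for t in flag_suspect_citations(cited):
        print(f"Not found in PubMed, route to human review: {t}")
```

Even a crude check like this illustrates the reviewers' point: the cost of AI-generated text is the verification work it creates downstream.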

The Gap Between Public Promises and Internal Realities

The discrepancy between the public promises made by HHS and the internal concerns raised by FDA officials highlights the challenges in AI adoption in healthcare. While the technology holds immense potential, it is clear that significant hurdles remain. These include:

  • Data Quality: Ensuring the accuracy and reliability of the data fed into AI models is crucial for their effectiveness.
  • Transparency: There is a need for greater transparency in how AI tools like Elsa are developed and tested.
  • Human Oversight: The importance of human oversight in critical regulatory tasks cannot be overstated (a minimal gating sketch follows this list).
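One simple way to operationalize human oversight against the inconsistency problem described above is a self-consistency gate: run the same prompt several times and auto-accept only answers that remain stable, escalating everything else to a person. The sketch below assumes a generic `ask_model` callable standing in for whatever generative backend is used; it is a hypothetical placeholder, not Elsa's real interface.

```python
"""Sketch: a human-oversight gate based on self-consistency.

`ask_model` is an assumed placeholder for the generative backend.
Answers that vary across repeated runs are never auto-accepted;
they are escalated to a human reviewer.
"""
from collections import Counter
from typing import Callable

def consistency_gate(
    ask_model: Callable[[str], str],
    prompt: str,
    runs: int = 5,
    agreement: float = 0.8,
) -> tuple[str, bool]:
    """Run the same prompt several times; auto-accept only on strong agreement.

    Returns (most_common_answer, needs_human_review).
    """
    answers = [ask_model(prompt).strip() for _ in range(runs)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    if top_count / runs >= agreement:
        return top_answer, False  # stable output, still subject to spot checks
    return top_answer, True       # unstable output, escalate to a human

if __name__ == "__main__":
    # Stub backend simulating an inconsistent model.
    import random
    stub = lambda p: random.choice(
        ["approved in 2019", "approved in 2019", "approved in 2021"]
    )
    answer, escalate = consistency_gate(stub, "When was drug X approved?")
    print(answer, "-> human review" if escalate else "-> auto-accept")
```

Agreement across runs does not guarantee correctness; as the employee quote above notes, a model can hallucinate confidently and consistently. A gate like this is a triage mechanism, not a substitute for expert review.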

The Bottom Line

While the FDA’s AI tool Elsa has shown promise in administrative tasks, its reliability issues in critical regulatory work raise serious concerns. The gap between public promises and internal realities underscores the need for a balanced approach to AI adoption in healthcare. As the technology continues to evolve, it is essential to prioritize data quality, transparency, and human oversight to ensure that AI tools like Elsa truly serve the public interest.

Frequently Asked Questions

What is Elsa, and what is its primary function?

Elsa is an AI tool developed by the FDA to help streamline drug and medical device approvals. In practice, staff report it is most useful for administrative work such as generating meeting notes, summaries, and email templates.

What are the main concerns with Elsa’s reliability?

Elsa has a tendency to 'hallucinate': it creates nonexistent studies and misrepresents research, which makes it unreliable for critical regulatory work such as drug approvals.

How do FDA officials view Elsa’s performance?

While Elsa is useful for administrative tasks, current and former FDA officials are concerned about its reliability in critical regulatory work, citing issues with data misrepresentation and AI hallucinations.

What are the key challenges in adopting AI in healthcare regulation?

The key challenges include ensuring data quality, maintaining transparency in AI development and testing, and providing adequate human oversight in critical regulatory tasks.

What is the future of AI in healthcare regulation?

The future of AI in healthcare regulation depends on addressing reliability issues, ensuring data quality, and maintaining transparency. Balancing technological advancements with human oversight is crucial for successful adoption.