The American Medical Association is urging Congress to establish stronger safeguards for artificial intelligence in healthcare, warning that the rapid rise of mental health chatbots is outpacing the protections needed to keep patients safe.
In letters to congressional caucuses focused on artificial intelligence and digital health, the AMA said AI-enabled tools have the potential to expand access to mental health support and drive innovation in care delivery. But the group cautioned that growing use, particularly in sensitive mental health settings, has exposed gaps in oversight. It cited risks ranging from misinformation and emotional dependency to privacy breaches and, in some reported cases, chatbots providing harmful or inappropriate responses to users in distress.
AMA CEO John Whyte, MD, MPH, said that while AI can play a supportive role in care, it currently lacks consistent safeguards to prevent serious harm. Without clearer rules, he warned, the technology could erode patient trust even as adoption accelerates.
The AMA’s recommendations center on building a framework that keeps pace with the technology’s reach. That starts with transparency, ensuring users understand when they are interacting with AI rather than a human clinician, and drawing firm boundaries to prevent chatbots from presenting themselves as licensed professionals. The organization also called for clearer regulatory definitions around what these tools can and cannot do, arguing that mental health chatbots should not diagnose or treat conditions without appropriate review.
At the same time, the AMA emphasized the need for systems that can recognize crisis situations in real time. As these tools scale, it said, developers should be required to incorporate safeguards that detect signs of self-harm risk and direct users to appropriate human support, helping prevent dangerous gaps in care.
The push comes as policymakers and regulators continue to grapple with how to oversee AI in healthcare. The U.S. Food and Drug Administration has begun developing frameworks for AI-enabled medical technologies, but a comprehensive approach to mental health chatbots has yet to emerge. Other experts, including the American Psychological Association, have similarly raised concerns about accuracy, bias and overreliance on AI for emotional support.
At the same time, demand for digital mental health tools continues to grow, driven in part by persistent access challenges. The National Institute of Mental Health has documented ongoing shortages of providers and barriers to care, creating an environment where AI tools are increasingly filling gaps, even as questions about their safety and effectiveness remain.
The AMA stressed that its proposed safeguards are a starting point, not a limit, as the technology evolves. Ultimately, the group said, the goal is to ensure AI tools complement—not replace—clinical care, balancing innovation with the protections needed to maintain patient safety and public trust.