Sam Altman, CEO and cofounder of OpenAI, testified on May 16, 2023, recommending that lawmakers pursue new safety requirements that would test AI products before they are released. Altman said, “There should be limits on what a deployed model is capable of and what it does.”
The healthcare sector should pay attention to Sam Altman’s statements on AI regulation. Despite their strong enthusiasm for AI modeling, those in the healthcare industry must proceed cautiously.
Here are examples of where ChatGPT can be particularly effective for healthcare.
Automation Of Patient Inquiries
Patients can communicate directly with their physicians, allowing them to seek feedback on their care or ask questions. However, this has produced an overwhelming influx of messages, making it challenging for physicians to respond promptly. This, in turn, can detract from the overall patient experience.
Deployed as a chatbot, ChatGPT can automate physicians’ replies to patients’ queries.
In a research study published in the Journal of the American Medical Association (JAMA), a group of certified healthcare professionals evaluated responses from both physicians and a chatbot to patient questions posted publicly on a social media forum. Notably, the evaluators preferred the chatbot’s responses over the physicians’, rating them significantly higher for both quality and empathy.
Automating responses could significantly improve physician satisfaction, particularly as the healthcare industry grapples with clinician burnout.
During a panel discussion with healthcare leaders on May 17, 2023, the panelists advocated using AI to generate automated responses to patient inquiries. However, they firmly stated that physicians must review and approve these messages before sending them.
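A minimal sketch of such a draft-then-review workflow in Python. The function and class names (`build_messages`, `DraftReply`, `send_if_approved`) are hypothetical, and the message format follows the common chat-completion request shape; the essential design choice, per the panelists, is that the model’s output is stored only as a draft that a physician must explicitly approve before anything reaches the patient:

```python
from dataclasses import dataclass

SYSTEM_PROMPT = (
    "You are drafting a reply to a patient's message on behalf of a physician. "
    "Be accurate, empathetic, and concise. A physician will review this draft."
)

@dataclass
class DraftReply:
    patient_message: str
    draft_text: str
    approved: bool = False  # flipped only by a reviewing physician, never by code

def build_messages(patient_message: str) -> list[dict]:
    """Assemble the chat request for the model (no network call here)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": patient_message},
    ]

def draft_reply(patient_message: str, model_output: str) -> DraftReply:
    """Wrap the model's output as an unapproved draft awaiting review."""
    return DraftReply(patient_message=patient_message, draft_text=model_output)

def send_if_approved(reply: DraftReply) -> bool:
    """Refuse to send any draft a physician has not approved."""
    if not reply.approved:
        return False
    # ...hand off to the patient-messaging system here...
    return True
```

In production, `model_output` would come from an actual chat-completion API call; the review gate around it is what makes the automation acceptable for patient communication.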
Enhanced Self-Service Data Insights
On April 17, 2023, Epic Systems and Microsoft revealed plans to integrate OpenAI’s GPT-4 language model into SlicerDicer to support natural language queries and data analysis. SlicerDicer, Epic’s self-service reporting tool, accesses the Electronic Medical Record (EMR) dataset, enabling physicians and operational leaders to discover trends beneficial for patient care or to pinpoint potential opportunities for revenue cycle improvement.
For example, a physician might pursue data insights on a specific patient group given a particular medication to determine whether this medication improved the patients’ well-being. Additionally, the organization’s financial leader could utilize this dataset to examine expected reimbursements and profit margins, detailed by financial class. This allows the revenue cycle team to investigate any underpayments from the insurance provider.
Healthcare Interpreter
ChatGPT’s real-time translation capability could transform medical interpretation in healthcare. Leveraging the system’s sophisticated language processing, it can swiftly and precisely translate complex medical terminology and jargon, helping ensure that patients fully understand their diagnosis, treatment alternatives, and medical instructions in their native language.
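As a rough sketch of how such an interpreter prompt might be assembled: the function below is hypothetical (only the request is built here, and any chat-completion API could consume it), but it illustrates the key instruction — preserve clinical meaning while restating jargon in plain terms:

```python
def build_interpreter_messages(text: str, target_language: str) -> list[dict]:
    """Construct a translation request that asks the model to preserve
    medical meaning exactly and explain jargon in plain language."""
    system = (
        f"You are a medical interpreter. Translate the physician's message into "
        f"{target_language}. Preserve the clinical meaning exactly, and restate "
        f"any medical jargon in plain language the patient can understand."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": text},
    ]
```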
However, even though ChatGPT can assist with translation and patient education, it does not replace trained human interpreters and educators, and we shouldn’t treat it as such.
The healthcare industry must address three main themes.
Healthcare AI Regulations
The use of AI in US healthcare is evolving rapidly and lacks full regulation. We must ensure that ChatGPT usage adheres to all pertinent laws and regulations. The healthcare sector remains one of the most stringently regulated worldwide, ranging from mandatory doctor licenses to equipment standards and rigorous clinical trials for new drugs.
US healthcare can adopt similar standards from the European Commission’s (EC’s) proposal, under which an independent notified body would ensure that an AI product meets general requirements. These include stating its intended purpose, verifying the AI’s accuracy, and confirming that the training data is reliable, representative, and sufficient.
Action Plan To Address Bias In Data
OpenAI trains ChatGPT on large amounts of data, which may contain biases. This can lead to the model making biased predictions, harming patient care. It is imperative to ensure that the data used to train the model is diverse and representative of the population it will serve.
Data Privacy And Security
One of the major concerns in using ChatGPT in healthcare lies in potential data breaches and unauthorized access to sensitive patient information. The current HIPAA guidelines do not cover AI and other emerging scenarios. Although AI technologies offer numerous benefits, they are evolving faster than regulatory agencies can keep up.
AI has exponential potential in healthcare, but without regulations, as recommended by Sam Altman, ChatGPT may follow IBM Watson’s trajectory, showing initial promise but eventually fading away.