Google is conducting trials of Med-PaLM 2, its medical AI platform designed to answer healthcare-related questions in clinical settings. The company recently began testing the system with clients, including the Mayo Clinic.
Med-PaLM 2’s capabilities extend beyond answering healthcare questions: the AI can also summarize documents and structure large amounts of health data. It could deliver cutting-edge advice anywhere with an internet connection, including remote regions and areas where doctors are scarce.
In an email viewed by The Wall Street Journal, Google mentioned to its employees that an AI model serving as a trusted medical assistant could be “of tremendous value in countries that have more limited access to doctors.”
Medical experts and ethicists warn that while AI can revolutionize medicine, data protection and usage remain critical issues.
Google has already been criticized for how it manages sensitive health data via its partnerships with hospitals.
Some health and ethical questions remain unanswered
Generative AI tools present new risks as they can generate authoritative-sounding responses to medical inquiries, potentially influencing patients in ways not endorsed by doctors.
However, Google has assured that Med-PaLM 2 test clients retain control over their data, which is kept in encrypted environments inaccessible to Google, and that the program will not train on any user data.
Google already has competitors in this space. For instance, Microsoft partnered with Epic, a health software company, in April to develop tools that can auto-compose messages to patients using ChatGPT’s algorithms.
Greg Corrado, a senior research director at Google working on Med-PaLM 2, stated that the AI is a work in progress.
“I don’t feel that this kind of technology is yet at a place where I would want it in my family’s healthcare journey,” Corrado said. However, Med-PaLM 2 “takes the places in healthcare where AI can be beneficial and expands them by 10-fold,” he said.
Physicians and healthcare executives believe programs like Med-PaLM 2 need further development and rigorous testing before they are used to diagnose patients and suggest treatments.
According to a Google paper published in May, physicians who examined 1,000 answers provided by Med-PaLM 2 favored the AI system’s responses over those of doctors in eight of nine evaluation categories.
However, doctors also found that Med-PaLM 2 included some inaccurate or irrelevant content in its responses, suggesting it shares a flaw common to other chatbots: confidently generating off-topic or false statements.
Kellie Owens, a medical ethicist at NYU Grossman School of Medicine, stressed the need for patients to be educated about new ways AI tools use their health data.
Ideally, “These conversations need to be human-to-human” between patients and medical staff, Owens said.