Google is reportedly testing a medical AI chatbot designed to expertly answer medical questions, with the Mayo Clinic reportedly among those trialing the system. The Masters of the Universe getting involved in healthcare is deeply concerning for Americans who care about their privacy and medical freedom.
The Wall Street Journal reports that Google is planning to release Med-PaLM 2, a medical chatbot technology that has been trained using questions and answers from medical licensing exams. The technology is expected to outperform more general-purpose algorithms in holding conversations on healthcare issues. Google executives claim that Med-PaLM 2 can be used to summarize documents, organize health data, and generate responses to medical questions. According to the Journal, the chatbot is being tested by the Mayo Clinic.
The healthcare industry has emerged as a new battleground for big tech companies and startups, all vying to win customers with their AI offerings. However, concerns about data privacy and the accuracy of AI-generated responses are at the forefront of the medical AI race. In response to these concerns, Google has assured the public that customers testing Med-PaLM 2 will retain control of their data in encrypted settings inaccessible to Google. The program, the company says, won't ingest any customer data. Google has faced lawsuits in the past over its insatiable hunger for private medical data.
Microsoft, another major player in this space, has also been incorporating AI into patient interactions. The tech giant has partnered with health software company Epic to build tools that can automatically draft messages to patients using advanced AI algorithms. Both Google and Microsoft have expressed interest in building a virtual assistant that answers medical questions from patients worldwide, particularly in areas with limited resources.
Physicians who reviewed answers provided by Med-PaLM 2 preferred the system’s responses in eight out of nine categories. However, they found that Med-PaLM 2’s responses included more inaccurate or irrelevant content than answers written by doctors.
Read more at the Wall Street Journal here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan