AI-Powered Transcription Tool in Hospitals Accused of Fabricating Speech
OpenAI's Whisper AI-powered transcription tool has been accused of fabricating text, including racial commentary and violent rhetoric. Concerns have been raised over the use of Whisper in medical settings despite warnings about its reliability. Researchers and experts are calling for AI regulation and for improvements to address hallucinations in transcription tools.

Despite being praised for its accuracy, Whisper has been found by software engineers and researchers to fabricate sentences that were never spoken, a phenomenon known as "hallucination."
Whisper's flaws have raised concerns as it is widely used across various industries for tasks such as transcribing interviews, generating text, and creating video subtitles. Medical centres have rushed to adopt Whisper-based tools for transcribing patient consultations, despite warnings from OpenAI about its use in high-risk domains.
Researchers have encountered hallucinations in a significant share of the audio transcriptions they examined, including transcripts of public meetings, short audio samples, and even a review of more than 13,000 clear audio snippets, where well-recorded speech should have posed little challenge. At that rate, the errors could add up to thousands of inaccurate transcriptions, with serious consequences in medical settings in particular.
Alondra Nelson, a former White House official, warned that such errors could lead to misdiagnosis and stressed the need for a higher standard for transcription tools used in critical environments. Whisper is also used to create closed captioning for Deaf and hard-of-hearing viewers, a population particularly exposed to faulty transcriptions because they have no way of spotting fabrications buried in otherwise accurate text.
Experts and advocates have called for AI regulation and urged OpenAI to address the flaw. Former OpenAI employees have also expressed concern about the tool being built into other systems without adequate safeguards. OpenAI has acknowledged the issue and says it is working to reduce hallucinations through model updates.
While errors in transcription tools are not uncommon, Whisper's tendency to hallucinate stands out among AI-powered tools. Even so, Whisper is built into Oracle's and Microsoft's cloud computing platforms, has been downloaded millions of times, and is used in call centres and voice assistants.

Researchers who examined the hallucinations found that a significant portion were harmful or concerning because they could cause speakers to be misinterpreted or misrepresented. The fabricated text tends to appear during pauses in speech or amid background noise, leading to inaccurate transcriptions.
Medical centres, including the Mankato Clinic in Minnesota and Children's Hospital Los Angeles, have adopted Whisper-based tools to transcribe patient consultations. Concerns persist about the accuracy of these transcriptions, however, particularly given the confidential nature of doctor-patient conversations.
Source: AP NEWS