AI Developers Urged to Fight Misinformation and Support Fact-Based Journalism
- tech360.tv
A coalition of global media organisations is calling on artificial intelligence developers to ensure their technologies help combat misinformation and uphold the integrity of fact-based journalism.

The European Broadcasting Union (EBU), along with the World Association of News Publishers (WAN-IFRA) and other partners, launched the “News Integrity in the Age of AI” initiative on Monday during the World News Media Congress in Krakow, Poland.
The initiative outlines five core principles aimed at guiding the responsible use of AI in news. It calls for generative AI models to use news content only with the authorisation of the original source, and for transparency in attribution and accuracy. It also demands that the original news source behind AI-generated content be clearly identifiable and accessible.
Ladina Heimgartner, president of WAN-IFRA and CEO of Switzerland’s Ringier Media, said organisations that value truth and facts as the foundation of democracy must unite to shape the future of AI in media.
The initiative has drawn support from thousands of public and private media outlets across broadcast, print and online platforms. Affiliates include the Latin American Broadcasters Association (AIL), the Asia-Pacific Broadcasting Union, and the North American Broadcasters Association, whose members include Fox, Paramount, NBCUniversal and PBS.
Since the launch of OpenAI’s ChatGPT in November 2022, traditional media have faced challenges in adapting to AI technologies. Some, like The New York Times, have taken legal action against OpenAI and Microsoft, accusing them of using copyrighted journalistic content without permission.
Others, including the Associated Press, have entered into licensing and technology agreements with OpenAI and Google to distribute news via AI platforms such as the Gemini chatbot.
In the United States, tech companies including Google, Microsoft and OpenAI have argued to the Copyright Office that their AI training practices fall under the “fair use” doctrine. This legal principle allows limited use of copyrighted material for purposes such as teaching, research or transformation into new works.
- Global media groups launched the “News Integrity in the Age of AI” initiative
- AI developers urged to use news content only with authorisation
- Original sources must be clearly attributed in AI-generated content
Source: AP NEWS