By Kyle Chua

CNET Issues Corrections on More Than Half of Articles Written by AI Tool

CNET’s use of artificial intelligence (AI) to write articles seemingly didn’t go according to plan.

Credit: Possessed Photography via Unsplash

The tech news outlet has issued corrections on more than half of the articles it claims were written by an internal AI tool as part of a test project.


Of the 77 AI-written articles published since November last year, 41 reportedly have corrections. Some, according to CNET editor-in-chief Connie Guglielmo, required “substantial correction,” while several others had only “minor issues,” such as incomplete company names, transposed numbers and vague language.


There were also articles that supposedly contained phrases that weren’t “entirely original,” suggesting the AI tool may have plagiarised from other sources. CNET explains that in those cases the editor assigned to the article may have either overlooked the plagiarised phrases or failed to use the plagiarism tool properly.


Engadget notes that AI software isn’t yet advanced enough to be aware of when it’s plagiarising something. That means that if anyone is to blame for the plagiarised phrases in the articles, it’s CNET’s editors.

A sample of CNET's AI-written articles. Credit: CNET

Guglielmo further said that articles with corrections now include an editors’ note explaining what was changed.


She also said that CNET is pausing the use of the AI tool for now, but could use it again at a later date. “We’ve paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors.”


CNET took flak from industry professionals and social media users alike after Futurism broke the story that it was quietly using an AI tool to write articles. The outlet later confirmed the report, saying the tool was being used to help put out explainer articles on finance-related topics. The Verge also notes that the tool was being used to improve the search engine optimisation (SEO) of the articles so that they could include more affiliate ads.

 

