OpenAI Accuses New York Times of "Hacking" ChatGPT in Copyright Lawsuit

OpenAI accuses the New York Times of "hacking" its chatbot and AI systems in a copyright lawsuit. The New York Times denies the allegations and argues that it was using OpenAI's products to search for evidence of copyright infringement. The lawsuit accuses OpenAI and Microsoft of using millions of New York Times articles without permission to train chatbots.

OpenAI, the artificial intelligence research lab, has filed a motion to dismiss certain parts of the copyright lawsuit brought against it by the New York Times. In the filing submitted to a federal court in Manhattan on Monday, OpenAI claims that the newspaper "hacked" its chatbot, ChatGPT, and other AI systems to gather misleading evidence for the case.


According to OpenAI, the New York Times used "deceptive prompts" that violated OpenAI's terms of use to make the technology reproduce its copyrighted material. The research lab further alleges that the newspaper paid someone to manipulate its systems, although it did not disclose the identity of this individual or accuse the Times of any illegal hacking activities.


In response to OpenAI's claims, Ian Crosby, the attorney representing the New York Times, stated that the lab's characterization of the situation as "hacking" is inaccurate. He argued that the newspaper simply used OpenAI's products to search for evidence of copyright infringement.


The lawsuit, filed in December, accuses OpenAI and its major investor, Microsoft, of using millions of New York Times articles without permission to train chatbots and provide information to users. The Times is one of several copyright owners that have taken legal action against tech companies over the alleged misuse of their work in AI training.

Tech companies have defended their use of copyrighted material in AI systems, asserting that it falls under fair use. They argue that these lawsuits pose a threat to the growth of the multitrillion-dollar AI industry. However, courts have yet to determine whether AI training qualifies as fair use under copyright law.


The New York Times' complaint highlights instances where OpenAI and Microsoft chatbots provided users with near-verbatim excerpts of its articles upon request. The newspaper accuses the companies of attempting to benefit from its journalistic investment and create a substitute for its content.


OpenAI, in its filing, stated that it took the New York Times "tens of thousands of attempts to generate the highly anomalous results" it cites as evidence. The research lab also expressed confidence that AI companies, including itself, would ultimately prevail on the fair-use question.


As the legal battle between OpenAI and the New York Times continues, the outcome could have significant implications for the use of copyrighted material in AI training. The court's decision will shape the future of the industry and determine the boundaries of fair use in the context of artificial intelligence.


Source: REUTERS

