OpenAI Scans ChatGPT Conversations, Refers Harmful Content to Law Enforcement

  • Writer: tech360.tv
  • Aug 29, 2025
  • 2 min read

OpenAI now scans user conversations on ChatGPT and may report certain types of harmful content to law enforcement, the company disclosed in a recent blog post.


[Image: ChatGPT interface with a "Search" button next to the "Message ChatGPT" field. Credit: OpenAI]

Conversations in which users indicate plans to harm others are routed to specialised review teams. These teams, trained on OpenAI's usage policies, are authorised to take action, including banning accounts.


"If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement," the blog post stated.


[Image: SearchGPT interface with a search bar asking "What are you looking for?". Credit: OpenAI]

OpenAI's usage policies prohibit using ChatGPT "to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorised activities that violate the security of any service or system."


However, OpenAI clarified that it is not currently referring self-harm cases to law enforcement, citing the uniquely private nature of ChatGPT interactions.


OpenAI CEO Sam Altman previously indicated that using ChatGPT for therapeutic or legal advice does not offer the same confidentiality as consulting a human professional.


Altman also mentioned that the company might be compelled to release chat data to courts due to an ongoing lawsuit.


OpenAI is currently being sued by the New York Times and other publishers, who are seeking access to ChatGPT logs to determine whether their copyrighted content was used to train OpenAI's models.


OpenAI has rejected these requests, citing user privacy, and has attempted to limit the volume of user chats provided to the plaintiffs.

  • OpenAI scans ChatGPT user conversations for harmful content.

  • Content posing an imminent threat of serious physical harm to others may be referred to law enforcement.

  • Self-harm cases are not referred to law enforcement due to privacy considerations.


Source: FUTURISM




