OpenAI Scans ChatGPT Conversations, Refers Harmful Content to Law Enforcement
- tech360.tv

- Aug 29, 2025
OpenAI now scans user conversations on ChatGPT and reports certain types of harmful content to law enforcement, according to a recent company blog post.

Conversations where users indicate plans to harm others are routed to specialised review teams. These teams, trained on OpenAI's usage policies, are authorised to take action, including banning accounts.
"If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement," the blog post stated.

OpenAI's usage policies prohibit using ChatGPT "to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorised activities that violate the security of any service or system."
However, OpenAI clarified that it is not currently referring self-harm cases to law enforcement. The company says this decision respects people's privacy, given the uniquely personal nature of ChatGPT interactions.
OpenAI CEO Sam Altman has previously cautioned that using ChatGPT for therapeutic or legal advice does not carry the same confidentiality as consulting a human professional.
Altman also mentioned that the company might be compelled to release chat data to courts due to an ongoing lawsuit.
OpenAI is involved in a lawsuit with the New York Times and other publishers, who are seeking access to ChatGPT logs to determine whether copyrighted material was used to train OpenAI's models.
OpenAI has resisted these requests, citing user privacy, and has sought to limit the volume of user chats handed over to the plaintiffs.
- OpenAI scans ChatGPT user conversations for harmful content.
- Content posing an imminent threat of serious physical harm to others may be referred to law enforcement.
- Self-harm cases are not referred to law enforcement due to privacy considerations.
Source: FUTURISM