
“When a chat turns into a statement”: OpenAI scans ChatGPT conversations and may contact the police — what this means for the privacy of Ukrainians


OpenAI has officially explained that chats flagged for signs of planning to harm others go through a separate moderation route: they are reviewed by a small team that can ban the account and refer the case to law enforcement if there is an "imminent threat of serious physical harm." This is stated in a recent clarification from the company.


The news was picked up by the media: Futurism describes the clarification as a more explicit statement that OpenAI scans conversations and can notify the police, and this has already caused a wave of reactions from users.


What is the legal basis for this?

In its global privacy policy, OpenAI explicitly states that it may share personal data with government agencies and other third parties: to comply with the law, prevent fraud, protect the safety of users and the public, and in the event of violations of the terms of use.

Separately, OpenAI points to its usage policies, which prohibit illegal activity, violations of others' privacy, and similar conduct. These policies provide the grounds for content moderation and for data transfer in the event of threats.


What exactly is being “scanned” and when do people intervene?

According to the company, automatic risk signals (topics of violence, planning to harm others, and so on) direct a conversation into a separate pipeline; human reviewers then assess the context and decide on a response, from a warning or account block up to notifying the police in the event of a real and imminent threat.
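
For readers who want to see what such automated routing can look like in practice, here is a minimal, hypothetical sketch built on OpenAI's public Moderation API. It illustrates the general mechanism only; it is not OpenAI's internal pipeline, and the routing labels are assumptions.

```python
# Illustration only: a hypothetical pre-screening step using OpenAI's public
# Moderation endpoint. It is NOT OpenAI's internal review pipeline; it simply
# shows how automated risk signals can route a message toward human review.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def route_message(text: str) -> str:
    """Return a routing decision: 'allow' or 'escalate_to_human_review'."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    if result.flagged:
        # List which categories were flagged (violence, harassment, self-harm, ...).
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        # In a real system a human reviewer would now assess the context.
        return "escalate_to_human_review: " + ", ".join(hits)
    return "allow"


if __name__ == "__main__":
    print(route_message("Let's meet for coffee tomorrow."))
```

The design point from the clarification is preserved here: flagging is automated, but the escalation decision is meant to land with a human reviewer.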

An important detail for trust: OpenAI also publicly discusses privacy enhancements (for example, options for encrypting temporary chats), but it acknowledges the technical and legal complexities, as well as the pressure of lawsuits over data retention (as reported by Axios and TechRadar).

Why is this controversial?

Critics note that chatbots often collect highly sensitive information (therapy, legal issues) while lacking doctor-patient or lawyer-client levels of confidentiality. Without encryption and clear access limits, such services can become part of a "surveillance" infrastructure.


How does this relate to user rights in Ukraine and the EU?

  • The GDPR and the Ukrainian law on personal data protection (very similar in spirit) allow processing and transfer of data on a legal basis (legal obligation, vital interests, legitimate interests). Transfer "for the protection of life/safety" is a typical legal basis, but it requires proportionality and data minimization.

  • Users have rights of access, deletion, and restriction of processing; however, in the event of an investigation or legal obligation, these rights may be temporarily limited. (This section is general information, not legal advice.)


What really changes for the average user

  • Open threats to other people in the chat are now more likely to be subject to human moderation and may be passed on to law enforcement. This is an officially declared procedure.

  • OpenAI's policy has long included the ability to share data with government agencies as required by law; now the company has more clearly described the operational mechanism for such a route.


AI Academy Privacy Checklist (what to do today)

  • Do not share real plans in the chat that could be interpreted as a threat or illegal action — even "as a joke."

  • Enable enhanced privacy modes: temporary chats / history off (and, if available in your plan, Zero Data Retention). Check your corporate settings if you use it at work.

  • Minimize personal data: names, addresses, phone numbers, medical/legal details (see the redaction sketch after this checklist).

  • Local or on-prem solutions: for particularly sensitive cases, consider local models or contractual terms with the provider at the DPA/SCC level.

  • Stay tuned for updates on OpenAI policies (retention, encryption, exceptions for law enforcement).
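
To make the data-minimization point practical, here is a minimal sketch of pre-redacting obvious identifiers before a prompt leaves your machine. The patterns and placeholders are illustrative assumptions, not an exhaustive or production-grade scrubber.

```python
# Minimal data-minimization sketch: strip obvious identifiers before sending
# text to a chat service. Patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "[CARD]"),   # card-like numbers
    (re.compile(r"\+?\d[\d\s().-]{8,}\d"), "[PHONE]"),                    # phone numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),                 # e-mail addresses
]


def redact(text: str) -> str:
    """Replace identifiers with neutral placeholders, most specific patterns first."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


if __name__ == "__main__":
    print(redact("Call +38 093 000 00 00 or write to jane.doe@example.com about case #7."))
```

In practice you would tune the patterns to your own data (names, addresses, case numbers) and keep any mapping of placeholders to originals strictly on your side.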


A brief “data map” of OpenAI (as of today)


What | How/why | Transfer to the authorities
Metadata and interaction | security, abuse prevention | possible by law/threat
Chat content | moderation, service improvement (depends on settings) | in the event of an "imminent threat"
Temporary chats | short storage, separate pipelines | by threat/law
Enterprise | limited access, ZDR options | upon legal request

Sources: OpenAI's privacy policy, usage policies, and official clarifications.


Academy Conclusion

From now on, the procedure is transparently documented: if ChatGPT detects a real threat to others, the conversation may end up with law enforcement. This increases public safety, but it also raises the bar for privacy: clear boundaries, encryption, understandable retention, and independent audits of moderation are needed. Users need digital hygiene and thoughtful settings; businesses need compliance and contractual guarantees; the state needs proportionality rules and transparency of data access.
