
A federal judge is being asked to force OpenAI to permanently cut off a single ChatGPT user—raising a hard question about public safety, corporate responsibility, and whether courts can order private companies to silence a person’s access to an AI “speaker.”
Quick Take
- A plaintiff in Doe v. OpenAI is seeking a temporary restraining order that would require OpenAI to permanently block a specific user from ChatGPT.
- Court filings describe a pattern of AI-assisted stalking, harassment, and alleged threats that followed OpenAI's reinstatement of the user's account after a safety flag and ban.
- The case collides with two competing realities: victims want immediate protection, while compelled speech limits could restrict a court’s power to order a private platform to cut off access.
- Separate reporting and advocacy warnings suggest AI “safety fixes” and data-retention mandates can create new risks, especially for abuse survivors and vulnerable users.
A narrowly targeted request, built around an urgent safety claim
A temporary restraining order request filed April 13, 2026, asks a federal court to order OpenAI to permanently terminate ChatGPT access for a specific user described as mentally ill and dangerous. According to court filings, OpenAI previously flagged the user for “Mass Casualty Weapons” activity, banned the account, then reinstated access after an appeal process. The plaintiff says the user then used ChatGPT outputs to harass, stalk, and threaten her.
The filings describe alleged conduct that goes beyond offensive speech and into targeted intimidation: AI-generated defamatory “reports” shared with people in the plaintiff’s network, spoofed communications, exposure of personal and medical information, and threats, including AI-encoded death threats. OpenAI had received warnings about the same account before the user was arrested. In the criminal system, the user was arrested on multiple felony counts tied to a bomb threat, found incompetent to stand trial, and then released due to a procedural failure.
OpenAI’s “suspend” position versus the plaintiff’s demand for a permanent cutoff
The core dispute is not whether OpenAI can suspend accounts—private services do that routinely—but whether a court should compel a permanent ban and related steps such as blocking new accounts and providing notifications. The plaintiff argues OpenAI’s internal back-and-forth—flagging, banning, then reinstating—shows the company understood the risk and still restored access. OpenAI offered only “suspension,” which the plaintiff views as unreliable because suspensions can be reversed.
This matters for a broader reason: when a company voluntarily limits a user, it looks like ordinary content moderation and platform governance. When a court orders a private platform to cut off a person’s access, the legal analysis can shift toward compelled restriction and First Amendment concerns. The case highlights how quickly tech policy can turn into constitutional policy—especially when a requested remedy is aimed at future speech rather than punishing past crimes.
The First Amendment dilemma: safety-driven injunctions can become compelled censorship
Legal analysis of the TRO request emphasized that a court-mandated cutoff could create a serious First Amendment problem, even if many Americans sympathize with a victim seeking protection. The key tension is that the government generally cannot pressure or order a private speaker or publisher to suppress lawful speech simply because officials dislike the message or fear controversy. A court order that forces OpenAI to deny service to a particular person could be treated as state action compelling restriction.
That does not mean victims are without options. Protective orders, criminal enforcement, and narrowly tailored remedies aimed at specific unlawful conduct can be available. But the TRO request spotlights a practical frustration shared across ideological lines: the public often sees government fail to stop dangerous behavior early, then tries to outsource “fixes” to private intermediaries. Conservatives will recognize the pattern—weak enforcement followed by pressure campaigns, speech limits, and liability threats against third parties.
AI safety, privacy, and “fixes” that can create new vulnerabilities
The TRO fight lands amid a growing stack of lawsuits and warnings about AI systems and mental health. Other suits have alleged that ChatGPT interactions worsened paranoia or delusions, with plaintiffs arguing that guardrails and warnings were inadequate at the moment they mattered. At the same time, domestic-violence and survivor advocates have raised alarms about how court-ordered data preservation in unrelated litigation can expose sensitive AI chats that users believed were private.
[Eugene Volokh] Should Court Order OpenAI to Cut off ChatGPT Access by Mentally Ill and Dangerous User? https://t.co/WOp96dvUrk
— Volokh Conspiracy (@VolokhC) April 13, 2026
For policymakers in a GOP-led Washington, the case underscores a narrow but important governance challenge: Americans want safer tools and stronger accountability, but court-compelled censorship is a hazardous shortcut. The cleanest path is consistent, transparent platform enforcement—combined with real criminal-justice follow-through when threats cross legal lines—rather than improvising constitutional exceptions in emergency hearings. The TRO will test whether courts can balance immediate protection with the constitutional limits that restrain government power.
Sources:
Should Court Order OpenAI to Cut off ChatGPT Access by Mentally Ill and Dangerous User?
New OpenAI court order raises serious concerns about AI privacy and safety for survivors of abuse
OpenAI, Microsoft sued over claims ChatGPT fueled murder-suicide in Connecticut
Catastrophic failures of ChatGPT that’s creating major problems for users