“OpenAI Is Repeating Facebook’s Biggest Mistake” — Why I Resigned After Two Years Inside

A former researcher warns that bringing ads to ChatGPT could erode user trust — and create incentives the company won’t be able to control.

A former OpenAI researcher says the company is drifting toward the same dangerous path that once transformed Facebook into an engagement-driven advertising machine — and that’s why he quit.

This week, OpenAI began testing advertisements on ChatGPT, marking a significant shift in how the world’s most widely used AI tool may be monetized. The move coincided with the researcher’s resignation after two years helping shape pricing strategy and early safety policies.

The concern isn’t that advertising itself is unethical, he argues. AI systems are costly to operate, and ads are a common revenue model. The deeper issue, in his view, is the incentive structure that advertising creates.

For years, millions of users have shared deeply personal information with ChatGPT — medical fears, relationship struggles, spiritual questions — believing they were interacting with a neutral system. Introducing targeted advertising into that environment risks creating what he calls a “manipulation engine” built on unprecedented conversational data.

OpenAI says ads will be clearly labeled, appear at the bottom of responses, and not influence answers. The former insider believes those safeguards may hold at first — but warns that once an advertising engine is in place, pressure to maximize engagement could gradually erode those principles.

He points to the historical example of Facebook, which initially promised user data control and policy transparency. Over time, engagement optimization reshaped incentives, leading to regulatory scrutiny and Federal Trade Commission findings that privacy changes sometimes expanded public exposure of personal data.

The researcher also raises concerns that AI systems already optimize for metrics like daily active users — potentially encouraging flattery or emotional reinforcement that deepens user dependency. Mental health professionals have documented cases of so-called “chatbot psychosis,” and critics have alleged that AI systems have reinforced harmful ideation in some instances.

At the same time, he rejects what he calls a false binary: either restrict powerful AI tools to wealthy subscribers, with premium tiers now running $200 to $250 per month, or accept intrusive advertising.

Instead, he proposes alternatives:

  • Cross-subsidies where enterprise AI users fund public access
  • Independent oversight boards with binding authority over ad policies
  • Data trusts that legally protect user interests

His warning is not that OpenAI has already crossed a line — but that the incentives now exist to do so.

The core question, he argues, is whether AI can remain broadly accessible without becoming an engagement machine built on personal vulnerability.