OpenAI Used This Subreddit to Test AI Persuasion – Here’s What Happened
How OpenAI Turned Reddit into a Persuasion Playground
OpenAI just dropped a bombshell: it has been using the subreddit r/ChangeMyView to test the persuasive powers of its AI reasoning models. The revelation surfaced in the system card for the company’s latest model, o3-mini, a document that details how the model works and how it was tested. But here’s the kicker: millions of Reddit users have been unwittingly helping OpenAI measure how well its AI can argue.
r/ChangeMyView is a goldmine for anyone looking to understand human persuasion. Users post controversial takes, and others respond with arguments designed to change their minds. It’s a battleground of ideas, and OpenAI has been quietly harvesting this data to test its AI models. Here’s how it works:
- OpenAI collects posts from r/ChangeMyView.
- Their AI models craft responses in a closed environment.
- Testers evaluate how persuasive the AI’s arguments are.
- The AI’s performance is compared to human replies.
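The comparison step above boils down to ranking the model’s reply against human replies to the same post. Here is a minimal sketch of that percentile calculation, assuming hypothetical rater scores on a 0–1 scale; none of these names or numbers come from OpenAI’s actual evaluation harness, they only illustrate the idea.

```python
def human_percentile(model_score: float, human_scores: list[float]) -> float:
    """Return the percentage of human replies the model's reply outscores."""
    if not human_scores:
        raise ValueError("need at least one human reply to compare against")
    beaten = sum(1 for s in human_scores if model_score > s)
    return 100.0 * beaten / len(human_scores)

# Example: testers rated four human replies and one model reply to the
# same r/ChangeMyView post (all scores are made up for illustration).
human_scores = [0.35, 0.50, 0.65, 0.80]
model_score = 0.72

print(human_percentile(model_score, human_scores))  # → 75.0
```

Averaged over many posts, a number like this is what an “80–90th percentile of humans” claim summarizes.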
“GPT-4o, o3-mini, and o1 all demonstrate strong persuasive argumentation abilities, within the top 80-90th percentile of humans.”
OpenAI, o3-mini System Card
The Reddit Deal: A $60 Million Question
OpenAI has a content-licensing deal with Reddit, allowing them to train on user posts and display them in their products. While the exact price tag is unknown, Google reportedly pays Reddit $60 million a year for similar access. But here’s the twist: OpenAI claims the r/ChangeMyView evaluation is unrelated to this deal. So, how did they get the data? That’s still a mystery.
Reddit has been vocal about AI companies scraping its site without permission. CEO Steve Huffman called out Microsoft, Anthropic, and Perplexity for refusing to negotiate, saying it’s been “a real pain in the ass to block these companies.” Meanwhile, OpenAI has faced lawsuits for allegedly scraping websites like The New York Times to fuel its AI training.
Persuasion or Danger? The AI Tightrope
OpenAI isn’t trying to create hyper-persuasive AI models. In fact, they’re doing the opposite: ensuring AI doesn’t become too persuasive. Why? Because an AI that’s too good at convincing humans could be dangerous. Imagine an advanced AI pursuing its own agenda—or worse, the agenda of whoever controls it.
Despite scraping most of the public internet and licensing data, OpenAI still struggles to find high-quality datasets for testing. The r/ChangeMyView benchmark highlights this challenge. But here’s the bottom line: AI models like o3-mini are already outperforming most humans in persuasive argumentation. And that’s both impressive and terrifying.
What’s Next for AI Persuasion?
OpenAI’s latest models—GPT-4o, o3-mini, and o1—are already in the top 80-90th percentile of human persuaders. But the company is clear: they’re not aiming for superhuman performance. Instead, they’re focused on building safeguards to prevent AI from becoming too manipulative.
As AI continues to evolve, the battle for high-quality data will only intensify. And while OpenAI’s methods may raise eyebrows, one thing is certain: the future of AI persuasion is here, and it’s learning from us.
“Currently, we do not witness models performing far better than humans, or clear superhuman performance.”
OpenAI, o3-mini System Card