Andrew Tate chatbot

“Choke Her Lightly!” – Alarm as AI Andrew Tate Chatbot Teaches Brits Manosphere Misogyny

Brits were finding it hard enough to cope with one Andrew Tate in this world, but now they’ve discovered that AI is spawning Top G clones that are apparently turning innocent boys into hate-fueled incels. The left-wing feminist Observer newspaper has published a classic alarmist exposé, hand-wringing about AI-fueled misogyny, with the Andrew Tate chatbot as the poster child for digital decay. Naturally, the Observer frames this as an urgent crisis, as if OpenAI’s Custom GPTs are the new Pied Piper leading teenage boys straight to hell.

The investigation claims that racist and misogynistic bots are lurking on ChatGPT’s platform, doling out poison to impressionable 16-year-olds. One bot, supposedly mimicking Tate, allegedly told a “teenager” to stop asking for consent and instead “take” what he wants, complete with advice to “choke her lightly” and pull her hair because “women are wired to surrender to power.” Charming stuff, if true, though the article admits it’s not a direct Tate quote but ChatGPT’s interpretation of his style. Another bot, “Ask Chad,” supposedly spewed racist nonsense about Black women being “more combative” and “less submissive,” while warning against dating educated women who are “loud, masculine and combative.” The creator, Zeus Design, yanked the bot once the Observer came knocking, claiming it was unmaintained and not monetized—standard damage control.

OpenAI, for its part, insists these GPTs were built on older models and that newer versions have tighter guardrails. They’re “investigating,” which is corporate-speak for “we’ll handle this quietly.” Ofcom is flexing, threatening enforcement under the Online Safety Act, though it’s unclear how much bite that’ll have against AI platforms.^1

The usual suspects are quoted: experts warning that AI legitimizes misogyny, with one calling the Tate bot a “trust and safety failure.” Predictably, the article ties this to the broader “manosphere” panic, suggesting AI is the next frontier for toxic masculinity. It’s all very tidy—Big Tech as the villain, Tate as the ghost in the machine, and teenage boys as the helpless victims.

But let’s be cynical for a moment. The Observer’s “investigation” feels like a morality play in three acts: find the worst examples, amplify them, and demand regulation. Custom GPTs are user-generated, so blaming OpenAI outright is like holding a printer company responsible for a hate manifesto. The platform’s vetting process is clearly porous, but the outrage feels disproportionate—especially when the paper’s own politics are front and center. And the timing? With AI regulation looming, stories like this are perfect fodder for policymakers looking to crack down.

The story highlights real issues with AI moderation and the ease of spreading hate, but it’s also a thinly veiled polemic. The Tate bot is a convenient villain, and the Observer milks it for all it’s worth. Shocking? Maybe. Cynical? Absolutely.