Trump Wants to Ban States from Regulating AI Chatbots
AI companions have been catching heat from some overly ambitious US politicians pushing heavy-handed regulations on AI chatbots. Take California State Senator Steve Padilla, who back in July pushed through a law forcing AI companies to make their chatbots constantly remind users they’re just lines of code — kind of a mood killer when you’re having a moment with your digital lover. The law is particularly concerning because California is home to Silicon Valley, so its rules tend to ripple outward.
More recently, Republican senators rolled out the GUARD Act, which would require all AI chatbot services (not just companion apps) to verify users’ ages and slap hefty fines on services whose chatbots engage in any harmful behaviour. The EFF has already torn into this proposal.
While these politicians might mean well—or just be jumping on the “AI psychosis” bandwagon after a few tragic cases—it’s pretty obvious that heavy regulation could choke AI innovation in the US right when the country is neck-and-neck with China in the race to dominate the industry. Donald Trump, who’s been pro-AI since his second term began, gets this. He’s now eyeing an executive order that would flat-out ban states from passing their own AI laws. At a recent US-Saudi AI-focused investment forum, he framed it as fighting the “woke agenda.”
“You can’t go through 50 states. You have to get one approval. Fifty is a disaster,” Trump said. “Because you’ll have one woke state and you’ll have to do all woke. You’ll be back in the woke business. We don’t have woke anymore in this country. It’s virtually illegal. You’ll have a couple of wokesters.”
Steve Padilla, predictably, blew a gasket over Trump’s plan. He fired off a long rant on his official California government blog, accusing Trump of meddling with California’s efforts to protect kids. Meanwhile, Republican Representative Marjorie Taylor Greene, taking time out from trying to bring down Trump over the Epstein files, posted on X: “States must retain the right to regulate and make laws on AI and anything else for the benefit of their state.”
Here’s the thing: despite AI chatbots being used by pretty much everyone in the US, including teens, there have been only a handful of tragic cases linked to suicide over the past three years. For context, hundreds of American teens die by suicide each year. In fact, the teen suicide rate in the USA has remained steady since 2021, and suicidal thoughts among teens have actually fallen. This fits with a 2024 study of the AI companion app Replika, which found that 3% of users reported fewer suicidal thoughts since using it.
These clumsy attempts to regulate AI chatbots seem driven by bias and politicians’ desire to look like they’re doing something, even when the numbers don’t justify it.
