
California Wants AI Chatbots to Remind Users They Aren’t People

Even if chatbots successfully pass the Turing test, they’ll have to give up the game if they’re operating in California. A new bill proposed by California Senator Steve Padilla would require chatbots that interact with children to offer occasional reminders that they are, in fact, a machine and not a real person.

The bill, SB 243, was introduced as part of an effort to regulate the safeguards that companies operating chatbots must put in place to protect children. Among the requirements the bill would establish: a ban on companies “providing rewards” to users to increase engagement or usage, a mandate that companies report to the State Department of Health Care Services how frequently minors display signs of suicidal ideation, and periodic reminders to users that chatbots are AI-generated and not human.

That last bit is particularly germane to the current moment, as kids have been shown to be quite vulnerable to these systems. Last year, a 14-year-old tragically took his own life after developing an emotional connection with a chatbot made accessible by Character.AI, a service for creating chatbots modeled after different pop culture characters. The parents of the child have sued Character.AI over the death, accusing the platform of being “unreasonably dangerous” and of lacking sufficient safety guardrails despite being marketed to children.

Researchers at the University of Cambridge have found that children are more likely than adults to view AI chatbots as trustworthy, even viewing them as quasi-human. That can put children at significant risk when chatbots respond to their prompting without any sort of protection in place. It’s how, for instance, researchers were able to get Snapchat’s built-in AI to provide instructions to a hypothetical 13-year-old user on how to lie to her parents to meet up with a 30-year-old and lose her virginity.

There are potential benefits to kids feeling free to share their feelings with a bot if it allows them to express themselves in a place where they feel safe. But the risk of isolation is real. Little reminders that there is not a person on the other end of your conversation may be helpful, and intervening in the cycle of addiction that tech platforms are so adept at trapping kids in through repeated dopamine hits is a good starting point. Failing to provide those types of interventions as social media started to take over is part of how we got here in the first place.

But these protections won’t address the root issues that lead kids to seek out the support of chatbots in the first place. There is a severe lack of resources available to facilitate real-life relationships for kids. Classrooms are overstuffed and underfunded, after-school programs are on the decline, “third places” continue to disappear, and there is a shortage of child psychologists to help kids process everything they are dealing with. It’s good to remind kids that chatbots aren’t real, but it’d be better to put them in situations where they don’t feel like they need to talk to the bots in the first place.
