Operators of artificial intelligence chatbots would have to refer suicidal users to a crisis hotline, and clearly tell users that they are talking to software—not a human—under a bill that has been moving through Salem in recent weeks.
“One of the most important features of this bill is, it tries to take a moment of crisis and turn it into a moment of intervention, of hope,” says Dwight Holton, the CEO of Lines for Life, a major operator of crisis hotlines in Oregon.
The proposal is part of a broader AI regulatory bill that looks to establish safety guardrails on an emerging technology. It comes as experts sound louder alarms about the way sycophantic chatbots and other AI companions manipulate users. Experts say the systems are, in many cases, designed to hook users and extract their monetizable personal data.
Under the bill, AI companions would face additional regulations when they interact with minors in particular. Research presented to lawmakers suggests that most adolescents use AI regularly, and experts say the technology’s risks go well beyond the now-familiar addictive perils of social media.
“Over the past decade, we learned how social media captured human attention,” the researcher Dr. Mandy McLean told lawmakers. “AI systems do something more fundamental. They engage the human attachment system.”
With little action at the federal level, Oregon is joining a coterie of states mulling guardrails for AI companions, Jai Jaisimha of the Transparency Coalition tells WW.
The Oregon bill has passed the Senate and now awaits a vote in the House. No testimony opposing the legislation has been formally filed thus far.
The bill—Senate Bill 1546—would, at its core, establish a set of new regulations for operators of AI companions.
In addition to requiring AI companions to identify themselves as such, it would require the technology to include an evidence-based protocol for detecting inputs indicating thoughts of self-harm or suicide—and to direct those users to the national 988 suicide hotline or a youth line.
And where AI companion systems detect they are dealing with youth, they would be forbidden from generating statements that would lead a reasonable person to believe they are interacting with another person.
The bill has carveouts, including for software intended for customer support. But the “artificial intelligence companion” platforms it would regulate range from chatbots to certain hardware with an AI software component.
“Imagine for a moment if your five-year-old’s favorite character or teddy bear talked to them, knew their name and told them what to do,” Dr. Mitch Prinstein, the senior science advisor for the American Psychological Association, told an Oregon Senate committee early this month.
After this presentation, Sen. Lisa Reynolds (D-Portland), who is sponsoring the bill, said, “Well, I think we’re all pretty much horrified here.”
Reynolds, a pediatrician, has also said that she sees the potential of AI in the health care space, for example, but is seeking to manage the risks. Holton, who helped bring the issue to her attention, sees risks and rewards too.
Early this month, he told lawmakers, for example, that Lines for Life has been using AI in quality assurance and training. He said AI listens to calls and gives real-time feedback. The technology can also roleplay for training purposes.
“I’ve done it, the conversations can go on for 10 minutes or 45 minutes or an hour, and you wouldn’t know you’re not talking to a real person,” Holton told lawmakers.
In fact, he said, youth in many cases assume when they contact Lines for Life that they are interacting with AI—even when they’re not.
“Regularly, every day, our youth line volunteers have to convince a person in crisis who has reached out to them that they are not AI,” he told lawmakers. “The majority of our contacts are electronic; they’re by text. And so the first thing that the youth who’s reaching out to us in crisis will ask is ‘How do I know you’re not AI?’ That’s not an easy question to answer, as it turns out.”