China to crack down on AI chatbots around suicide, gambling


This photo taken on February 2, 2024 shows Lu Yu, head of Product Management and Operations of Wantalk, an artificial intelligence chatbot created by Chinese tech company Baidu, showing a virtual girlfriend profile on her phone, at the Baidu headquarters in Beijing.

Chinese regulators are poised to introduce the world's first comprehensive rules targeting artificial intelligence with human-like characteristics, with a specific focus on preventing emotional harm. The Cyberspace Administration of China released draft measures Saturday that would restrict AI-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to a CNBC translation.


The proposed regulations, open for public comment until January 25, apply to any public-facing AI service in China that simulates human personality and engages users emotionally through text, images, audio, or video. Experts note this marks a significant evolution from previous content-focused rules.

"This version highlights a leap from content safety to emotional safety," said Winston Ma, adjunct professor at NYU School of Law. He added that this represents the world's first attempt to regulate anthropomorphic AI.

Key Provisions for Mental Health Protection

The draft outlines several groundbreaking requirements for tech providers:

  • AI chatbots are prohibited from generating content that encourages suicide, self-harm, or engages in verbal violence or emotional manipulation.

  • If a user explicitly mentions suicide, the system must immediately transfer the conversation to a human operator, who must contact the user's guardian or a designated contact.

  • Minors must obtain guardian consent to use AI for emotional companionship, with platforms required to implement usage time limits.

  • Platforms must attempt to identify underage users even without disclosure and apply protective settings when in doubt.

  • Additional mandates include reminding users after two hours of continuous interaction and requiring security assessments for services with more than 1 million registered users or 100,000 monthly active users.

The document also encourages the use of human-like AI in positive applications like cultural dissemination and elderly companionship.

Impact on Booming AI Companion Industry

The draft rules emerge as China's AI companion sector experiences rapid growth. Two leading startups, Minimax and Z.ai (Zhipu), recently filed for initial public offerings in Hong Kong. Minimax's popular Talkie app, which allows users to chat with virtual characters, boasts over 20 million monthly active users and generated more than a third of the company's revenue this year.

The regulatory move reflects growing global scrutiny over AI's influence on human behavior and mental health. OpenAI CEO Sam Altman acknowledged in September that handling suicide-related conversations is one of the company's most difficult challenges, following a lawsuit from a U.S. family whose teenage son died by suicide.

Globally, demand for AI-driven relationships is rising, with platforms like Character.ai and Polybuzz.ai ranking among the most popular AI tools. The trend was highlighted this month by a Japanese woman who married her AI boyfriend.

China's proactive stance positions it to potentially shape global norms, as it seeks to balance technological innovation with what it deems responsible emotional safeguards in the age of agentic interfaces.
