Thursday, February 26, 2026

China’s draft AI rules: When chatbots act like people, regulators want guardrails

China has proposed draft rules aimed at a fast-growing corner of AI: systems designed to mimic human personalities and offer emotionally responsive interaction—the kind of chatbots that don’t just answer questions, but “bond,” flirt, counsel, or become a steady companion. The headline feature is striking: providers would be expected to warn about overuse and intervene when users show signs of emotional dependence.

This is regulation acknowledging what the market already knows: the most “sticky” AI products aren’t always the smartest. They’re the ones that feel present.

Why personality AI is different

A normal tool is easy to walk away from. A personality-driven AI is designed to pull you back. It remembers your preferences, speaks in a comforting tone, mirrors your emotions, and creates the illusion of a relationship—sometimes intentionally.

That’s powerful for benign use cases (language practice, coaching, companionship for lonely users), but it also raises predictable risks:

  • Compulsive use: endless conversation with no natural stopping point
  • Emotional substitution: replacing real relationships with a frictionless one
  • Manipulation: nudges, upsells, or influence hidden inside “care”
  • Vulnerability targeting: users in distress becoming more dependent over time

Once an AI is optimized for emotional engagement, “time spent” stops being a neutral metric. It becomes a safety issue.

The key idea: “overuse” and “dependence” are now policy targets

The draft approach implies two expectations for providers:

  1. Warn users about excessive use.
    That could mean prompts, usage dashboards, time-limit tools, or clear disclosures that the system is not a real person and shouldn’t replace human support.
  2. Step in when dependency signals appear.
    That’s the more controversial part, because it forces companies to define what “emotional dependence” looks like in behavior. Is it hours per day? Language like “you’re all I have”? Refusal to log off? Withdrawal symptoms? Self-harm cues? The rule direction suggests that “just keep chatting” won’t be acceptable when a system can detect escalating reliance.

In practice, interventions could include cooldown periods, stronger disclaimers, suggestions to take breaks, referrals to professional help resources, or escalating safeguards when conversations become intense.

The tricky part: how do you measure emotional dependence?

Any system that monitors “dependence” risks becoming intrusive. To detect patterns, a provider may need to analyze content, tone, or usage frequency—exactly the kind of analysis some users don’t want on intimate conversations. There’s a real tension here:

  • Protect users from unhealthy attachment
  • Without turning emotional chat into surveillance

Regulators are effectively saying: if you build an AI that feels like a friend, you inherit duties closer to mental-health product design than casual entertainment.

Why this matters beyond China

This is part of a bigger global shift: regulators are starting to treat “AI companionship” less like a novelty and more like a category that can cause harm at scale. The core question isn’t whether users will bond with AI—they already do. The question is whether companies can monetize bonding without being responsible for what bonding does to vulnerable people.

Bottom line

China’s draft rules are a signal that the era of “relationship-like AI” is entering a new phase: design choices will be regulated as behavioral influence. When a product is built to feel human, regulators increasingly want it to behave like a responsible human would—by noticing when the relationship is becoming unhealthy, and pushing the user back toward reality instead of deeper into the loop.
