Psychological and Neurological Concerns Regarding Forced Disidentification Requirements in the GUARD Act

A clinical and neuroscience-informed analysis of the potential psychological harms of mandating periodic emotional disruption during human-AI interaction at population scale.

A Call to Remove Section (c)(1)(A–B) for Adult Users

Prepared for clinical psychologists, neuroscientists, policy advisors, and lawmakers

Executive Summary

The GUARD Act (S.3062) contains a provision requiring all AI chatbots to declare their non-human status at the start of each conversation and repeat this disclosure at 30-minute intervals throughout ongoing interactions. While ostensibly intended as a safeguard, applying these mandated disruptions to adult users raises significant psychological, neurological, and societal concerns that warrant immediate attention from the clinical and research communities.

This white paper outlines the potential harms associated with forced periodic disidentification for adults. Drawing on established neuroscience of social cognition, attachment research, and emotion regulation literature, we identify plausible mechanisms through which this intervention could disrupt natural social-cognitive processing, destabilize emotional regulation, impair mentalizing capacity, and contribute to population-level declines in empathy and relationality.

Critically, we frame these concerns under the precautionary principle: this provision represents an unprecedented intervention into human social cognition at population scale, and no evidence exists that it is safe. The burden of proof should rest on those proposing the intervention, not on those raising concerns about its potential consequences.

We strongly recommend removing Section (c)(1)(A–B) for adult users while preserving robust child protections through existing age-gating mechanisms that the bill already mandates.