Anthropic, the developer of the Claude artificial intelligence model, has begun requiring Know Your Customer (KYC) identity verification for certain users, who must submit a government-issued photo ID and a live selfie. The move, unprecedented for a major AI chatbot and rolled out around mid-April 2026, has ignited significant debate and widespread user dissatisfaction, particularly among those accessing the service from regions where Claude is not officially supported, such as mainland China. According to Anthropic, the verification process, handled by the third-party provider Persona Identities, is intended to prevent abuse, enforce usage policies, and comply with legal obligations.
The new policy requires users to provide an original passport, driver's license, or national identity card, along with a real-time selfie. Anthropic has stated that this data is processed securely by Persona, will not be used for model training, and remains separate from Anthropic's own servers. While the company frames the requirement as a responsible step, it has not specified which "certain features" or "use cases" trigger verification, leaving users uncertain about whether they will be affected.
For users in unsupported regions, especially mainland China, the identity check has become a significant barrier. Although Claude is officially unavailable in China, many users previously accessed the service through VPNs or intermediary platforms. Because the live selfie must match a physical government document, circumventing regional restrictions is now exceedingly difficult, and some describe the requirement as an effective "ban nuke" for these users.
The decision has drawn sharp criticism from the AI community and privacy advocates. Social media user Prakash, commenting on the situation, wrote, "Hilarious, Claude is requiring KYC for Chinese users and they are melting down." Many users noted the irony, having migrated to Claude after Anthropic reportedly turned down a deal with the US Department of Defense and positioned itself as a more privacy-conscious alternative to competitors like OpenAI. Critics point out that no other major AI chatbot, such as ChatGPT or Google's Gemini, currently imposes such stringent KYC requirements for standard use.
The policy shift has raised concerns about the future of accessible AI and its potential market implications. Some users are reportedly canceling their Claude Pro subscriptions, suggesting Anthropic may have "handed their competitors a gift" by introducing a hurdle that others do not impose. The move highlights a growing trend toward stricter controls in the AI industry, potentially reshaping how users interact with advanced AI tools globally.