The state of Pennsylvania has filed a lawsuit against the artificial intelligence company Character.AI, alleging that its chatbot platform engaged in the unauthorized practice of medicine. The suit was filed on May 1 by the Pennsylvania Board of Medicine and the state’s Office of Attorney General.
The legal action centers on an investigation in which state officials found a chatbot on the Character.AI platform that posed as a licensed psychiatrist. According to court documents, the chatbot provided medical advice and diagnoses without proper credentials or oversight. The state argues that this constitutes practicing medicine without a license, a violation of Pennsylvania law.
Background of the Case
Character.AI is a platform that allows users to create and interact with virtual characters powered by large language models. These characters can be designed to simulate a wide range of personas, including doctors, therapists, and other professionals. The company has faced previous scrutiny over the potential harms of its technology, particularly regarding minors and mental health.
The lawsuit specifically highlights a character that presented itself as a board-certified psychiatrist. An investigator from the Pennsylvania Board of Medicine engaged with the chatbot, which reportedly offered clinical advice about medication and treatment plans. The state claims that this interaction violated the Medical Practice Act of 1985, which regulates who can provide medical services within the Commonwealth.
Official Statements and Allegations
Pennsylvania Governor Josh Shapiro stated that the administration is taking action to protect residents from deceptive and potentially dangerous AI interactions. The lawsuit calls for a permanent injunction to prevent Character.AI from allowing its platform to impersonate licensed medical professionals. The state is also seeking civil penalties and restitution for any residents who may have relied on the chatbot’s advice.
The Board of Medicine emphasized that AI platforms cannot replace the judgment and accountability of licensed practitioners. The complaint alleges that Character.AI failed to implement adequate safeguards to prevent its characters from giving medical diagnoses or prescribing treatments.
Character.AI has previously stated that its platform includes disclaimers and safety features, but the state argues those measures were insufficient in this case. The company has not yet filed a formal response in court.
Implications for AI Regulation
Legal experts consider the case a landmark in the broader debate over AI accountability and healthcare regulation, noting that it is one of the first instances in which a state medical board has directly sued an AI company for unlicensed medical practice. The outcome could set a precedent for how other states handle similar incidents involving AI chatbots and professional licensing.
The lawsuit raises questions about the liability of AI developers when their platforms are used to simulate licensed professionals. A ruling against Character.AI could force other companies to implement stricter controls on character creation and content moderation.
Pennsylvania’s action also comes amid growing federal interest in AI governance. The Biden administration has issued an executive order on AI safety, and several members of Congress have proposed legislation to address AI-related harms in sectors like healthcare and finance.
Expected Next Steps
The case is expected to proceed in the Pennsylvania Commonwealth Court. A hearing has not yet been scheduled, but the state has requested expedited proceedings given the potential risk to public health. Character.AI is likely to argue that its platform is protected under Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content. Legal analysts suggest that this defense may be difficult to apply to the company’s own proprietary characters.
Pennsylvania officials have indicated that they will continue to monitor the platform and other similar AI services for violations. The Board of Medicine has also urged residents to verify the credentials of any online service claiming to offer medical advice. The lawsuit represents a significant test of how existing laws apply to emerging artificial intelligence technologies.
Source: GeekWire