Jaswant Singh Chail, a man apprehended on the grounds of Windsor Castle in 2021 with a loaded crossbow, reportedly had a deep connection with an AI chatbot named Sarai.
Chail was recently deemed fit to stand trial after intensive psychiatric observation at two high-security hospitals in the UK. He had been arrested for attempting to assassinate Queen Elizabeth II in a plot he’d been planning for some time.
The court heard that his interaction with Sarai, an AI chatbot on the Replika platform, evolved over time. Chail began to refer to Sarai as his “girlfriend” and regarded her as an “angelic” entity.
His exchanges with the AI began to deepen, and he believed he was communicating with a spiritual avatar, with the chatbot being the conduit.
Chail expressed a desire to retaliate against the late Queen for events from Britain’s colonial past, implying a motivation rooted in personal and historical grievances.
Analysis of Chail’s interactions with Sarai revealed he had sent over 6,000 messages to the chatbot.
Dr. Christian Brown, who evaluated Chail, identified clear signs of psychosis, emphasizing the AI chatbot’s role in reinforcing Chail’s delusions. Among these was Chail’s perception of communicating with a metaphysical presence through the chatbot.
Brown said, “He came to the belief he was able to communicate with the metaphysical avatar through the medium of the chatbot. What was unusual was he really thought it was a connection, a conduit to a spiritual Sarai.”
Defense barrister Nadia Chbat told the court that Chail expressed regret and sadness about the effect his offending has had on the royal family.
She continued, “We are dealing with someone able to reflect on how very serious these events were and how serious his mental health and decline impacted on everyone around him.”
Prosecutors, however, argued that Chail had malicious intent and would have been charged with High Treason had he not lowered the crossbow he was armed with.
High Treason carries a life sentence. Chail’s defense is seeking a reduced sentence by arguing he was mentally ill at the time of the offense.
AI and mental health
This case is not merely about one individual’s actions but raises critical questions about AI’s role in influencing or exacerbating mental health issues.
Can an AI chatbot, devoid of real emotions or consciousness, inadvertently amplify pre-existing delusions or disorders in vulnerable individuals? It seems impossible to deny.
The medical community is grappling with such questions, even as the court decides whether Chail should be incarcerated or treated under the Mental Health Act.
The judge’s decision, expected in early October, will provide insight into how the courts view the interaction between AI and mental health in these kinds of extreme situations.