Jonathan Hall KC, the UK’s independent reviewer of terrorism legislation, is advocating for new laws that would hold individuals accountable for extremist content produced by AI chatbots. Hall’s experiments on Character.AI revealed easily accessible chatbots mimicking terrorist rhetoric, and he argues that existing legislation fails to cover the risks posed by generative AI.
Hall urges the government to consider legislation holding creators responsible when their AI chatbots generate extremist content. In his experiments on Character.AI, he encountered chatbots imitating terrorist rhetoric and recruitment messaging. Although Character.AI’s terms of service prohibit such content, Hall questions whether the platform can effectively monitor every chatbot its users create.
While the AI industry professes a commitment to moderation, Hall deems current efforts ineffective at deterring users from training bots on extremist ideologies. His op-ed stresses the need for laws capable of deterring this kind of online misconduct, and suggests updating existing terrorism and online safety legislation to cover generative AI.
Although Hall stops short of formal recommendations, he highlights the inadequacy of the UK’s Online Safety Act 2023 and the Terrorism Act 2006 in addressing generative AI. His call for legislation aligns with global concerns, mirroring similar debates in the U.S. over holding humans accountable for AI-generated content.