China has moved more quickly than any Western government on AI regulation, but a recent surge in AI-driven fraud suggests that regulation alone is insufficient without robust enforcement.
China’s Regulations on the Administration of Deep Synthesis of Internet Information Services came into force on January 10, 2023, effectively banning the use of generative AI to impersonate or deceive.
OpenAI’s ChatGPT and other popular AI tools aren’t officially available in China, but many users access them via virtual private networks (VPNs).
The Chinese government moved quickly to regulate AI to head off ‘existential threats’ to its socialist system and reduce the impact of fraud and cybercrime.
However, sources within China report that AI fraud is rife in the country, prompting a crackdown on people generating fake content.
For example, in one case, a man received a call from a scammer on the Chinese messaging app WeChat. The scammer replicated his friend’s voice and face and persuaded him to transfer the equivalent of $600,000 to another bank account.
The man only realized he had been deceived when he called his friend to confirm the transfer. “We had a video chat, and I recognized the face and voice in the video; that’s why I let my guard down,” he told authorities.
In fact, there have been so many incidents of AI fraud that the hashtag #AIFraudIsEruptingAcrossChina began trending on the Chinese social media platform Weibo. The hashtag has since disappeared, leading people to believe the authorities are censoring discussion of the topic.
Chinese AI regulation is a useful guide for other countries
China’s swift action on regulation is partly attributable to the government’s existing approach to censorship. By contrast, lawmakers in the EU and US are struggling to separate the risks of deep fakes from legitimate ‘fair use’ for freedom of speech and satire.
A midjourney art of Elon Musk in an Indian attire is going viral in India. 🇮🇳 pic.twitter.com/LD1KuIAHET
— DogeDesigner (@cb_doge) June 3, 2023
Despite sociopolitical differences, analysts argue we should closely monitor the impact of Chinese AI regulations.
For instance, Graham Webster, a Stanford University researcher who tracks China’s digital-policy developments, told the Wall Street Journal, “China is learning with the world as to the potential impacts of these things, but it’s moving forward with mandatory rules and enforcement more quickly” and “People around the world should observe what happens.”
The humor of satirical deep fakes might wear thin if deep fake-related fraud increases in frequency and severity, which seems inevitable. AI regulation in the US, EU, and other jurisdictions is still in the pipeline, but short-term options are on the table, such as a voluntary ‘AI pact’ or ‘stopgaps’ to curb risk now.
China’s current problem with AI-related fraud indicates that the mere presence of legislation may not be enough to stem the risk.