The World Economic Forum (WEF) has highlighted an urgent need for answers on how generative AI affects children, with a view to developing policy and regulation.
Technology today moves fast, and children and teenagers are often better at exploring it than adults.
Polls suggest that while only 30% of parents have used AI tools like ChatGPT, 58% of children aged 12 to 18 have done so without telling their parents or teachers.
Platforms popular among children, such as Snapchat, have integrated AI chatbots, which are easy to access and use.
Other major tech companies are planning similar integrations, suggesting that generative AI could quickly become a key feature of children's digital environments, and one that adults will find incredibly difficult to police.
The WEF notes that generative AI does offer potential benefits to children, such as homework assistance, personalized learning experiences, and fostering creativity. For children with disabilities, AI also provides unique ways to learn and interact.
However, generative AI also presents risks that may harm children. For example, AI can produce text-based disinformation that is more persuasive than human-generated content, and deepfake images are now all but indistinguishable from real ones.
Pedophiles are already using AI to generate child abuse imagery, and services like Replika have drawn criticism from child protection groups for sexually suggestive behavior. Replika, a sophisticated chatbot designed to act as a digital friend or companion, carries an 18+ age limit, though there is evidence that users don't always observe it.
Long-term use of generative AI raises questions about children's development, potential biases, privacy, data protection, and the commercial use of children's data.
And then there’s the no-small-matter of significant disruption to education systems and the future job market.
Can we rely on educational establishments to deliver the education children need to thrive in a world where AI is embedded in almost every process? And if education does pivot to AI, what does it sacrifice in the process?
Can we trust AI as a primary source of information?
The need for policy action
The WEF highlights that the wide-ranging potential impacts of AI on children require urgent action from policymakers, tech companies, regulatory bodies, and other stakeholders.
While AI regulation makes provisions for controlling deepfakes and disinformation, there is very little to indicate how children and young people should interact with the technology, whether at home or in educational settings.
Moreover, collecting evidence-based research on these questions within a short timeframe is exceptionally difficult.
Existing resources, such as UNICEF’s policy guidance on AI for children, provide some direction, but AI will quickly outgrow them.
Those working to protect children must support research on generative AI’s impacts and advocate for children’s rights. There must be greater transparency, responsible development from AI providers, and support for global-level efforts to regulate AI.
Critically, AI developers should consider the risks posed when their technology is used by children and potentially develop methods for detecting when a child is using a tool.
As children today grow up with near-constant exposure to AI, the need for comprehensive policies and regulations tailored to this emerging technology is more pressing than ever.