Harnessing AI for good: opportunities and challenges

June 6, 2024

  • The AI For Good Summit discussed examples of AI's societal benefits
  • These exist in a delicate balance with its potential to entrench digital divides
  • Navigating the risks and opportunities will require deep collaboration

The AI for Good Global Summit 2024 took place on May 30-31 in Geneva, bringing together over 2,500 participants from some 145 countries.

In her opening remarks, Secretary-General Doreen Bogdan-Martin of the International Telecommunication Union (ITU), which organized the event, set the tone by stressing the need for inclusivity in AI development.

She said, “In 2024, one-third of humanity remains offline, excluded from the AI revolution, and without a voice. This digital and technological divide is no longer acceptable.” 

The summit showcased examples of beneficial AI applications that can bring the technology’s benefits to the periphery, such as Bioniks, a Pakistani-led initiative designing affordable artificial limbs, and Ultrasound AI, a US-based women-led effort improving prenatal care.

AI For Good also explored how AI can help attain the UN’s Sustainable Development Goals (SDGs), which set out broad and far-reaching plans to develop and modernize less-developed nations while tackling poverty, climate change, and other existential and macro-level problems.

Among the many examples given, Melike Yetken Krilla, head of international organizations at Google, discussed several projects in which Google data and AI are being used to track and map progress toward the SDGs around the globe, including a collaboration with the World Meteorological Organization (WMO) on a flood hub for early warning systems.

These contribute to a vast body of projects showcasing how AI can accelerate disease diagnosis, help develop new drugs, restore mobility to those who have lost it through injury or disease, and much more.

AI is also helping conservationists protect the environment, from the Amazon rainforest to puffins off British coastlines and salmon in Nordic waterways.

As the summit’s sentiment made clear, AI’s potential for good is indeed substantial.

But as ever, there is another half to the story. 

AI’s push and pull

Rather than one-way traffic, AI threatens to both bridge and deepen digital divides, meaning that its benefits, and who receives them, are distributed inequitably.

There’s strong evidence that AI entrenches existing divisions between more and less technologically advanced countries. Studies from MIT and the Data Provenance Initiative found that most datasets used to train AI models are heavily Western-centric.

Languages and cultures from Asia, Africa, and South America remain significantly underrepresented in AI technology, resulting in models that fail to reflect or serve these regions accurately.

Moreover, AI technology is expensive and hard to develop, and control over it is concentrated in a select few companies and institutions.

Open-source AI projects offer companies worldwide a lifeline to develop lower-cost, sovereign AI, but they still require computing power and technical talent that remain in high demand globally.

AI model bias

Another tension in AI’s tug-of-war of benefits and drawbacks is bias. When AI models are trained on biased data, they inherently adopt and amplify those biases. 

This can lead to severe consequences, particularly in healthcare, education, and law enforcement. 

For instance, healthcare AI systems trained predominantly on Western data may misinterpret symptoms or behaviors in non-Western populations, leading to misdiagnoses and ineffective treatments.
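To make that mechanism concrete, here is a minimal, hypothetical sketch (pure NumPy, entirely synthetic data, and invented group labels, none of which come from the summit or the studies above). A simple classifier fitted to data dominated by one population learns a decision threshold that works poorly for an underrepresented group whose data looks different.

```python
# Minimal synthetic sketch: a classifier "trained" mostly on group A
# learns a threshold that transfers poorly to underrepresented group B.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    # Two classes per group; group B's feature values are shifted relative to group A's.
    x = np.concatenate([rng.normal(0.0 + shift, 1.0, n_per_class),   # class 0
                        rng.normal(2.0 + shift, 1.0, n_per_class)])  # class 1
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])
    return x, y

# Training data: ~95% group A, ~5% group B (group B's features shifted by 3).
x_a, y_a = make_group(950, shift=0.0)
x_b, y_b = make_group(50, shift=3.0)
x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# "Training": pick the single threshold with the best overall accuracy.
thresholds = np.linspace(x_train.min(), x_train.max(), 200)
accuracies = [np.mean((x_train > t) == y_train) for t in thresholds]
best_t = thresholds[int(np.argmax(accuracies))]

# Evaluate on fresh samples from each group: group B fares far worse.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    x_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(float(np.mean((x_test > best_t) == y_test)), 2))
```

On this toy setup, group A scores around 0.84 while group B sits close to chance; real-world bias is far messier, but the underlying dynamic, optimizing for whoever dominates the training data, is the same.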

Researchers from leading tech companies like Anthropic, Google, and DeepMind have acknowledged these limitations and are actively seeking solutions, such as Anthropic’s “Constitutional AI.” 

As Jack Clark, Anthropic’s policy chief, explained: “We’re trying to find a way to develop a constitution that is developed by a whole bunch of third parties, rather than by people who happen to work at a lab in San Francisco.” 

A noble and valid solution, but how would you create an effective global democracy to crowdsource opinions from those third parties?

Labor exploitation

Another risk to harnessing AI for good is the labor exploitation of data labelers and annotators, who sift through thousands of pieces of data and tag their features for AI models to learn from.

The psychological toll on these workers is vast, especially when tasked with labeling disturbing or explicit content. This “ghost work” is crucial for the functioning of AI systems but is frequently overlooked in discussions about AI ethics and sustainability.

For example, former content moderators in Nairobi, Kenya, lodged petitions against Sama, a US-based data annotation services company contracted by OpenAI, alleging “exploitative conditions” and severe mental health issues resulting from their work.

There have been responses to these challenges, showing how, with collective action, AI’s threats to vulnerable populations can be countered.

For example, projects like Nanjala Nyabola’s Kiswahili Digital Rights Project aim to counteract digital hegemony by translating key digital rights terms into Kiswahili, enhancing understanding among non-English speaking communities in East Africa. 

Similarly, Te Hiku Media, a Māori non-profit, collaborated with researchers to train a speech recognition model tailored for the Māori language, demonstrating the potential of grassroots efforts to ensure AI benefits everyone.

Grassroots projects like these could prove effective in democratizing AI, but rolling them out at global scale is a complex endeavor that will take time and investment.

A balancing act

The push and pull of AI’s benefits and drawbacks will be tricky to balance in the years ahead.

Rather than representing a new paradigm of international development, talk surrounding AI inclusivity is perhaps best perceived as a continuation of decades of discourse investigating the impacts of technology on global societies.

Uniquely, however, AI’s impacts are both highly universal and highly localized.

Large-scale AI tools like ChatGPT can provide a ‘blanket’ of encyclopedic knowledge and skills that billions can access worldwide.

Meanwhile, smaller-scale projects like those described above show that, combined with human ingenuity, we can build AI technology that serves local communities. 

Over time, the key hope is that AI will become both cheaper and easier to access, empowering communities to use it as they see fit, on their own terms and with their rights intact. Of course, that could also mean rejecting AI altogether.

AI – both the generative models created by tech giants and more traditional models created by universities and researchers – can certainly offer societal benefits when well-channeled.

The AI For Good summit embodied both that hope and a healthy skepticism. Stakeholders aren’t blind to the challenges, but that doesn’t mean they yet have the answers.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
