The World Ethical Data Foundation, with over 25,000 members, including professionals from leading tech corporations like Meta, Google, and Samsung, has created an open framework for ethical AI development.
The newly proposed framework presents an 84-point checklist of questions covering three key AI development processes:
- Data selection and ingestion
- Creation or selection of algorithms and models
- Managing test data and tagging
The open letter, titled “Me-We-It: An Open Standard for Responsible AI,” aims to streamline AI development while making it transparent, ethical, and inclusive.
It’s simple and non-technical, designed to foster debate around AI development.
The proposal also responds to the pressing need for policymakers to keep up with AI regulation. It argues that, while legislation like the EU AI Act is critical, the tech community can proactively create actionable standards for ethical AI development.
According to Vince Lynch, advisor to the World Ethical Data Foundation board, the AI community is currently in the “Wild West stage, where it’s just kind of: ‘Chuck it out in the open and see how it goes.’”
Lynch, who initiated the framework, warns that AI-related issues surrounding intellectual property and human rights pose immediate risks, alongside deep fakes and AI-supported fraud.
Open frameworks on AI: is there any point?
Open and voluntary frameworks encourage debate, but it’s difficult to tell whether they translate into practical action.
OpenAI CEO Sam Altman has previously advocated for an open framework for AI, similar to those we’ve observed in other industries. Other tech leaders and EU officials have called for interim regulation to enforce rules before legislation such as the EU AI Act comes into effect.
Thus far, AI development has continued as business as usual, with companies like Inflection building colossal AI training infrastructure that outstrips anything we’ve seen before.
The EU has urged tech firms to take action on particularly harmful AI content, including deep fakes, which already contravene rules such as the EU’s Digital Services Act.
When EU and US leaders met in Sweden in late May, they discussed a potential regulatory ‘stopgap’ to slow AI development, but there’s no substantial evidence that the technology’s trajectory has changed.
The problem with voluntary rules is that they are just that: voluntary. AI companies would likely argue they’re already answering the questions posed by the World Ethical Data Foundation.
Even compulsory rules aren’t guaranteed to achieve their objectives, and mitigating AI’s risks is very much a work in progress.