Open source AI, particularly Meta’s Llama models, has sparked debate over the risks of releasing powerful AI models to everyone.
Last week, a group of protestors rallied outside Meta’s San Francisco offices, condemning the company’s decision to publicly release its AI models.
One banner said, “Don’t open source doomsday.”
The protest was announced on the Effective Altruism Forum, which gave the following rationale: “Meta’s frontier AI models are fundamentally unsafe… Before it releases even more advanced models – which will have more dangerous capabilities – we call on Meta to take responsible release seriously and stop irreversible proliferation.”
The protestors also set up a website that quotes Zuckerberg downplaying the safety implications of open source model releases.
In short, prominent companies like OpenAI and Google restrict access to their large language models (LLMs) to an API. The models are closed ‘black boxes’: the public can’t see what happens inside them, only the inputs and outputs.
In contrast, Meta made headlines by offering its Llama models openly to the AI research community. This lets tech-savvy individuals adapt and fine-tune the models as they see fit, unlike closed AIs, whose safety guardrails users cannot remove.
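To make the closed-versus-open distinction concrete, here is a minimal Python sketch assuming the official openai client library and Hugging Face transformers; the model identifiers are illustrative, and in practice an API key and gated-access approval for the Llama weights would be required.

```python
# Closed model: all you get is an API endpoint; the weights stay on the
# vendor's servers, and you see only inputs and outputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)

# Open model: the weights themselves are downloaded, so anyone can inspect,
# modify, or fine-tune them locally with no vendor in the loop.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The second path is the crux of the debate: once the weights are on a user’s own hardware, any further training or removal of guardrails happens entirely outside the releasing company’s reach.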
Meta’s approach to AI development
Meta has played a somewhat antagonistic role in AI development.
As Google, OpenAI, Microsoft, and startups like Anthropic and Inflection rush to build closed AIs kept under lock and key, Meta has opted to throw its open source models into the public sphere like a live grenade.
Open source AI certainly threatens the market structures that Microsoft, OpenAI, and others depend on to monetize their products.
Meta’s original Llama model was famously leaked on 4chan shortly after its restricted research release, and it wasn’t long before the company went a step further by openly releasing Llama 2.
The protestors argue that Meta’s open source releases amount to an “irreversible proliferation” of potentially dangerous technology. However, others staunchly disagree, arguing that open source AI is vital to democratizing the technology and building trust through transparency.
Holly Elmore, who spearheaded the Friday protest outside Meta, emphasized that once a model’s weights, the learned parameters that determine how it maps inputs to outputs, are in the public domain, its creators lose control. Open source models can be modified and re-engineered for unethical or illegal purposes.
Elmore, previously associated with the think tank Rethink Priorities, stated, “Releasing weights is a dangerous policy… The more powerful the models get, the more dangerous this policy is going to get.”
Open source AI will remain a controversial topic, but leaving the technology under the control of a select few may not be a favorable alternative either.