AI startup Anthropic is reportedly in talks to secure $750 million in funding.
This financing round, led by the venture capital firm Menlo Ventures, would value Anthropic at around $18 billion.
Funding for AI companies surged in 2023, with numerous startups reaching multi-billion-dollar valuations within the space of a few months.
Anthropic is a prime example of how highly valued AI companies have seemingly appeared out of nowhere.
Founded in 2021 by Dario and Daniela Amodei, who previously held executive roles at Microsoft-backed competitor OpenAI, Anthropic pioneered large language models (LLMs) that operate via a ‘constitution’ guiding their behavior. The company’s current product is Claude 2, which at face value works similarly to most other LLMs.
Anthropic has already attracted substantial investments from tech giants Google and Amazon, which have agreed to invest up to $2 billion and $4 billion, respectively. As part of Google’s investment, Anthropic entered into a multi-year cloud agreement with the company.
Amazon’s investment granted its employees and cloud customers early access to Anthropic’s AI technologies. In turn, Anthropic has committed to training its AI models on Amazon’s cloud infrastructure, including its relatively new proprietary Trainium and Inferentia chips.
Anthropic, alongside fellow US AI startup Inflection, has attracted billions in investment from tech giants seeking to align themselves with the top talent in the industry.
However, financial interest in Anthropic has also grown overseas: the company received a $100 million investment from South Korea’s SK Telecom to develop LLMs tailored to the telecommunications industry.
The model aims to support various languages, including Arabic, English, German, Japanese, Korean, and Spanish.
This week, Time named Anthropic’s constitutional AI as one of the top 3 AI innovations of 2023. As described in a December 2022 paper, this involves creating a “constitution” that outlines desired AI values. The AI is then trained to evaluate responses based on their alignment with this constitution.
As the Anthropic researchers note, “These methods make it possible to control AI behavior more precisely and with far fewer human labels.” This method was instrumental in developing Claude, Anthropic’s 2023 response to ChatGPT.
Jack Clark, Anthropic’s head of policy, highlighted the significance of this approach to Time, saying, “With constitutional AI, you’re explicitly writing down the normative premises with which your model should approach the world. Then the model is training on that.”
Anthropic also attempted to crowdsource an AI constitution from some 1,000 Americans to explore consensus-building for a chatbot’s guidelines.
The results were promising, suggesting potential for broader public involvement in AI governance, in contrast with the current norm of developers setting the rules themselves.