Big Tech AI names join the Coalition for Secure AI (CoSAI)

July 19, 2024

  • The largest Big Tech companies came together to cofound the Coalition for Secure AI (CoSAI)
  • CoSAI will share open-source methodologies, standardized frameworks, and tools for safe AI development
  • Current AI safety measures for securing AI and AI applications and services are fragmented

Some of the most prominent names in Big Tech have come together to cofound the Coalition for Secure AI (CoSAI).

A global standard for safe AI development practices doesn’t exist yet, with current AI safety measures fragmented and often kept in-house by the companies that create AI models.

CoSAI is an open-source initiative hosted by the OASIS global standards body that aims to standardize and share best practices related to the safe development and deployment of AI.

The who’s who of Big Tech companies supporting the initiative include Google, IBM, Intel, Microsoft, NVIDIA, and PayPal. Additional founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz.

Notably absent are Apple and Meta.

CoSAI aims to develop and share comprehensive security measures that address risks including:

  • stealing the model
  • data poisoning of the training data
  • injecting malicious inputs through prompt injection
  • scaled abuse prevention
  • membership inference attacks
  • model inversion attacks or gradient inversion attacks to infer private information
  • extracting confidential information from the training data

CoSAI’s charter says that the “project does not envision the following topics as being in scope: misinformation, hallucinations, hateful or abusive content, bias, malware generation, phishing content generation or other topics in the domain of content safety.”

Google already has its Google Secure AI Framework (SAIF) and OpenAI has its beleaguered alignment project. However, until CoSAI there hasn’t been a forum to combine the AI safety best practices that industry players have developed independently.

We’ve seen small startups like Mistral experience meteoric rises with the AI models they produced but many of these smaller companies don’t have the resources to fund AI safety teams.

CoSAI will be a valuable free source of AI safety best practices for small and large players in the industry.

Heather Adkins, Vice President and Cybersecurity Resilience Officer at Google said, “We’ve been using AI for many years and see the ongoing potential for defenders, but also recognize its opportunities for adversaries.

“CoSAI will help organizations, big and small, securely and responsibly integrate AI – helping them leverage its benefits while mitigating risks.”

Nick Hamilton, Head of Governance, Risk, and Compliance at OpenAI said, “Developing and deploying AI technologies that are secure and trustworthy is central to OpenAI’s mission.

“We believe that developing robust standards and practices is essential for ensuring the safe and responsible use of AI and we’re committed to collaborating across the industry to do so.

“Through our participation in CoSAI, we aim to contribute our expertise and resources to help create a secure AI ecosystem that benefits everyone.”

Let’s hope people like Ilya Sutskever and others who left OpenAI due to safety concerns volunteer their input to CoSAI.

Join The Future


SUBSCRIBE TODAY

Clear, concise, comprehensive. Get a grip on AI developments with DailyAI

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

×

FREE PDF EXCLUSIVE
Stay Ahead with DailyAI

Sign up for our weekly newsletter and receive exclusive access to DailyAI's Latest eBook: 'Mastering AI Tools: Your 2024 Guide to Enhanced Productivity'.

*By subscribing to our newsletter you accept our Privacy Policy and our Terms and Conditions