UK and US develop new global guidelines for AI security

November 27, 2023

As AI technologies become widely deployed, they also become targets for cybercriminals. Cyber security agencies from the UK and US have developed new guidelines on how to make AI more secure.

The stated aim of the guidelines is to enable AI companies to “build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”

Basically, it's about how to make sure your AI can't be hijacked or tricked into handing over users' private data.

The guidelines set out design principles that should be applied throughout the AI development life cycle. In summary, the four key areas addressed are:

  1. Secure design
    Assess risk and do threat modeling during the design phase.
  2. Secure development
    Manage your supply chain, documentation, and digital assets securely while building your product. If you take insecure shortcuts, be sure to go back and fix them.
  3. Secure deployment
    When you give users access to your product, make sure they can't reach sensitive parts of it via the API or other means (see the sketch after this list). Only deploy after rigorous red teaming and testing.
  4. Secure operation and maintenance
    Monitor your product, roll out updates carefully, and don’t keep it a secret when things go wrong.
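
The guidelines themselves don't prescribe any particular implementation, but as a rough illustration of the "secure deployment" point, here's a minimal Python sketch of gating a model behind basic input and output checks rather than exposing it directly through an API. All of the names, patterns, and limits below are hypothetical, not drawn from the guidelines.

```python
import re

# Hypothetical sketch: the patterns and limits here are illustrative,
# not taken from the UK/US guidelines.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                    # card-number-like digit runs
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
]

def redact(text: str) -> str:
    """Mask anything matching a known sensitive pattern."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def handle_request(prompt: str, model_call) -> str:
    """Never expose the model directly: validate input, filter output."""
    if len(prompt) > 4096:  # crude input validation
        raise ValueError("prompt too long")
    return redact(model_call(prompt))

if __name__ == "__main__":
    fake_model = lambda p: f"Echo: {p} (contact: alice@example.com)"
    print(handle_request("hello", fake_model))
    # -> Echo: hello (contact: [REDACTED])
```

A real deployment would add authentication, rate limiting, and logging on top of this, but the principle is the same: the model's raw output is never returned to the caller unchecked.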

The guidelines are light on technical detail when it comes to implementation, but they're a good start. In addition to the UK and US, a further 16 countries endorsed the guidelines.

The full list of signatories is:
Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, Republic of Korea, Singapore, United Kingdom, United States of America.

Notably absent from the list are China and Russia, probably the most significant sources of cyber attacks on Western countries.

The guidelines are non-binding, so the countries that endorsed the document are really just saying, 'We think these are some good AI design principles to adhere to.' And in spite of the UK being at the forefront of these endeavors, it has said it won't be enacting any new AI legislation in the near future.

It's in every country's interest to make its AI products as secure as possible, but whether that's achievable remains to be seen. Zero-day vulnerabilities in commercial software and systems seem like an everyday occurrence.

With regular reports of jailbreaking exploits against AI systems, is it reasonable to expect that these new models can be properly secured against cyber attacks?

The imagined, or predicted, dangers of AI are a potential future threat. The cybercriminals who are no doubt already probing AI vulnerabilities are a far more clear and present danger.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
