{"id":8944,"date":"2024-01-06T20:37:14","date_gmt":"2024-01-06T20:37:14","guid":{"rendered":"https:\/\/dailyai.com\/?p=8944"},"modified":"2024-01-06T23:30:16","modified_gmt":"2024-01-06T23:30:16","slug":"the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/","title":{"rendered":"NIST publishes paper on four possible types of generative AI attacks"},"content":{"rendered":"<p><b>The US National Institute of Standards and Technology (NIST) has expressed concern about the security of predictive and generative AI systems.<\/b><\/p>\n<p><span style=\"font-weight: 400;\">According to Apostol Vassilev, a computer scientist at NIST, these technologies remain vulnerable to a range of attacks despite advances in security.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In a collaborative <a href=\"https:\/\/csrc.nist.gov\/pubs\/ai\/100\/2\/e2023\/ipd\" target=\"_blank\" rel=\"noopener\">paper<\/a> titled \"<\/span><span style=\"font-weight: 400;\">Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations<\/span><span style=\"font-weight: 400;\">\", Vassilev and colleagues from Northeastern University and Robust Intelligence categorize the security risks posed by AI systems.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vassilev said: \"Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">He also cautioned against any company claiming to offer \"fully secure AI\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This is part of <\/span><a 
href=\"https:\/\/www.nist.gov\/trustworthy-and-responsible-ai\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">NIST's Trustworthy and Responsible AI initiative<\/span><\/a><span style=\"font-weight: 400;\">, which aligns with the US government's goals for AI safety. The paper examines adversarial machine learning techniques, focusing on four main security concerns: evasion, poisoning, privacy, and abuse attacks.<\/span><\/p>\n<p>Evasion attacks occur after deployment, altering inputs to confuse AI systems. Examples include modifying stop signs so autonomous vehicles misread them as speed limit signs, or creating misleading lane markings to steer vehicles off course.<\/p>\n<p>Poisoning attacks introduce corrupted data during training. This might involve embedding frequent instances of inappropriate language in training datasets, leading a chatbot to use that language in customer interactions.<\/p>\n<p>Privacy attacks aim to extract sensitive information about the AI or its training data, often through reverse-engineering methods. This might involve using a chatbot's responses to deduce its training sources and weaknesses.<\/p>\n<p>Abuse attacks manipulate legitimate sources, such as webpages, feeding AI systems false information to alter their operation. This differs from poisoning attacks, which corrupt the training process itself.<\/p>\n<p><span style=\"font-weight: 400;\">Evasion attacks involve crafting adversarial examples to fool AI systems during deployment, such as causing self-driving cars to misidentify stop signs.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Alina Oprea of Northeastern University, who was involved in the <a href=\"https:\/\/www.nist.gov\/news-events\/news\/2024\/01\/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems\" target=\"_blank\" rel=\"noopener\">study<\/a>, said: <\/span>\"Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities.\"<\/p>\n<h2>NIST criticized over links to AI think tank<\/h2>\n<p><span style=\"font-weight: 400;\">Separately, <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/12\/congress-concerned-about-rands-influence-on-ai-safety-body\/\"><span style=\"font-weight: 400;\">concerns have been raised<\/span><\/a><span style=\"font-weight: 400;\"> over a planned AI research partnership between NIST and RAND Corp.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">RAND, known for its ties to tech billionaires and the <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/11\/effective-altruism-long-termism-and-politics-in-openai\/\"><span style=\"font-weight: 400;\">effective altruism movement<\/span><\/a><span style=\"font-weight: 400;\">, played a key advisory role in shaping the<\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/10\/dissecting-the-landmark-white-house-executive-order-on-ai\/\"><span style=\"font-weight: 400;\"> executive order on AI safety<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Members of the House Committee on Science, Space and Technology, including Frank Lucas and Zoe Lofgren, criticized the lack of transparency in this partnership.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The committee's concerns are twofold: first, it questions why there was no 
competitive process in selecting RAND for this AI safety research.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When government agencies like NIST award research grants, they usually invite different organizations to apply, ensuring a fair selection process. In this case, however, RAND appears to have been chosen without such a process.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Second, there is some unease about RAND's focus in AI research. RAND has been involved in studies of AI and biosecurity, and recently received substantial funding for this work from sources closely tied to the tech industry.\u00a0<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>The US National Institute of Standards and Technology (NIST) has expressed concern about the security of predictive and generative AI systems. According to Apostol Vassilev, a computer scientist at NIST, these technologies remain vulnerable to a range of attacks despite advances in security. In a collaborative paper titled \"Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations\", Vassilev and colleagues from Northeastern University and Robust Intelligence categorize the security risks posed by AI systems. 
Vassilev said: \"Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures<\/p>","protected":false},"author":2,"featured_media":8945,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[512],"class_list":["post-8944","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-nist"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>The NIST publishes paper on four possible types of generative AI attacks | DailyAI<\/title>\n<meta name=\"description\" content=\"The US National Institute of Standards and Technology (NIST) has released a new paper on the risks of generative AI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The NIST publishes paper on four possible types of generative AI attacks | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The US National Institute of Standards and Technology (NIST) has released a new paper on the risks of generative AI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-01-06T20:37:14+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-01-06T23:30:16+00:00\" \/>\n<meta 
property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"The NIST publishes paper on four possible types of generative AI 
attacks\",\"datePublished\":\"2024-01-06T20:37:14+00:00\",\"dateModified\":\"2024-01-06T23:30:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/\"},\"wordCount\":513,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/shutterstock_1905754738.jpg\",\"keywords\":[\"NIST\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/\",\"name\":\"The NIST publishes paper on four possible types of generative AI attacks | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/shutterstock_1905754738.jpg\",\"datePublished\":\"2024-01-06T20:37:14+00:00\",\"dateModified\":\"2024-01-06T23:30:16+00:00\",\"description\":\"The US National Institute of Standards and Technology (NIST) has released a new paper on the risks of generative 
AI.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/shutterstock_1905754738.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/01\\\/shutterstock_1905754738.jpg\",\"width\":1000,\"height\":667,\"caption\":\"NIST\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/01\\\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The NIST publishes paper on four possible types of generative AI attacks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"NIST udgiver artikel om fire mulige typer af generative AI-angreb | DailyAI","description":"Det amerikanske National Institute of Standards and Technology (NIST) har udgivet et nyt dokument om risikoen ved generativ AI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/","og_locale":"da_DK","og_type":"article","og_title":"The NIST publishes paper on four possible types of generative AI attacks | DailyAI","og_description":"The US National Institute of Standards and Technology (NIST) has released a new paper on the risks of generative AI.","og_url":"https:\/\/dailyai.com\/da\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/","og_site_name":"DailyAI","article_published_time":"2024-01-06T20:37:14+00:00","article_modified_time":"2024-01-06T23:30:16+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Sam Jeans","Estimeret l\u00e6setid":"3 
minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"The NIST publishes paper on four possible types of generative AI attacks","datePublished":"2024-01-06T20:37:14+00:00","dateModified":"2024-01-06T23:30:16+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/"},"wordCount":513,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg","keywords":["NIST"],"articleSection":["Industry"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/","url":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/","name":"NIST udgiver artikel om fire mulige typer af generative AI-angreb | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg","datePublished":"2024-01-06T20:37:14+00:00","dateModified":"2024-01-06T23:30:16+00:00","description":"Det amerikanske National Institute of Standards and Technology (NIST) har udgivet et nyt dokument om risikoen ved generativ AI.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/01\/shutterstock_1905754738.jpg","width":1000,"height":667,"caption":"NIST"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/01\/the-nist-publishes-paper-on-four-possible-types-of-generative-ai-attacks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"The NIST publishes paper on four possible types of generative AI attacks"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af 
AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er videnskabs- og teknologiforfatter og har arbejdet i forskellige AI-startups. 
N\u00e5r han ikke skriver, kan han finde p\u00e5 at l\u00e6se medicinske tidsskrifter eller grave i kasser med vinylplader.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8944","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=8944"}],"version-history":[{"count":6,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8944\/revisions"}],"predecessor-version":[{"id":8954,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/8944\/revisions\/8954"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/8945"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=8944"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=8944"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=8944"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}