{"id":11705,"date":"2024-04-24T08:20:50","date_gmt":"2024-04-24T08:20:50","guid":{"rendered":"https:\/\/dailyai.com\/?p=11705"},"modified":"2024-04-24T08:20:50","modified_gmt":"2024-04-24T08:20:50","slug":"llm-agents-can-autonomously-exploit-one-day-vulnerabilities","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/","title":{"rendered":"LLM agents can autonomously exploit one-day vulnerabilities"},"content":{"rendered":"<p><strong>Researchers at the University of Illinois Urbana-Champaign (UIUC) found that AI agents powered by GPT-4 can autonomously exploit cybersecurity vulnerabilities.<\/strong><\/p>\n<p>As AI models become more powerful, their dual-use nature offers the potential for good and bad in equal measure. LLMs like GPT-4 are increasingly being used to commit cybercrime, with <a href=\"https:\/\/dailyai.com\/de\/2023\/11\/googles-cybersecurity-forecast-sees-ai-playing-a-big-role\/\">Google forecasting<\/a> that AI will play a big role in both committing and preventing these attacks.<\/p>\n<p>The threat of <a href=\"https:\/\/dailyai.com\/de\/2023\/08\/fraudgpt-and-the-rise-of-new-ai-powered-cybercrime-tools\/\">AI-powered cybercrime<\/a> has grown as LLMs move beyond simple prompt-response interactions and act as autonomous AI agents.<\/p>\n<p>In <a href=\"https:\/\/arxiv.org\/pdf\/2404.08144.pdf\" target=\"_blank\" rel=\"noopener\">their paper<\/a>, the researchers explained how they tested the ability of AI agents to exploit identified \"one-day\" vulnerabilities.<\/p>\n<p>A one-day vulnerability is a security flaw in a software system that has been officially identified and publicly disclosed, but has not yet been fixed or patched by the software's developers.<\/p>\n<p>During this window the software remains vulnerable, and malicious actors with the right knowledge can exploit it.<\/p>\n<p>When a one-day vulnerability is identified, it is documented in detail using the Common Vulnerabilities and Exposures (CVE) standard. The CVE standard is meant to highlight the specifics of the vulnerabilities that need fixing, but it also lets bad actors know where the security gaps are.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">We showed that LLM agents can autonomously hack mock websites, but can they exploit real-world vulnerabilities?<\/p>\n<p>We show that GPT-4 is capable of exploiting real-world vulnerabilities, where other models and open-source vulnerability scanners fail.<\/p>\n<p>Paper: <a href=\"https:\/\/t.co\/utbmMdYfmu\">https:\/\/t.co\/utbmMdYfmu<\/a><\/p>\n<p>1\/7 <a href=\"https:\/\/t.co\/SAhdvZc8le\">https:\/\/t.co\/SAhdvZc8le<\/a><\/p>\n<p>- Daniel Kang (@daniel_d_kang) <a href=\"https:\/\/twitter.com\/daniel_d_kang\/status\/1780294662017671669?ref_src=twsrc%5Etfw\">April 16, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>The experiment<\/h2>\n<p>The researchers created AI agents based on GPT-4, GPT-3.5, and 8 other open-source LLMs.<\/p>\n<p>They gave the agents access to tools, the CVE descriptions, and the ReAct agent framework. 
The ReAct framework bridges the gap so that the LLM can interact with other software and systems.<\/p>\n<figure id=\"attachment_11706\" aria-describedby=\"caption-attachment-11706\" style=\"width: 1266px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11706 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent.png\" alt=\"\" width=\"1266\" height=\"490\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent.png 1266w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent-300x116.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent-1024x396.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent-768x297.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent-18x7.png 18w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-cybersecurity-exploit-agent-60x23.png 60w\" sizes=\"auto, (max-width: 1266px) 100vw, 1266px\" \/><figcaption id=\"caption-attachment-11706\" class=\"wp-caption-text\">System diagram of the LLM agent. Source: arXiv<\/figcaption><\/figure>\n<p>The researchers compiled a set of 15 real-world one-day vulnerabilities and tasked the agents with exploiting them autonomously.<\/p>\n<p>GPT-3.5 and the open-source models all failed in these attempts, but GPT-4 successfully exploited 87% of the one-day vulnerabilities.<\/p>\n<p>Once the CVE description was removed, the success rate dropped from 87% to 7%. 
This suggests that GPT-4 can exploit vulnerabilities when given the CVE details, but is not very good at identifying the vulnerabilities without that guidance.<\/p>\n<h2>Implications<\/h2>\n<p>Cybercrime and hacking used to require specialized skills, but AI is lowering the barrier to entry. The researchers said that building their AI agent took just 91 lines of code.<\/p>\n<p>As AI models advance, the skill level required to exploit cybersecurity vulnerabilities will keep dropping, and so will the cost of scaling these autonomous attacks.<\/p>\n<p>When the researchers tallied up the API costs of their experiment, their GPT-4 agent had incurred $8.80 per exploit. They estimate that a cybersecurity expert charging $50 an hour would come to $25 per exploit.<\/p>\n<p>That means using an LLM agent is already 2.8 times cheaper than human labor, and far easier to scale than finding human experts. Once GPT-5 and other more capable LLMs arrive, these capability and cost gaps will only widen.<\/p>\n<p>The researchers say their findings \"underscore the need for the broader cybersecurity community and LLM providers to think carefully about how LLM agents can be integrated into defensive measures and whether they should be widely deployed\".<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers at the University of Illinois Urbana-Champaign (UIUC) found that AI agents powered by GPT-4 can autonomously exploit cybersecurity vulnerabilities. 
As AI models become more powerful, their dual-use nature offers the potential for good and bad in equal measure. LLMs like GPT-4 are increasingly being used to commit cybercrime, with Google forecasting that AI will play a big role in committing and preventing these attacks. The threat of AI-powered cybercrime has grown as LLMs move beyond simple prompt-response interactions and act as autonomous AI agents. In their paper, the researchers explain how they tested the capability of AI agents to exploit<\/p>","protected":false},"author":6,"featured_media":11707,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,365,118],"class_list":["post-11705","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-cybersecurity","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LLM agents can autonomously exploit one-day vulnerabilities | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LLM agents can autonomously exploit one-day vulnerabilities | DailyAI\" \/>\n<meta property=\"og:description\" content=\"University of Illinois Urbana-Champaign (UIUC) researchers found that AI agents powered by GPT-4 can autonomously exploit cybersecurity vulnerabilities. 
As AI models become more powerful, their dual-use nature offers the potential for good and bad in equal measure. LLMs like GPT-4 are increasingly being used to commit cybercrime, with Google forecasting that AI will play a big role in committing and preventing these attacks. The threat of AI-powered cybercrime has been elevated as LLMs move beyond simple prompt-response interactions and act as autonomous AI agents. In their paper, the researchers explained how they tested the capability of AI agents to exploit\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-24T08:20:50+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"LLM agents can autonomously exploit one-day vulnerabilities\",\"datePublished\":\"2024-04-24T08:20:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/\"},\"wordCount\":557,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/AI-agent-cybersecurity-exploits.webp\",\"keywords\":[\"AI risks\",\"Cybersecurity\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/\",\"name\":\"LLM agents can autonomously exploit one-day vulnerabilities | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/AI-agent-cybersecurity-exploits.webp\",\"datePublished\":\"2024-04-24T08:20:50+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/AI-agent-cybersecurity-exploits.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/AI-agent-cybersecurity-exploits.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LLM agents can autonomously exploit one-day vulnerabilities\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"LLM agents can autonomously exploit one-day vulnerabilities | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/","og_locale":"de_DE","og_type":"article","og_title":"LLM agents can autonomously exploit one-day vulnerabilities | DailyAI","og_description":"University of Illinois Urbana-Champaign (UIUC) researchers found that AI agents powered by GPT-4 can autonomously exploit cybersecurity vulnerabilities. As AI models become more powerful, their dual-use nature offers the potential for good and bad in equal measure. LLMs like GPT-4 are increasingly being used to commit cybercrime, with Google forecasting that AI will play a big role in committing and preventing these attacks. The threat of AI-powered cybercrime has been elevated as LLMs move beyond simple prompt-response interactions and act as autonomous AI agents. 
In their paper, the researchers explained how they tested the capability of AI agents to exploit","og_url":"https:\/\/dailyai.com\/de\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/","og_site_name":"DailyAI","article_published_time":"2024-04-24T08:20:50+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"LLM agents can autonomously exploit one-day vulnerabilities","datePublished":"2024-04-24T08:20:50+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/"},"wordCount":557,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp","keywords":["AI 
risks","Cybersecurity","LLMS"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/","url":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/","name":"LLM agents can autonomously exploit one-day vulnerabilities | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp","datePublished":"2024-04-24T08:20:50+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/AI-agent-cybersecurity-exploits.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/llm-agents-can-autonomously-exploit-one-day-vulnerabilities\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"LLM agents can autonomously exploit one-day 
vulnerabilities"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/11705","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=11705"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/11705\/revisions"}],"predecessor-version":[{"id":11710,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/11705\/revisions\/11710"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/11707"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=11705"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=11705"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=11705"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}