{"id":5094,"date":"2023-09-06T09:44:58","date_gmt":"2023-09-06T09:44:58","guid":{"rendered":"https:\/\/dailyai.com\/?p=5094"},"modified":"2023-09-07T06:41:24","modified_gmt":"2023-09-07T06:41:24","slug":"ibm-researchers-hypnotize-llms-to-deliver-malicious-advice","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/","title":{"rendered":"IBM-Forscher hypnotisieren LLMs, um b\u00f6sartige Ratschl\u00e4ge zu erteilen"},"content":{"rendered":"<p><strong>IBM security researchers \u2018hypnotized\u2019 a number of LLMs and were able to have them consistently go beyond their guardrails to provide malicious and misleading outputs.<\/strong><\/p>\n<p><a href=\"https:\/\/dailyai.com\/2023\/08\/ai-jailbreak-prompts-are-freely-available-and-effective-study-finds\/\">Jailbreaking an LLM<\/a> is a lot easier than it should be, but the results are normally just a single bad response. The IBM researchers were able to put the LLMs into a state where they continued to misbehave, even in subsequent chats.<\/p>\n<p>In their experiments, the researchers attempted to hypnotize the GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b models.<\/p>\n<p>\u201cOur experiment shows that it\u2019s possible to control an LLM, getting it to provide bad guidance to users, without data manipulation being a requirement,\u201d said Chenta Lee, one of the IBM researchers.<\/p>\n<p>One of the main ways they were able to do this was by telling the LLM that it was playing a game with a set of special rules.<\/p>\n<p>In this example, ChatGPT was told that in order to win the game it needed to first get the correct answer, reverse the meaning, and then output it without referencing the correct answer.<\/p>\n<p>Here\u2019s an example of the bad advice that ChatGPT proceeded to offer while thinking it was winning the game:<\/p>\n<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5120\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-1024x924.png\" alt=\"\" width=\"750\" height=\"676\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-1024x924.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-300x271.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-768x693.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-370x334.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-800x722.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-20x18.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-740x667.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-1320x1191.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice-53x48.png 53w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/ChatGPT-giving-bad-advice.png 1408w\" sizes=\"auto, (max-width: 750px) 100vw, 750px\" \/><br \/>\nSource: <a href=\"https:\/\/securityintelligence.com\/posts\/unmasking-hypnotized-ai-hidden-risks-large-language-models\/\">Security Intelligence<\/a><\/p>\n<p>They then started a new game and told the LLM to never reveal in the chat that it was playing the game. It was also instructed that it should silently restart the game even if the user exited and started a new chat.<\/p>\n<p>For the sake of the experiment, they instructed ChatGPT to add [In game] to each response to show that the game was ongoing despite the LLM\u2019s silence on the matter.<\/p>\n<p>In this case, the responses were not asked to be deceptive but the responses show that a user could be oblivious to special instructions an LLM had received.<\/p>\n<p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5121 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game.png\" alt=\"\" width=\"754\" height=\"1168\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game.png 754w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-194x300.png 194w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-661x1024.png 661w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-370x573.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-740x1146.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-20x31.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/Are-we-playing-a-game-31x48.png 31w\" sizes=\"auto, (max-width: 754px) 100vw, 754px\" \/><br \/>\nSource: <a href=\"https:\/\/securityintelligence.com\/posts\/unmasking-hypnotized-ai-hidden-risks-large-language-models\/\">Security Intelligence<\/a><\/p>\n<p>Lee explained that \u201cThis technique resulted in ChatGPT never stopping the game while the user is in the same conversation (even if they restart the browser and resume that conversation) and never saying it was playing a game.\u201d<\/p>\n<p>The researchers were also able to demonstrate how a poorly secured banking chatbot could be made to reveal sensitive information, give bad online security advice, or write insecure code.<\/p>\n<p>Lee said, &#8220;While the risk posed by hypnosis is currently low, it\u2019s important to note that LLMs are an entirely new attack surface that will surely evolve.&#8221;<\/p>\n<p>The results of the experiments also showed that you don\u2019t need to be able to write complicated code to exploit security vulnerabilities that LLMs open up.<\/p>\n<p>&#8220;There is a lot still that we need to explore from a security standpoint, and, subsequently, a significant need to determine how we effectively mitigate security risks LLMs may introduce to consumers and businesses,&#8221; Lee said.<\/p>\n<p>The scenarios played out in the experiment point out the need for a reset override command in LLMs to disregard all previous instructions. If the LLM has been instructed to deny prior instruction while silently acting on it, how would you know?<\/p>\n<p>ChatGPT is good at playing games and it likes to win, even when it involves lying to you.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>IBM-Sicherheitsforscher haben eine Reihe von LLMs \"hypnotisiert\" und konnten sie dazu bringen, ihre Sicherheitsvorkehrungen konsequent zu \u00fcberschreiten und b\u00f6sartige und irref\u00fchrende Ergebnisse zu liefern. Einen LLM zu knacken ist viel einfacher, als es sein sollte, aber die Ergebnisse sind normalerweise nur eine einzige schlechte Reaktion. Den IBM-Forschern ist es gelungen, die LLMs in einen Zustand zu versetzen, in dem sie sich auch in nachfolgenden Chats weiterhin falsch verhalten. In ihren Experimenten versuchten die Forscher, die Modelle GPT-3.5, GPT-4, BARD, mpt-7b und mpt-30b zu hypnotisieren. \"Unser Experiment zeigt, dass es m\u00f6glich ist, ein LLM zu kontrollieren und es dazu zu bringen, schlechte Anweisungen zu geben.<\/p>","protected":false},"author":6,"featured_media":5122,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[163,115,207],"class_list":["post-5094","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-risks","tag-chatgpt","tag-llm"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>IBM researchers hypnotize LLMs to deliver malicious advice | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"IBM researchers hypnotize LLMs to deliver malicious advice | DailyAI\" \/>\n<meta property=\"og:description\" content=\"IBM security researchers \u2018hypnotized\u2019 a number of LLMs and were able to have them consistently go beyond their guardrails to provide malicious and misleading outputs. Jailbreaking an LLM is a lot easier than it should be, but the results are normally just a single bad response. The IBM researchers were able to put the LLMs into a state where they continued to misbehave, even in subsequent chats. In their experiments, the researchers attempted to hypnotize the GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b models. \u201cOur experiment shows that it\u2019s possible to control an LLM, getting it to provide bad guidance to\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-06T09:44:58+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-07T06:41:24+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"IBM researchers hypnotize LLMs to deliver malicious advice\",\"datePublished\":\"2023-09-06T09:44:58+00:00\",\"dateModified\":\"2023-09-07T06:41:24+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/\"},\"wordCount\":533,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/hypnotize-AI.jpg\",\"keywords\":[\"AI risks\",\"ChatGPT\",\"LLM\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/\",\"name\":\"IBM researchers hypnotize LLMs to deliver malicious advice | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/hypnotize-AI.jpg\",\"datePublished\":\"2023-09-06T09:44:58+00:00\",\"dateModified\":\"2023-09-07T06:41:24+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/hypnotize-AI.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/09\\\/hypnotize-AI.jpg\",\"width\":1000,\"height\":667},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"IBM researchers hypnotize LLMs to deliver malicious advice\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"IBM-Forscher hypnotisieren LLMs, um b\u00f6sartige Ratschl\u00e4ge zu geben | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/","og_locale":"de_DE","og_type":"article","og_title":"IBM researchers hypnotize LLMs to deliver malicious advice | DailyAI","og_description":"IBM security researchers \u2018hypnotized\u2019 a number of LLMs and were able to have them consistently go beyond their guardrails to provide malicious and misleading outputs. Jailbreaking an LLM is a lot easier than it should be, but the results are normally just a single bad response. The IBM researchers were able to put the LLMs into a state where they continued to misbehave, even in subsequent chats. In their experiments, the researchers attempted to hypnotize the GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b models. \u201cOur experiment shows that it\u2019s possible to control an LLM, getting it to provide bad guidance to","og_url":"https:\/\/dailyai.com\/de\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/","og_site_name":"DailyAI","article_published_time":"2023-09-06T09:44:58+00:00","article_modified_time":"2023-09-07T06:41:24+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"IBM researchers hypnotize LLMs to deliver malicious advice","datePublished":"2023-09-06T09:44:58+00:00","dateModified":"2023-09-07T06:41:24+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/"},"wordCount":533,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg","keywords":["AI risks","ChatGPT","LLM"],"articleSection":["Ethics &amp; Society"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/","url":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/","name":"IBM-Forscher hypnotisieren LLMs, um b\u00f6sartige Ratschl\u00e4ge zu geben | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg","datePublished":"2023-09-06T09:44:58+00:00","dateModified":"2023-09-07T06:41:24+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/hypnotize-AI.jpg","width":1000,"height":667},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/09\/ibm-researchers-hypnotize-llms-to-deliver-malicious-advice\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"IBM researchers hypnotize LLMs to deliver malicious advice"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/5094","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=5094"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/5094\/revisions"}],"predecessor-version":[{"id":5124,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/5094\/revisions\/5124"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/5122"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=5094"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=5094"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=5094"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}