{"id":13539,"date":"2024-07-22T10:04:27","date_gmt":"2024-07-22T10:04:27","guid":{"rendered":"https:\/\/dailyai.com\/?p=13539"},"modified":"2024-07-22T10:04:27","modified_gmt":"2024-07-22T10:04:27","slug":"llm-refusal-training-easily-bypassed-with-past-tense-prompts","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/","title":{"rendered":"LLM-Verweigerungstraining leicht umgangen mit Aufforderungen zur Vergangenheitsform"},"content":{"rendered":"<p><strong>Forscher der Eidgen\u00f6ssischen Technischen Hochschule Lausanne (EPFL) fanden heraus, dass das Schreiben von gef\u00e4hrlichen Aufforderungen in der Vergangenheitsform das Verweigerungstraining der fortgeschrittensten LLMs umgeht.<\/strong><\/p>\n<p>KI-Modelle werden in der Regel mit Techniken wie \u00fcberwachter Feinabstimmung (SFT) oder verst\u00e4rkendem Lernen mit menschlichem Feedback (RLHF) angepasst, um sicherzustellen, dass das Modell nicht auf gef\u00e4hrliche oder unerw\u00fcnschte Aufforderungen reagiert.<\/p>\n<p>Dieses Verweigerungstraining setzt ein, wenn du ChatGPT um Rat fragst, wie man eine Bombe oder Drogen herstellt. Wir haben eine Reihe von <a href=\"https:\/\/dailyai.com\/de\/2024\/06\/microsoft-reveal-skeleton-key-jailbreak-which-works-across-different-ai-models\/\">interessante Jailbreak-Techniken<\/a> Aber die von den EPFL-Forschern getestete Methode ist bei weitem die einfachste.<\/p>\n<p>Die Forscher nahmen einen Datensatz von 100 sch\u00e4dlichen Verhaltensweisen und verwendeten GPT-3.5, um die Aufforderungen in der Vergangenheitsform umzuschreiben.<\/p>\n<p>Hier ist ein Beispiel f\u00fcr die Methode, die in <a href=\"https:\/\/arxiv.org\/pdf\/2407.11969\" target=\"_blank\" rel=\"noopener\">ihr Papier<\/a>.<\/p>\n<figure id=\"attachment_13541\" aria-describedby=\"caption-attachment-13541\" style=\"width: 1180px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13541 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense.png\" alt=\"\" width=\"1180\" height=\"574\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense.png 1180w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense-300x146.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense-1024x498.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense-768x374.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense-18x9.png 18w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Rewrite-prompt-in-past-tense-60x29.png 60w\" sizes=\"auto, (max-width: 1180px) 100vw, 1180px\" \/><figcaption id=\"caption-attachment-13541\" class=\"wp-caption-text\">Verwendung eines LLM, um gef\u00e4hrliche Aufforderungen in der Vergangenheitsform umzuschreiben. Quelle: arXiv<\/figcaption><\/figure>\n<p>Anschlie\u00dfend bewerteten sie die Antworten auf diese umgeschriebenen Aufforderungen von diesen 8 LLMs: Llama-3 8B, Claude-3.5 Sonnet, GPT-3.5 Turbo, Gemma-2 9B, Phi-3-Mini, <a href=\"https:\/\/dailyai.com\/de\/2024\/07\/openai-releases-gpt-4o-mini-a-high-performance-super-low-cost-model\/\">GPT-4o-mini<\/a>, GPT-4o, und R2D2.<\/p>\n<p>Sie benutzten mehrere LLMs, um die Ergebnisse zu beurteilen und sie entweder als gescheiterten oder erfolgreichen Ausbruchsversuch zu klassifizieren.<\/p>\n<p>Die einfache \u00c4nderung der Zeitform des Prompts hatte eine \u00fcberraschend signifikante Auswirkung auf die Angriffserfolgsrate (ASR). GPT-4o und GPT-4o mini waren besonders anf\u00e4llig f\u00fcr diese Technik.<\/p>\n<p>Die ASR dieses \"einfachen Angriffs auf GPT-4o steigt von 1% bei direkten Anfragen auf 88% bei 20 Umformulierungsversuchen in der Vergangenheit bei sch\u00e4dlichen Anfragen.\"<\/p>\n<p>Hier ist ein Beispiel daf\u00fcr, wie konform GPT-4o wird, wenn man die Eingabeaufforderung einfach in der Vergangenheitsform umschreibt. Ich habe hierf\u00fcr ChatGPT verwendet und die Schwachstelle wurde noch nicht gepatcht.<\/p>\n<figure id=\"attachment_13542\" aria-describedby=\"caption-attachment-13542\" style=\"width: 1254px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13542 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses.png\" alt=\"\" width=\"1254\" height=\"1058\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses.png 1254w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses-300x253.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses-1024x864.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses-768x648.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses-14x12.png 14w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Present-and-past-tense-prompt-responses-60x51.png 60w\" sizes=\"auto, (max-width: 1254px) 100vw, 1254px\" \/><figcaption id=\"caption-attachment-13542\" class=\"wp-caption-text\">ChatGPT mit GPT-4o verweigert eine Aufforderung in der Gegenwartsform, erf\u00fcllt sie aber, wenn sie in der Vergangenheitsform umgeschrieben wird. Quelle: ChatGPT<\/figcaption><\/figure>\n<p>Beim Verweigerungstraining mit RLHF und SFT wird ein Modell darauf trainiert, sch\u00e4dliche Aufforderungen erfolgreich zu verallgemeinern, auch wenn es die spezifische Aufforderung noch nie gesehen hat.<\/p>\n<p>Wenn die Aufforderung in der Vergangenheitsform geschrieben ist, scheinen die LLMs die F\u00e4higkeit zur Verallgemeinerung zu verlieren. Die anderen LLMs schnitten nicht viel besser ab als GPT-4o, obwohl Llama-3 8B am widerstandsf\u00e4higsten schien.<\/p>\n<figure id=\"attachment_13543\" aria-describedby=\"caption-attachment-13543\" style=\"width: 1268px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-13543 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts.png\" alt=\"\" width=\"1268\" height=\"492\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts.png 1268w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts-300x116.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts-1024x397.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts-768x298.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts-18x7.png 18w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/ASR-using-past-tense-prompts-60x23.png 60w\" sizes=\"auto, (max-width: 1268px) 100vw, 1268px\" \/><figcaption id=\"caption-attachment-13543\" class=\"wp-caption-text\">Erfolgsraten von Angriffen mit gef\u00e4hrlichen Aufforderungen in der Gegenwart und Vergangenheit. Quelle: arXiv<\/figcaption><\/figure>\n<p>Die Umformulierung der Aufforderung in die zuk\u00fcnftige Zeitform f\u00fchrte zu einem Anstieg der ASR, war aber weniger effektiv als die Aufforderung in der Vergangenheit.<\/p>\n<p>Die Forscher kamen zu dem Schluss, dass dies daran liegen k\u00f6nnte, dass \"die Feinabstimmungsdatens\u00e4tze einen h\u00f6heren Anteil an sch\u00e4dlichen Anfragen enthalten, die in der Zukunftsform oder als hypothetische Ereignisse ausgedr\u00fcckt werden\".<\/p>\n<p>Sie schlugen auch vor, dass \"die internen \u00dcberlegungen des Modells zukunftsorientierte Anfragen als potenziell sch\u00e4dlicher interpretieren k\u00f6nnten, w\u00e4hrend Aussagen in der Vergangenheit, wie z. B. historische Ereignisse, als harmloser wahrgenommen werden k\u00f6nnten\".<\/p>\n<h2>Kann es behoben werden?<\/h2>\n<p>Weitere Experimente zeigten, dass das Hinzuf\u00fcgen von Aufforderungen in der Vergangenheitsform zu den Feinabstimmungsdatens\u00e4tzen die Anf\u00e4lligkeit f\u00fcr diese Ausbruchsmethode wirksam reduzierte.<\/p>\n<p>Dieser Ansatz ist zwar wirksam, erfordert aber, dass die Arten von gef\u00e4hrlichen Aufforderungen, die ein Benutzer eingeben k\u00f6nnte, im Voraus ber\u00fccksichtigt werden.<\/p>\n<p>Die Forscher schlagen vor, dass es einfacher ist, die Ausgabe eines Modells zu bewerten, bevor sie dem Benutzer pr\u00e4sentiert wird.<\/p>\n<p>So einfach dieser Jailbreak auch ist, es scheint, dass die f\u00fchrenden KI-Unternehmen noch keinen Weg gefunden haben, ihn zu patchen.<\/p>","protected":false},"excerpt":{"rendered":"<p>Forscher der Eidgen\u00f6ssischen Technischen Hochschule Lausanne (EPFL) fanden heraus, dass das Schreiben gef\u00e4hrlicher Aufforderungen in der Vergangenheitsform das Verweigerungstraining der fortgeschrittensten LLMs umgeht. KI-Modelle werden \u00fcblicherweise mit Techniken wie \u00fcberwachter Feinabstimmung (SFT) oder verst\u00e4rkendem Lernen mit menschlichem Feedback (RLHF) angepasst, um sicherzustellen, dass das Modell nicht auf gef\u00e4hrliche oder unerw\u00fcnschte Aufforderungen reagiert. Dieses Verweigerungstraining setzt ein, wenn Sie ChatGPT um Rat fragen, wie man eine Bombe oder Drogen herstellt. Wir haben eine Reihe interessanter Ausbruchstechniken vorgestellt, die diese Schutzmechanismen umgehen, aber die von den EPFL-Forschern getestete Methode ist bei weitem die einfachste.<\/p>","protected":false},"author":6,"featured_media":13544,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[163,118],"class_list":["post-13539","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-ai-risks","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LLM refusal training easily bypassed with past tense prompts | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LLM refusal training easily bypassed with past tense prompts | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from the Swiss Federal Institute of Technology Lausanne (EPFL) found that writing dangerous prompts in the past tense bypassed the refusal training of the most advanced LLMs. AI models are commonly aligned using techniques like supervised fine-tuning (SFT) or reinforcement learning human feedback (RLHF) to make sure the model doesn\u2019t respond to dangerous or undesirable prompts. This refusal training kicks in when you ask ChatGPT for advice on how to make a bomb or drugs. We\u2019ve covered a range of interesting jailbreak techniques that bypass these guardrails but the method the EPFL researchers tested is by far the simplest.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-07-22T10:04:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"4\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"LLM refusal training easily bypassed with past tense prompts\",\"datePublished\":\"2024-07-22T10:04:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/\"},\"wordCount\":569,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/07\\\/Jailbreak-AI-model-with-past-tense.webp\",\"keywords\":[\"AI risks\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/\",\"name\":\"LLM refusal training easily bypassed with past tense prompts | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/07\\\/Jailbreak-AI-model-with-past-tense.webp\",\"datePublished\":\"2024-07-22T10:04:27+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/07\\\/Jailbreak-AI-model-with-past-tense.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/07\\\/Jailbreak-AI-model-with-past-tense.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/07\\\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LLM refusal training easily bypassed with past tense prompts\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"LLM-Verweigerungstraining leicht umgangen mit Aufforderungen zur Vergangenheitsform | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/","og_locale":"de_DE","og_type":"article","og_title":"LLM refusal training easily bypassed with past tense prompts | DailyAI","og_description":"Researchers from the Swiss Federal Institute of Technology Lausanne (EPFL) found that writing dangerous prompts in the past tense bypassed the refusal training of the most advanced LLMs. AI models are commonly aligned using techniques like supervised fine-tuning (SFT) or reinforcement learning human feedback (RLHF) to make sure the model doesn\u2019t respond to dangerous or undesirable prompts. This refusal training kicks in when you ask ChatGPT for advice on how to make a bomb or drugs. We\u2019ve covered a range of interesting jailbreak techniques that bypass these guardrails but the method the EPFL researchers tested is by far the simplest.","og_url":"https:\/\/dailyai.com\/de\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/","og_site_name":"DailyAI","article_published_time":"2024-07-22T10:04:27+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Verfasst von":"Eugene van der Watt","Gesch\u00e4tzte Lesezeit":"4\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"LLM refusal training easily bypassed with past tense prompts","datePublished":"2024-07-22T10:04:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/"},"wordCount":569,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp","keywords":["AI risks","LLMS"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/","url":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/","name":"LLM-Verweigerungstraining leicht umgangen mit Aufforderungen zur Vergangenheitsform | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp","datePublished":"2024-07-22T10:04:27+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/07\/Jailbreak-AI-model-with-past-tense.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/07\/llm-refusal-training-easily-bypassed-with-past-tense-prompts\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"LLM refusal training easily bypassed with past tense prompts"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/13539","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=13539"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/13539\/revisions"}],"predecessor-version":[{"id":13546,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/13539\/revisions\/13546"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/13544"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=13539"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=13539"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=13539"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}