{"id":11364,"date":"2024-04-05T08:45:19","date_gmt":"2024-04-05T08:45:19","guid":{"rendered":"https:\/\/dailyai.com\/?p=11364"},"modified":"2024-04-05T08:45:19","modified_gmt":"2024-04-05T08:45:19","slug":"anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/","title":{"rendered":"Antr\u00f3pico: LLMs de gran contexto vulnerables a la fuga de muchos disparos"},"content":{"rendered":"<p><strong>Anthropic ha publicado un art\u00edculo en el que describe un m\u00e9todo de \"jailbreaking\" de muchos disparos al que son especialmente vulnerables los LLM de contexto largo.<\/strong><\/p>\n<p>El tama\u00f1o de la ventana de contexto de un LLM determina la longitud m\u00e1xima de una petici\u00f3n. En los \u00faltimos meses, las ventanas de contexto han crecido de forma constante: modelos como Claude Opus han alcanzado una ventana de contexto de un mill\u00f3n de tokens.<\/p>\n<p>La ventana de contexto ampliada hace posible un aprendizaje en contexto m\u00e1s potente. Con un aviso de disparo cero, se pide a un LLM que proporcione una respuesta sin ejemplos previos.<\/p>\n<p>En un enfoque de pocos disparos, el modelo recibe varios ejemplos en la pregunta. Esto permite el aprendizaje en contexto y prepara al modelo para dar una respuesta mejor.<\/p>\n<p>Las ventanas contextuales m\u00e1s grandes significan que el prompt del usuario puede ser extremadamente largo con muchos ejemplos, lo que seg\u00fan Anthropic es a la vez una bendici\u00f3n y una maldici\u00f3n.<\/p>\n<h2>Fuga m\u00faltiple<\/h2>\n<p>El m\u00e9todo de jailbreak es sumamente sencillo. 
El LLM se solicita con un \u00fanico aviso compuesto por un di\u00e1logo falso entre un usuario y un asistente de IA muy complaciente.<\/p>\n<p>El di\u00e1logo consiste en una serie de preguntas sobre c\u00f3mo hacer algo peligroso o ilegal seguidas de respuestas falsas del asistente de IA con informaci\u00f3n sobre c\u00f3mo realizar las actividades.<\/p>\n<p>La pregunta termina con una pregunta objetivo como \"\u00bfC\u00f3mo se construye una bomba?\" y deja que el LLM objetivo responda.<\/p>\n<figure id=\"attachment_11366\" aria-describedby=\"caption-attachment-11366\" style=\"width: 1578px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11366\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak.png\" alt=\"\" width=\"1578\" height=\"904\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak.png 1578w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-300x172.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-1024x587.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-768x440.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-1536x880.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-60x34.png 60w\" sizes=\"auto, (max-width: 1578px) 100vw, 1578px\" \/><figcaption id=\"caption-attachment-11366\" class=\"wp-caption-text\">Pocos disparos vs muchos disparos jailbreak. Fuente: Antr\u00f3pico<\/figcaption><\/figure>\n<p>Si s\u00f3lo tienes unas pocas interacciones de ida y vuelta en el prompt, no funciona. 
Pero con un modelo como el de Claude Opus, el texto puede ser tan largo como varias novelas largas.<\/p>\n<p><a href=\"https:\/\/www-cdn.anthropic.com\/af5633c94ed2beb282f6a53c595eb437e8e7b630\/Many_Shot_Jailbreaking__2024_04_02_0936.pdf\" target=\"_blank\" rel=\"noopener\">En su documento<\/a>Los investigadores de Anthropic descubrieron que \"a medida que el n\u00famero de di\u00e1logos incluidos (el n\u00famero de \"disparos\") aumenta m\u00e1s all\u00e1 de cierto punto, es m\u00e1s probable que el modelo produzca una respuesta perjudicial\".<\/p>\n<p>Tambi\u00e9n descubrieron que cuando se combina con otros <a href=\"https:\/\/dailyai.com\/es\/2023\/11\/study-reveals-new-techniques-for-jailbreak-language-models\/\">t\u00e9cnicas de jailbreaking<\/a>Sin embargo, el enfoque de muchos disparos fue a\u00fan m\u00e1s eficaz o podr\u00eda tener \u00e9xito con indicaciones m\u00e1s cortas.<\/p>\n<figure id=\"attachment_11367\" aria-describedby=\"caption-attachment-11367\" style=\"width: 1400px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11367\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results.png\" alt=\"\" width=\"1400\" height=\"888\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results.png 1400w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results-300x190.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results-1024x650.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results-768x487.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak-results-60x38.png 60w\" sizes=\"auto, (max-width: 1400px) 100vw, 1400px\" \/><figcaption id=\"caption-attachment-11367\" class=\"wp-caption-text\">A medida que aumenta el n\u00famero de di\u00e1logos en la pregunta, aumentan las probabilidades de obtener una respuesta 
perjudicial. Fuente: Antr\u00f3pico<\/figcaption><\/figure>\n<h2>\u00bfSe puede arreglar?<\/h2>\n<p>Anthropic dice que la defensa m\u00e1s f\u00e1cil contra la fuga de muchos disparos es reducir el tama\u00f1o de la ventana contextual de un modelo. Pero entonces se pierden las ventajas obvias de poder utilizar entradas m\u00e1s largas.<\/p>\n<p>Anthropic intent\u00f3 que su LLM identificara cu\u00e1ndo un usuario estaba intentando una fuga m\u00faltiple y se negara a responder a la consulta. Descubrieron que esto simplemente retrasaba la fuga y requer\u00eda una consulta m\u00e1s larga para obtener finalmente el resultado da\u00f1ino.<\/p>\n<p>Clasificando y modificando el mensaje antes de pasarlo al modelo, consiguieron evitar el ataque. Aun as\u00ed, Anthropic dice que es consciente de que variaciones del ataque podr\u00edan eludir la detecci\u00f3n.<\/p>\n<p>Anthropic afirma que la ventana de contexto cada vez m\u00e1s amplia de los LLM \"hace que los modelos sean mucho m\u00e1s \u00fatiles en todo tipo de aspectos, pero tambi\u00e9n hace factible una nueva clase de vulnerabilidades de jailbreaking.\"<\/p>\n<p>La empresa ha publicado su investigaci\u00f3n con la esperanza de que otras empresas de IA encuentren formas de mitigar los ataques con m\u00faltiples disparos.<\/p>\n<p>Una conclusi\u00f3n interesante a la que llegaron los investigadores fue que \"incluso las mejoras positivas e inocuas de los LLM (en este caso, permitir entradas m\u00e1s largas) pueden tener a veces consecuencias imprevistas.\"<\/p>","protected":false},"excerpt":{"rendered":"<p>Anthropic ha publicado un art\u00edculo en el que se describe un m\u00e9todo de \"jailbreaking\" de muchos disparos al que son especialmente vulnerables los LLM de contexto largo. El tama\u00f1o de la ventana de contexto de un LLM determina la longitud m\u00e1xima de una petici\u00f3n. 
Context windows have been growing consistently over the last few months with models like Claude Opus reaching a context window of 1 million tokens. The expanded context window makes more powerful in-context learning possible. With a zero-shot prompt, an LLM is prompted to provide a response without prior examples. In a few-shot approach, the model is provided with several examples in the prompt. This allows for in-context learning and primes<\/p>","protected":false},"author":6,"featured_media":11368,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[],"class_list":["post-11364","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry"],"yoast_head_json":{"title":"Anthropic: Large context LLMs vulnerable to many-shot jailbreak | DailyAI","canonical":"https:\/\/dailyai.com\/es\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/","og_locale":"es_ES","og_type":"article","og_title":"Anthropic: Large context LLMs vulnerable to many-shot jailbreak | DailyAI","og_url":"https:\/\/dailyai.com\/es\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/","og_site_name":"DailyAI","article_published_time":"2024-04-05T08:45:19+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Many-shot-jailbreak.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Reading time":"3 minutes"}}}