{"id":11530,"date":"2024-04-15T10:11:12","date_gmt":"2024-04-15T10:11:12","guid":{"rendered":"https:\/\/dailyai.com\/?p=11530"},"modified":"2024-04-15T10:16:25","modified_gmt":"2024-04-15T10:16:25","slug":"googles-infini-attention-gives-llms-infinite-context","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","title":{"rendered":"Infini-attention de Google da a los LLM un contexto \"infinito"},"content":{"rendered":"<p><strong>Los investigadores de Google desarrollaron una t\u00e9cnica llamada Infini-attention, que permite a los LLM manejar textos infinitamente largos sin aumentar los requisitos de computaci\u00f3n y memoria.<\/strong><\/p>\n<p>La arquitectura Transformer de un LLM es lo que le permite prestar atenci\u00f3n a todos los tokens de un prompt. La complejidad de las multiplicaciones matriciales y de productos de puntos que realiza es cuadr\u00e1tica.<\/p>\n<p>Esto significa que duplicar los tokens de tu prompt supone una necesidad cuatro veces mayor de memoria y potencia de procesamiento. Por eso es tan dif\u00edcil hacer LLM con <a href=\"https:\/\/dailyai.com\/es\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/\">grandes ventanas contextuales<\/a> sin que se disparen los requisitos de memoria y computaci\u00f3n.<\/p>\n<p>En un LLM \"est\u00e1ndar\", la informaci\u00f3n del principio del contenido del prompt se pierde una vez que el prompt se hace m\u00e1s grande que la ventana contextual. Google <a href=\"https:\/\/arxiv.org\/pdf\/2404.07143.pdf\" target=\"_blank\" rel=\"noopener\">trabajo de investigaci\u00f3n<\/a> explica c\u00f3mo Infini-attention puede retener datos m\u00e1s all\u00e1 de la ventana contextual.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Google presenta Leave No Context Behind: Transformadores eficientes de contexto infinito con Infini-atenci\u00f3n<\/p>\n<p>El modelo 1B, ajustado a secuencias de hasta 5.000 claves de paso, resuelve el problema de las secuencias de 1M de longitud.<a href=\"https:\/\/t.co\/zyHMt3inhi\">https:\/\/t.co\/zyHMt3inhi<\/a> <a href=\"https:\/\/t.co\/ySYEMET9Ef\">pic.twitter.com\/ySYEMET9Ef<\/a><\/p>\n<p>- Aran Komatsuzaki (@arankomatsuzaki) <a href=\"https:\/\/twitter.com\/arankomatsuzaki\/status\/1778230430090592454?ref_src=twsrc%5Etfw\">11 de abril de 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>\u00bfC\u00f3mo funciona Infini-attention?<\/h2>\n<p>Infini-attention combina t\u00e9cnicas de memoria compresiva con mecanismos de atenci\u00f3n modificados para que no se pierda la informaci\u00f3n relevante m\u00e1s antigua.<\/p>\n<p>Una vez que la solicitud de entrada crece m\u00e1s all\u00e1 de la longitud de contexto del modelo, la memoria compresiva almacena la informaci\u00f3n en un formato comprimido en lugar de descartarla.<\/p>\n<p>Esto permite almacenar informaci\u00f3n m\u00e1s antigua y menos relevante de forma inmediata sin que los requisitos de memoria y computaci\u00f3n aumenten indefinidamente a medida que crece la entrada de datos.<\/p>\n<p>En lugar de intentar retener toda la informaci\u00f3n de entrada m\u00e1s antigua, la memoria compresiva de Infini-attention pondera y resume la informaci\u00f3n que se considera relevante y digna de ser retenida.<\/p>\n<p>Infini-attention toma entonces un mecanismo de atenci\u00f3n \"vainilla\" pero reutiliza los estados de valor clave (KV) de cada segmento 
Here is a diagram showing the difference between Infini-attention and another extended-context model, Transformer-XL.

[Figure: Infini-Transformer (top) keeps the entire context history, while Transformer-XL (bottom) discards old contexts because it only caches the KV states of the last segment. Source: arXiv]

The result is an LLM that gives local attention to recent input data but also continuously carries distilled, compressed historical data to which it can apply long-term attention.

The paper notes that "this subtle but critical modification to the attention layer enables LLMs to process infinitely long contexts with bounded memory and computation resources."

Is it any good?

Google ran benchmark tests using smaller Infini-attention models of 1B and 8B parameters. These were compared against other extended-context models such as Transformer-XL and Memorizing Transformers.

The Infini-Transformer achieved significantly lower perplexity scores than the other models when processing long text content. A lower perplexity score, which is the exponential of the model's average per-token negative log-likelihood, means the model is more confident in its output predictions.

In "passkey retrieval" tests, the Infini-attention models consistently found the random number hidden in text of up to 1 million tokens.

Other models often manage to retrieve the passkey toward the end of the input but struggle to find it in the middle or at the beginning of long content. Infini-attention had no trouble with this test.
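Passkey retrieval is easy to picture in code. Here is a hedged sketch of how such a test is typically constructed: a random number is buried at a chosen depth inside long filler text, and the model is asked to repeat it back. The `build_passkey_prompt` helper, the filler sentence, and the prompt wording are illustrative; the paper's exact template may differ.

```python
# Sketch of a passkey-retrieval test: hide a random number at a chosen
# depth in a long haystack of filler text, then check whether the model
# can recall it. Depth 0.0 buries it at the start, 1.0 at the end.
import random

def build_passkey_prompt(n_filler: int, depth: float) -> tuple[str, str]:
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    needle = f"The pass key is {passkey}. Remember it. "
    pos = int(n_filler * depth)
    haystack = filler * pos + needle + filler * (n_filler - pos)
    prompt = haystack + "\nWhat is the pass key? The pass key is"
    return prompt, passkey

# Probe the beginning, middle, and end of a long context.
for depth in (0.0, 0.5, 1.0):
    prompt, key = build_passkey_prompt(n_filler=10_000, depth=depth)
    # answer = model.generate(prompt)   # hypothetical model call
    # print(depth, key in answer)       # pass/fail at this depth
```

Sweeping the depth parameter is what exposes the weakness described above: models without long-term memory tend to pass at depth 1.0 but fail at 0.0 and 0.5 once the haystack exceeds their context window.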
The benchmark tests are very technical, but the short story is that Infini-attention outperformed the baseline models at summarizing and handling long sequences while maintaining context over extended periods.

More significantly, it maintained this superior retention capability while requiring 114 times less memory.

The benchmark results convince the researchers that Infini-attention could be scaled to handle extremely long input sequences while keeping memory and compute resources bounded.

The plug-and-play nature of Infini-attention means it could be used for continual pre-training and fine-tuning of existing Transformer models. This would effectively extend their context windows without requiring a complete retraining of the model.

Context windows will keep growing, but this approach shows that an efficient memory may be a better solution than a big library.