{"id":11530,"date":"2024-04-15T10:11:12","date_gmt":"2024-04-15T10:11:12","guid":{"rendered":"https:\/\/dailyai.com\/?p=11530"},"modified":"2024-04-15T10:16:25","modified_gmt":"2024-04-15T10:16:25","slug":"googles-infini-attention-gives-llms-infinite-context","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","title":{"rendered":"O Infini-attention da Google d\u00e1 aos LLM um contexto \"infinito"},"content":{"rendered":"<p><strong>Os investigadores da Google desenvolveram uma t\u00e9cnica denominada Infini-attention, que permite aos LLMs tratar textos infinitamente longos sem aumentar os requisitos de computa\u00e7\u00e3o e mem\u00f3ria.<\/strong><\/p>\n<p>A arquitetura transformadora de um LLM \u00e9 o que lhe permite dar aten\u00e7\u00e3o a todos os s\u00edmbolos de uma mensagem. O produto escalar complexo e as multiplica\u00e7\u00f5es matriciais que efectua s\u00e3o de complexidade quadr\u00e1tica.<\/p>\n<p>Isto significa que duplicar os tokens no seu prompt resulta num requisito de quatro vezes mais mem\u00f3ria e poder de processamento. \u00c9 por isso que \u00e9 t\u00e3o dif\u00edcil fazer LLMs com <a href=\"https:\/\/dailyai.com\/pt\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/\">grandes janelas de contexto<\/a> sem que os requisitos de mem\u00f3ria e computa\u00e7\u00e3o disparem.<\/p>\n<p>Num LLM \"standard\", a informa\u00e7\u00e3o no in\u00edcio do conte\u00fado do prompt perde-se quando este se torna maior do que a janela de contexto. O sistema <a href=\"https:\/\/arxiv.org\/pdf\/2404.07143.pdf\" target=\"_blank\" rel=\"noopener\">trabalho de investiga\u00e7\u00e3o<\/a> explica como o Infini-attention pode reter dados para al\u00e9m da janela de contexto.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">A Google apresenta Leave No Context Behind: Transformadores de Contexto Infinito Eficientes com Infini-aten\u00e7\u00e3o<\/p>\n<p>O modelo 1B, que foi ajustado em inst\u00e2ncias de chaves de passagem de comprimento de sequ\u00eancia at\u00e9 5K, resolve o problema de comprimento de 1M<a href=\"https:\/\/t.co\/zyHMt3inhi\">https:\/\/t.co\/zyHMt3inhi<\/a> <a href=\"https:\/\/t.co\/ySYEMET9Ef\">pic.twitter.com\/ySYEMET9Ef<\/a><\/p>\n<p>- Aran Komatsuzaki (@arankomatsuzaki) <a href=\"https:\/\/twitter.com\/arankomatsuzaki\/status\/1778230430090592454?ref_src=twsrc%5Etfw\">11 de abril de 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Como \u00e9 que o Infini-attention funciona?<\/h2>\n<p>O Infini-attention combina t\u00e9cnicas de mem\u00f3ria compressiva com mecanismos de aten\u00e7\u00e3o modificados para que n\u00e3o se percam informa\u00e7\u00f5es relevantes mais antigas.<\/p>\n<p>Quando o pedido de entrada ultrapassa o comprimento do contexto do modelo, a mem\u00f3ria de compress\u00e3o armazena a informa\u00e7\u00e3o num formato comprimido em vez de a descartar.<\/p>\n<p>Isto permite que informa\u00e7\u00f5es mais antigas e menos imediatamente relevantes sejam armazenadas sem que os requisitos de mem\u00f3ria e computa\u00e7\u00e3o cres\u00e7am indefinidamente \u00e0 medida que a entrada aumenta.<\/p>\n<p>Em vez de tentar reter toda a informa\u00e7\u00e3o de entrada mais antiga, a mem\u00f3ria de compress\u00e3o do Infini-attention pesa e resume a informa\u00e7\u00e3o que \u00e9 considerada relevante e que vale a pena reter.<\/p>\n<p>O Infini-attention 
Infini-attention uses a "vanilla" attention mechanism, but reuses the key-value (KV) states from each subsequent segment of the model rather than discarding them.

Here is a diagram showing the difference between Infini-attention and another extended-context model, Transformer-XL.

Figure (https://dailyai.com/wp-content/uploads/2024/04/Infini-attention-vs-Transformer-XL.png): The Infini-Transformer (top) has an entire context history, whereas Transformer-XL (bottom) discards old contexts since it caches the KV states for the last segment only. Source: arXiv

The result is an LLM that gives local attention to recent input data but also carries continuously distilled, compressed historical data to which it can apply long-term attention.

The paper notes that "this subtle but critical modification to the attention layer enables LLMs to process infinitely long contexts with bounded memory and computation resources."

Is it any good?

Google ran benchmarking tests using smaller 1B- and 8B-parameter Infini-attention models. These were compared with other extended-context models such as Transformer-XL and Memorizing Transformers.

The Infini-Transformer achieved significantly lower perplexity scores than the other models when processing long-form text. A lower perplexity score means the model is more confident in its output predictions.

In the "passkey retrieval" tests, the Infini-attention models consistently found the random number hidden in text of up to 1 million tokens.

Other models can often retrieve the passkey when it sits near the end of the input, but struggle to find it in the middle or at the beginning of long content.
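The passkey test itself is simple to picture: a random number is buried somewhere in a long stretch of filler text and the model is asked to repeat it back. Here is a rough sketch of how such a prompt can be built; the filler sentence, wording, and helper name are illustrative rather than the exact setup used in the paper.

```python
import random

def build_passkey_prompt(total_words: int, position: float) -> tuple[str, str]:
    """Hide a random passkey inside roughly total_words of filler text.

    position: 0.0 puts the key near the start of the text, 1.0 near the end.
    """
    passkey = str(random.randint(10_000, 99_999))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    needle = f" The pass key is {passkey}. Remember it. "

    words = (filler * (total_words // 12)).split()
    cut = int(len(words) * position)
    haystack = " ".join(words[:cut]) + needle + " ".join(words[cut:])
    question = "\nWhat is the pass key mentioned above?"
    return haystack + question, passkey

# Example: a ~5,000-word prompt with the key hidden in the middle.
prompt, answer = build_passkey_prompt(total_words=5_000, position=0.5)
print(len(prompt.split()), "words; expected answer:", answer)
```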
Infini-attention had no trouble with this test.

The benchmarking tests are very technical, but the short version is that Infini-attention outperformed the baseline models at summarizing and handling long sequences while maintaining context over long stretches.

Significantly, it maintained this superior retention while requiring 114 times less memory.

The benchmark results convince the researchers that Infini-attention can be scaled to handle extremely long input sequences while keeping memory and computational resources bounded.

The plug-and-play nature of Infini-attention means it can be used for continual pre-training and fine-tuning of existing Transformer models. This could effectively extend their context windows without requiring a complete retraining of the model.

Context windows will keep growing, but this approach shows that an efficient memory may be a better solution than a big library.