{"id":11530,"date":"2024-04-15T10:11:12","date_gmt":"2024-04-15T10:11:12","guid":{"rendered":"https:\/\/dailyai.com\/?p=11530"},"modified":"2024-04-15T10:16:25","modified_gmt":"2024-04-15T10:16:25","slug":"googles-infini-attention-gives-llms-infinite-context","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","title":{"rendered":"L'Infini-attention di Google offre un contesto \"infinito\" ai LLM"},"content":{"rendered":"<p><strong>I ricercatori di Google hanno sviluppato una tecnica chiamata Infini-attention, che consente agli LLM di gestire testi infinitamente lunghi senza aumentare i requisiti di calcolo e di memoria.<\/strong><\/p>\n<p>L'architettura Transformer di un LLM \u00e8 ci\u00f2 che gli consente di prestare attenzione a tutti i token di un prompt. Le complesse moltiplicazioni di punti e matrici che esegue sono di complessit\u00e0 quadratica.<\/p>\n<p>Ci\u00f2 significa che il raddoppio dei token nel prompt richiede una quantit\u00e0 di memoria e di potenza di elaborazione quattro volte superiore. Questo \u00e8 il motivo per cui \u00e8 cos\u00ec impegnativo creare LLM con <a href=\"https:\/\/dailyai.com\/it\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/\">Finestre contestuali di grandi dimensioni<\/a> senza che i requisiti di memoria e di calcolo salgano alle stelle.<\/p>\n<p>In un LLM \"standard\", le informazioni all'inizio del contenuto del prompt vengono perse quando il prompt diventa pi\u00f9 grande della finestra di contesto. Il sistema di Google <a href=\"https:\/\/arxiv.org\/pdf\/2404.07143.pdf\" target=\"_blank\" rel=\"noopener\">carta di ricerca<\/a> spiega come Infini-attention possa conservare i dati oltre la finestra di contesto.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Google presenta Leave No Context Behind: Trasformatori di contesto infinito efficienti con Infini-attenzione<\/p>\n<p>Il modello 1B, messo a punto su istanze di passkey di lunghezza fino a 5K, risolve il problema della lunghezza di 1M.<a href=\"https:\/\/t.co\/zyHMt3inhi\">https:\/\/t.co\/zyHMt3inhi<\/a> <a href=\"https:\/\/t.co\/ySYEMET9Ef\">pic.twitter.com\/ySYEMET9Ef<\/a><\/p>\n<p>- Aran Komatsuzaki (@arankomatsuzaki) <a href=\"https:\/\/twitter.com\/arankomatsuzaki\/status\/1778230430090592454?ref_src=twsrc%5Etfw\">11 aprile 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Come funziona Infini-attention?<\/h2>\n<p>Infini-attention combina tecniche di memoria compressiva con meccanismi di attenzione modificati, in modo che le informazioni rilevanti pi\u00f9 vecchie non vadano perse.<\/p>\n<p>Quando la richiesta di input cresce oltre la lunghezza del contesto del modello, la memoria compressiva memorizza le informazioni in un formato compresso anzich\u00e9 scartarle.<\/p>\n<p>In questo modo \u00e8 possibile memorizzare le informazioni pi\u00f9 vecchie, meno rilevanti nell'immediato, senza che i requisiti di memoria e di calcolo crescano indefinitamente con l'aumentare degli input.<\/p>\n<p>Invece di cercare di conservare tutte le informazioni pi\u00f9 vecchie, la memoria compressiva di Infini-attention pesa e riassume le informazioni ritenute rilevanti e degne di essere conservate.<\/p>\n<p>Infini-attention riprende quindi un meccanismo di attenzione \"vanilla\", ma riutilizza gli stati del valore chiave (KV) di ogni segmento successivo del modello, invece di 
scartarli.<\/p>\n<p>Ecco un diagramma che mostra la differenza tra Infini-attention e un altro modello di contesto esteso Transformer XL.<\/p>\n<figure id=\"attachment_11566\" aria-describedby=\"caption-attachment-11566\" style=\"width: 1356px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11566 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL.png\" alt=\"\" width=\"1356\" height=\"664\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL.png 1356w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-1024x501.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-768x376.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-60x29.png 60w\" sizes=\"auto, (max-width: 1356px) 100vw, 1356px\" \/><figcaption id=\"caption-attachment-11566\" class=\"wp-caption-text\">Infini-Transformer (in alto) ha un'intera cronologia di contesti, mentre Transformer-XL (in basso) scarta i vecchi contesti poich\u00e9 memorizza nella cache gli stati KV solo per l'ultimo segmento. Fonte: arXiv<\/figcaption><\/figure>\n<p>Il risultato \u00e8 un LLM che presta attenzione locale ai dati di input recenti, ma che porta con s\u00e9 anche dati storici compressi e continuamente distillati, ai quali pu\u00f2 applicare un'attenzione a lungo termine.<\/p>\n<p>L'articolo sottolinea che \"questa sottile ma fondamentale modifica del livello di attenzione consente ai LLM di elaborare contesti infinitamente lunghi con risorse di memoria e di calcolo limitate\".<\/p>\n<h2>Quanto \u00e8 buono?<\/h2>\n<p>Google ha condotto test di benchmarking utilizzando modelli Infini-attention pi\u00f9 piccoli, a 1B e 8B parametri. Questi sono stati confrontati con altri modelli di contesto esteso come Transformer-XL e Memorizing Transformers.<\/p>\n<p>L'Infini-Transformer ha ottenuto punteggi di perplessit\u00e0 significativamente pi\u00f9 bassi rispetto agli altri modelli durante l'elaborazione di contenuti a contesto lungo. Un punteggio di perplessit\u00e0 pi\u00f9 basso significa che il modello \u00e8 pi\u00f9 sicuro delle sue previsioni di output.<\/p>\n<p>Nei test di \"recupero della chiave d'accesso\", i modelli Infini-attention hanno trovato costantemente il numero casuale nascosto in un testo fino a 1 milione di token.<\/p>\n<p>Altri modelli riescono spesso a recuperare la chiave di accesso verso la fine dell'input, ma faticano a trovarla nel mezzo o all'inizio di un contenuto lungo. 
## How good is it?

Google ran benchmark tests using smaller Infini-attention models at 1B and 8B parameters. These were compared against other extended-context models such as Transformer-XL and Memorizing Transformers.

The Infini-Transformer achieved significantly lower perplexity scores than the other models when processing long-context content. A lower perplexity score means the model is more confident in its output predictions.
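Perplexity is simply the exponential of the average per-token cross-entropy loss, so a lower score means the model assigns higher probability to the correct next tokens. A quick illustration with made-up loss values:

```python
import math

def perplexity(per_token_losses):
    """Perplexity = exp(mean cross-entropy loss), losses in nats."""
    return math.exp(sum(per_token_losses) / len(per_token_losses))

# Hypothetical per-token losses from two models on the same long input.
confident_model = [1.9, 2.1, 2.0, 1.8]  # higher probability on the right tokens
uncertain_model = [2.9, 3.1, 3.0, 2.8]
print(perplexity(confident_model))  # ~7.0
print(perplexity(uncertain_model))  # ~19.1
```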
In the "passkey retrieval" tests, the Infini-attention models consistently found a random number hidden in text of up to 1M tokens.

Other models often manage to retrieve the passkey toward the end of the input but struggle to find it in the middle or at the beginning of long content. Infini-attention had no trouble with this test.
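The passkey test is easy to reproduce: hide a random number at some relative depth inside filler text and ask the model to recall it. Here is a sketch of how such prompts are typically constructed; the filler and needle wording are illustrative, not the exact strings from the paper:

```python
import random

def build_passkey_prompt(n_filler: int, depth: float) -> tuple[str, str]:
    """Hide a random passkey at a relative depth inside repeated filler text."""
    passkey = str(random.randint(10000, 99999))
    filler = "The grass is green. The sky is blue. The sun is yellow. "
    needle = f"The pass key is {passkey}. Remember it. "
    before = int(n_filler * depth)
    prompt = (
        filler * before
        + needle
        + filler * (n_filler - before)
        + "What is the pass key?"
    )
    return prompt, passkey

# Early, middle, and late placements: standard models tend to fail in the
# early/middle cases once the text exceeds their context window.
for depth in (0.1, 0.5, 0.9):
    prompt, key = build_passkey_prompt(n_filler=2000, depth=depth)
    print(depth, key, len(prompt.split()))
```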
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"datePublished\":\"2024-04-15T10:11:12+00:00\",\"dateModified\":\"2024-04-15T10:16:25+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der 
Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Infini-attention di Google offre ai LLM un contesto \"infinito\" | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/it\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","og_locale":"it_IT","og_type":"article","og_title":"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context | DailyAI","og_description":"Google researchers developed a technique called Infini-attention, which allows LLMs to handle infinitely long text without increasing compute and memory requirements. The Transformer architecture of an LLM is what allows it to give attention to all of the tokens in a prompt. The complex dot-product and matrix multiplications it performs are quadratic in complexity. This means that doubling the tokens in your prompt results in a requirement of four times more memory and processing power. This is why it\u2019s so challenging to make LLMs with large context windows without having memory and compute requirements skyrocket. 
Context windows will keep growing, but this approach shows that an efficient memory may be a better solution than a large library.
context"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"La vostra dose quotidiana di notizie sull'intelligenza artificiale","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene proviene da un background di ingegneria elettronica e ama tutto ci\u00f2 che \u00e8 tecnologico. Quando si prende una pausa dal consumo di notizie sull'intelligenza artificiale, lo si pu\u00f2 trovare al tavolo da biliardo.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/it\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/11530","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=11530"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/11530\/revisions"}],"predecessor-version":[{"id":11570,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/11530\/revisions\/11570"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/11567"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=11530"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=11530"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=11530"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}