{"id":11530,"date":"2024-04-15T10:11:12","date_gmt":"2024-04-15T10:11:12","guid":{"rendered":"https:\/\/dailyai.com\/?p=11530"},"modified":"2024-04-15T10:16:25","modified_gmt":"2024-04-15T10:16:25","slug":"-3","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2024\/04\/-3\/","title":{"rendered":"Googles Infini-attention gir LLM-er \"uendelig\" kontekst"},"content":{"rendered":"<p><strong>Google-forskere har utviklet en teknikk kalt Infini-attention, som gj\u00f8r det mulig for LLM-er \u00e5 h\u00e5ndtere uendelig lange tekster uten \u00e5 \u00f8ke kravene til databehandling og minne.<\/strong><\/p>\n<p>Transformatorarkitekturen i en LLM gj\u00f8r at den kan ta hensyn til alle symbolene i en ledetekst. De komplekse prikkprodukt- og matrisemultiplikasjonene den utf\u00f8rer, er kvadratiske i kompleksitet.<\/p>\n<p>Det betyr at en dobling av antall tokens i ledeteksten krever fire ganger s\u00e5 mye minne og prosessorkraft. Dette er grunnen til at det er s\u00e5 utfordrende \u00e5 lage LLM-er med <a href=\"https:\/\/dailyai.com\/nb\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/\">store kontekstvinduer<\/a> uten at minne- og datakravene skyter i v\u00e6ret.<\/p>\n<p>I en \"standard\" LLM g\u00e5r informasjonen i begynnelsen av ledeteksten tapt n\u00e5r ledeteksten blir st\u00f8rre enn kontekstvinduet. Googles <a href=\"https:\/\/arxiv.org\/pdf\/2404.07143.pdf\" target=\"_blank\" rel=\"noopener\">forskningsoppgave<\/a> forklarer hvordan Infini-attention kan lagre data utenfor kontekstvinduet.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Google presenterer Leave No Context Behind: Effektive uendelige konteksttransformatorer med uendelig oppmerksomhet<\/p>\n<p>1B-modellen som ble finjustert p\u00e5 opptil 5K sekvenslengder, l\u00f8ser problemet med 1M lengde<a href=\"https:\/\/t.co\/zyHMt3inhi\">https:\/\/t.co\/zyHMt3inhi<\/a> <a href=\"https:\/\/t.co\/ySYEMET9Ef\">pic.twitter.com\/ySYEMET9Ef<\/a><\/p>\n<p>- Aran Komatsuzaki (@arankomatsuzaki) <a href=\"https:\/\/twitter.com\/arankomatsuzaki\/status\/1778230430090592454?ref_src=twsrc%5Etfw\">11. 
In a "standard" LLM, the information at the beginning of the prompt is lost once the prompt grows beyond the context window. Google's research paper (https://arxiv.org/pdf/2404.07143.pdf) explains how Infini-attention can store data beyond the context window.

"Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention. The 1B model fine-tuned on sequence lengths of up to 5K solves the 1M-length problem." https://t.co/zyHMt3inhi pic.twitter.com/ySYEMET9Ef - Aran Komatsuzaki (@arankomatsuzaki), April 11, 2024

How does Infini-attention work?

Infini-attention combines compressive memory techniques with modified attention mechanisms so that relevant older information is not lost.

When the input prompt grows beyond the model's context length, the compressive memory stores the information in a compressed format instead of discarding it.

This allows older, less immediately relevant information to be retained without memory and compute requirements growing indefinitely as the input grows.

Rather than trying to keep all of the older input information, Infini-attention's compressive memory weighs and summarizes the information deemed relevant and worth keeping.

Infini-attention starts from a "vanilla" attention mechanism but reuses the key-value (KV) states of each subsequent segment in the model rather than discarding them.

Here is a diagram showing the difference between Infini-attention and another extended-context model, Transformer-XL.

[Figure: Infini-attention vs. Transformer-XL. The Infini-Transformer (top) keeps the entire context history, while Transformer-XL (bottom) discards old contexts since it only caches the KV states of the most recent segment. Source: arXiv]

The result is an LLM that applies local attention to recent input but also holds continuously distilled, compressed historical data to which it can apply long-term attention.

The paper states: "This subtle but critical modification to the attention layer enables LLMs to process infinitely long contexts with bounded memory and computation resources."
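The paper formalizes this as a linear-attention-style memory that is read before, and updated after, each segment's local attention. Below is a rough single-head sketch of that idea in NumPy, based on the retrieval and update rules in the arXiv paper; the class and variable names are invented for illustration, and details such as causal masking and the learned gate are simplified.

```python
import numpy as np

def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the nonlinearity used for the memory read/write
    return np.where(x > 0, x + 1.0, np.exp(x))

class InfiniAttentionSketch:
    """Toy single-head Infini-attention over a stream of segments (a sketch,
    not Google's implementation)."""

    def __init__(self, d_key: int, d_value: int):
        self.M = np.zeros((d_key, d_value))  # compressive memory, fixed size
        self.z = np.zeros(d_key)             # normalization term
        self.beta = 0.0                      # gate scalar (learned in the paper)

    def step(self, Q, K, V):
        # 1) Retrieve long-term context from memory built on earlier segments.
        sq = elu_plus_one(Q)
        A_mem = (sq @ self.M) / (sq @ self.z + 1e-6)[:, None]

        # 2) Ordinary scaled dot-product attention within the local segment
        #    (causal masking omitted for brevity).
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        A_local = weights @ V

        # 3) Write this segment's KV states into memory instead of discarding
        #    them (the paper's basic update; it also describes a delta-rule variant).
        sk = elu_plus_one(K)
        self.M += sk.T @ V
        self.z += sk.sum(axis=0)

        # 4) Gate long-term (memory) and local attention together.
        g = 1.0 / (1.0 + np.exp(-self.beta))
        return g * A_mem + (1.0 - g) * A_local
```

The key point is that M and z have fixed shapes, so the memory footprint stays constant no matter how many segments stream through, while the gate decides how much compressed history versus recent detail contributes to each output.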
How good is it?

Google ran benchmark tests with smaller 1B- and 8B-parameter Infini-attention models. These were compared against other extended-context models such as Transformer-XL and Memorizing Transformers.

The Infini-Transformer achieved significantly lower perplexity scores than the other models when processing long-context content. A lower perplexity score means the model is more confident in its predictions.

In the "passkey retrieval" tests, the Infini-attention models consistently found the random number hidden in text of up to 1 million tokens.

Other models often manage to find the passkey near the end of the input but struggle to find it in the middle or at the beginning of a long piece of content. Infini-attention had no trouble with this test.

The benchmark tests are very technical, but the short story is that Infini-attention outperformed the baseline models at summarizing and handling long sequences while maintaining context over extended stretches.

Notably, it retained this superior recall while requiring 114 times less memory.

The benchmark results convince the researchers that Infini-attention could be scaled to handle extremely long input sequences while keeping memory and compute resources bounded.

The plug-and-play nature of Infini-attention means it can be used for continual pre-training and fine-tuning of existing Transformer models. This could effectively extend their context windows without requiring a complete retraining of the model.

Context windows will keep growing, but this approach shows that an efficient memory may be a better solution than a big library.