{"id":11530,"date":"2024-04-15T10:11:12","date_gmt":"2024-04-15T10:11:12","guid":{"rendered":"https:\/\/dailyai.com\/?p=11530"},"modified":"2024-04-15T10:16:25","modified_gmt":"2024-04-15T10:16:25","slug":"googles-infini-attention-gives-llms-infinite-context","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","title":{"rendered":"Googles Infini-attention ger juristerna ett \"o\u00e4ndligt\" sammanhang"},"content":{"rendered":"<p><strong>Googles forskare utvecklade en teknik som kallas Infini-attention, som g\u00f6r det m\u00f6jligt f\u00f6r LLM:er att hantera o\u00e4ndligt l\u00e5nga texter utan att \u00f6ka ber\u00e4knings- och minneskraven.<\/strong><\/p>\n<p>Transformatorarkitekturen i en LLM \u00e4r det som g\u00f6r att den kan uppm\u00e4rksamma alla tokens i en prompt. De komplexa punktprodukt- och matrismultiplikationerna som den utf\u00f6r \u00e4r kvadratiska i komplexitet.<\/p>\n<p>Det inneb\u00e4r att om du f\u00f6rdubblar antalet tokens i din prompt kr\u00e4vs det fyra g\u00e5nger mer minne och processorkraft. Det \u00e4r d\u00e4rf\u00f6r det \u00e4r s\u00e5 utmanande att g\u00f6ra LLM:er med <a href=\"https:\/\/dailyai.com\/sv\/2024\/04\/anthropic-large-context-llms-vulnerable-to-many-shot-jailbreak\/\">stora kontextf\u00f6nster<\/a> utan att minnes- och ber\u00e4kningskraven skjuter i h\u00f6jden.<\/p>\n<p>I en \"vanlig\" LLM g\u00e5r informationen i b\u00f6rjan av promptens inneh\u00e5ll f\u00f6rlorad n\u00e4r prompten blir st\u00f6rre \u00e4n kontextf\u00f6nstret. Googles <a href=\"https:\/\/arxiv.org\/pdf\/2404.07143.pdf\" target=\"_blank\" rel=\"noopener\">forskningsrapport<\/a> f\u00f6rklarar hur Infini-attention kan lagra data utanf\u00f6r kontextf\u00f6nstret.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Google presenterar Leave No Context Behind: Effektiva o\u00e4ndliga kontexttransformatorer med infini-uppm\u00e4rksamhet<\/p>\n<p>1B-modell som finjusterades p\u00e5 upp till 5K sekvensl\u00e4ngdsinstanser l\u00f6ser problemet med 1M l\u00e4ngd<a href=\"https:\/\/t.co\/zyHMt3inhi\">https:\/\/t.co\/zyHMt3inhi<\/a> <a href=\"https:\/\/t.co\/ySYEMET9Ef\">pic.twitter.com\/ySYEMET9Ef<\/a><\/p>\n<p>- Aran Komatsuzaki (@arankomatsuzaki) <a href=\"https:\/\/twitter.com\/arankomatsuzaki\/status\/1778230430090592454?ref_src=twsrc%5Etfw\">11 april 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Hur fungerar Infini-attention?<\/h2>\n<p>Infini-attention kombinerar komprimerande minnestekniker med modifierade uppm\u00e4rksamhetsmekanismer s\u00e5 att relevant \u00e4ldre information inte g\u00e5r f\u00f6rlorad.<\/p>\n<p>N\u00e4r inmatningsuppmaningen blir st\u00f6rre \u00e4n modellens kontextl\u00e4ngd lagrar det komprimerande minnet informationen i ett komprimerat format i st\u00e4llet f\u00f6r att kassera den.<\/p>\n<p>Detta g\u00f6r att \u00e4ldre, mindre omedelbart relevant information kan lagras utan att minnes- och ber\u00e4kningskraven v\u00e4xer i o\u00e4ndlighet i takt med att inmatningen v\u00e4xer.<\/p>\n<p>I st\u00e4llet f\u00f6r att f\u00f6rs\u00f6ka h\u00e5lla kvar all \u00e4ldre information v\u00e4ger Infini-attentions komprimerande minne samman och sammanfattar information som bed\u00f6ms vara relevant och v\u00e4rd att h\u00e5lla kvar.<\/p>\n<p>Infini-attention utg\u00e5r fr\u00e5n en \"vanilj\"-uppm\u00e4rksamhetsmekanism men \u00e5teranv\u00e4nder KV-tillst\u00e5nden (key value) fr\u00e5n varje efterf\u00f6ljande segment i modellen i st\u00e4llet f\u00f6r att f\u00f6rkasta dem.<\/p>\n<p>H\u00e4r \u00e4r ett diagram som visar skillnaden mellan Infini-attention och en annan modell med ut\u00f6kad kontext, Transformer XL.<\/p>\n<figure id=\"attachment_11566\" aria-describedby=\"caption-attachment-11566\" style=\"width: 1356px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-11566 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL.png\" alt=\"\" width=\"1356\" height=\"664\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL.png 1356w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-1024x501.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-768x376.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Infini-attention-vs-Transformer-XL-60x29.png 60w\" sizes=\"auto, (max-width: 1356px) 100vw, 1356px\" \/><figcaption id=\"caption-attachment-11566\" class=\"wp-caption-text\">Infini-Transformer (\u00f6verst) har en hel kontexthistorik medan Transformer-XL (nederst) kasserar gamla kontexter eftersom den bara lagrar KV-tillst\u00e5nden f\u00f6r det sista segmentet. K\u00e4lla: arXiv<\/figcaption><\/figure>\n<p>Resultatet \u00e4r en LLM som ger lokal uppm\u00e4rksamhet \u00e5t nya indata, men som ocks\u00e5 har kontinuerligt destillerad komprimerad historisk data som den kan ge l\u00e5ngsiktig uppm\u00e4rksamhet.<\/p>\n<p>I artikeln konstateras att \"denna subtila men kritiska modifiering av uppm\u00e4rksamhetsskiktet g\u00f6r det m\u00f6jligt f\u00f6r LLM att bearbeta o\u00e4ndligt l\u00e5nga sammanhang med begr\u00e4nsade minnes- och ber\u00e4kningsresurser\".<\/p>\n<h2>Hur bra \u00e4r den?<\/h2>\n<p>Google genomf\u00f6rde benchmarkingtester med mindre Infini-attention-modeller med 1B- och 8B-parametrar. Dessa j\u00e4mf\u00f6rdes med andra modeller med ut\u00f6kat sammanhang som Transformer-XL och Memorizing Transformers.<\/p>\n<p>Infini-Transformer uppn\u00e5dde betydligt l\u00e4gre perplexitetspo\u00e4ng \u00e4n de andra modellerna vid bearbetning av inneh\u00e5ll med l\u00e5nga kontexter. En l\u00e4gre perplexitetspo\u00e4ng inneb\u00e4r att modellen \u00e4r mer s\u00e4ker p\u00e5 sina f\u00f6ruts\u00e4gelser av utdata.<\/p>\n<p>I \"passkey retrieval\"-testerna hittade Infini-attention-modellerna konsekvent det slumpm\u00e4ssiga talet som var g\u00f6mt i text med upp till 1 miljon symboler.<\/p>\n<p>Andra modeller klarar ofta av att h\u00e4mta nyckeln mot slutet av inmatningen men har sv\u00e5rt att hitta den i mitten eller b\u00f6rjan av ett l\u00e5ngt inneh\u00e5ll. Infini-attention hade inga problem med detta test.<\/p>\n<p>Benchmarktesterna \u00e4r mycket tekniska, men den korta historien \u00e4r att Infini-attention \u00f6vertr\u00e4ffade baslinjemodellerna n\u00e4r det g\u00e4ller att sammanfatta och hantera l\u00e5nga sekvenser samtidigt som sammanhanget bevarades under l\u00e4ngre perioder.<\/p>\n<p>Betecknande nog beh\u00f6ll den denna \u00f6verl\u00e4gsna lagringsf\u00f6rm\u00e5ga samtidigt som den kr\u00e4vde 114 g\u00e5nger mindre minne.<\/p>\n<p>Benchmarkresultaten \u00f6vertygar forskarna om att Infini-attention kan skalas upp f\u00f6r att hantera extremt l\u00e5nga indatasekvenser med begr\u00e4nsade minnes- och ber\u00e4kningsresurser.<\/p>\n<p>Infini-attentions \"plug-and-play\"-karakt\u00e4r inneb\u00e4r att den kan anv\u00e4ndas f\u00f6r kontinuerlig f\u00f6rtr\u00e4ning och finjustering av befintliga Transformer-modeller. Detta skulle effektivt kunna ut\u00f6ka deras kontextf\u00f6nster utan att kr\u00e4va fullst\u00e4ndig omskolning av modellen.<\/p>\n<p>Kontextf\u00f6nstren kommer att forts\u00e4tta v\u00e4xa, men den h\u00e4r metoden visar att ett effektivt minne kan vara en b\u00e4ttre l\u00f6sning \u00e4n ett stort bibliotek.<\/p>","protected":false},"excerpt":{"rendered":"<p>Googles forskare utvecklade en teknik som kallas Infini-attention, som g\u00f6r det m\u00f6jligt f\u00f6r LLM:er att hantera o\u00e4ndligt l\u00e5nga texter utan att \u00f6ka ber\u00e4knings- och minneskraven. Transformer-arkitekturen i en LLM \u00e4r det som g\u00f6r att den kan uppm\u00e4rksamma alla tokens i en prompt. De komplexa punktprodukt- och matrismultiplikationer som den utf\u00f6r \u00e4r kvadratiska i komplexitet. Det inneb\u00e4r att om du f\u00f6rdubblar antalet symboler i din prompt kr\u00e4vs det fyra g\u00e5nger mer minne och processorkraft. Det \u00e4r d\u00e4rf\u00f6r det \u00e4r s\u00e5 utmanande att g\u00f6ra LLM:er med stora kontextf\u00f6nster utan att minnes- och ber\u00e4kningskraven skjuter i h\u00f6jden. I en \"standard\" LLM, information<\/p>","protected":false},"author":6,"featured_media":11567,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[102,118],"class_list":["post-11530","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-google","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Google researchers developed a technique called Infini-attention, which allows LLMs to handle infinitely long text without increasing compute and memory requirements. The Transformer architecture of an LLM is what allows it to give attention to all of the tokens in a prompt. The complex dot-product and matrix multiplications it performs are quadratic in complexity. This means that doubling the tokens in your prompt results in a requirement of four times more memory and processing power. This is why it\u2019s so challenging to make LLMs with large context windows without having memory and compute requirements skyrocket. In a \u201cstandard\u201d LLM, information\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-15T10:11:12+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-04-15T10:16:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skriven av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ber\u00e4knad l\u00e4stid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context\",\"datePublished\":\"2024-04-15T10:11:12+00:00\",\"dateModified\":\"2024-04-15T10:16:25+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\"},\"wordCount\":638,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"keywords\":[\"Google\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\",\"name\":\"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"datePublished\":\"2024-04-15T10:11:12+00:00\",\"dateModified\":\"2024-04-15T10:16:25+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/infinite-library.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/googles-infini-attention-gives-llms-infinite-context\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Googles Infini-attention ger juristerna \"o\u00e4ndligt\" sammanhang | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","og_locale":"sv_SE","og_type":"article","og_title":"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context | DailyAI","og_description":"Google researchers developed a technique called Infini-attention, which allows LLMs to handle infinitely long text without increasing compute and memory requirements. The Transformer architecture of an LLM is what allows it to give attention to all of the tokens in a prompt. The complex dot-product and matrix multiplications it performs are quadratic in complexity. This means that doubling the tokens in your prompt results in a requirement of four times more memory and processing power. This is why it\u2019s so challenging to make LLMs with large context windows without having memory and compute requirements skyrocket. In a \u201cstandard\u201d LLM, information","og_url":"https:\/\/dailyai.com\/sv\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","og_site_name":"DailyAI","article_published_time":"2024-04-15T10:11:12+00:00","article_modified_time":"2024-04-15T10:16:25+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skriven av":"Eugene van der Watt","Ber\u00e4knad l\u00e4stid":"3 minuter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context","datePublished":"2024-04-15T10:11:12+00:00","dateModified":"2024-04-15T10:16:25+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/"},"wordCount":638,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp","keywords":["Google","LLMS"],"articleSection":["Industry"],"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","url":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/","name":"Googles Infini-attention ger juristerna \"o\u00e4ndligt\" sammanhang | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp","datePublished":"2024-04-15T10:11:12+00:00","dateModified":"2024-04-15T10:16:25+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/infinite-library.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/googles-infini-attention-gives-llms-infinite-context\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google\u2019s Infini-attention gives LLMs \u201cinfinite\u201d context"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DagligaAI","description":"Din dagliga dos av AI-nyheter","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DagligaAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene kommer fr\u00e5n en bakgrund som elektronikingenj\u00f6r och \u00e4lskar allt som har med teknik att g\u00f6ra. N\u00e4r han tar en paus fr\u00e5n att konsumera AI-nyheter hittar du honom vid snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/sv\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11530","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=11530"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11530\/revisions"}],"predecessor-version":[{"id":11570,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/11530\/revisions\/11570"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/11567"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=11530"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=11530"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=11530"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}