{"id":7671,"date":"2023-11-23T15:10:15","date_gmt":"2023-11-23T15:10:15","guid":{"rendered":"https:\/\/dailyai.com\/?p=7671"},"modified":"2023-11-23T15:13:38","modified_gmt":"2023-11-23T15:13:38","slug":"system-2-attention-improves-accuracy-of-llm-responses","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nb\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","title":{"rendered":"System 2 Attention improves accuracy of LLM responses"},"content":{"rendered":"<p><strong>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that.<\/strong><\/p>\n<p>As context windows grow, the prompts we enter into an LLM can become longer and more detailed. LLMs have become better at picking up on nuances or smaller details in our prompts, but sometimes this can confuse them.<\/p>\n<p>Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. This works well when you are trying to caption an image, but poorly when you need to translate a sentence or answer a question with several layers.<\/p>\n<p>Most LLMs now use a \"soft attention\" approach that tokenizes the entire prompt and assigns a weight to each token.<\/p>\n<p>Meta proposes an approach called <a href=\"https:\/\/arxiv.org\/pdf\/2311.11829.pdf\" target=\"_blank\" rel=\"noopener\">System 2 Attention<\/a> (S2A) to get the best of both worlds. 
S2A uses an LLM's natural language processing ability to take your prompt and strip out bias and irrelevant information before it starts working on a response.<\/p>\n<p>Here is an example.<\/p>\n<figure id=\"attachment_7673\" aria-describedby=\"caption-attachment-7673\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7673 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png\" alt=\"\" width=\"1200\" height=\"750\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png 1200w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-300x188.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-1024x640.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-768x480.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-370x231.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-800x500.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-20x13.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-740x463.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-77x48.png 77w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-7673\" class=\"wp-caption-text\">S2A math example. Source: arXiv<\/figcaption><\/figure>\n<p>S2A removes the information about Max, since it is irrelevant to the question. S2A regenerates an optimized prompt before it starts working on it. 
LLMs are notoriously bad at <a href=\"https:\/\/dailyai.com\/nb\/2023\/10\/chatgpts-accounting-skills-are-put-to-the-test\/\">math<\/a>, so making the prompt less confusing is a big help.<\/p>\n<p>LLMs are people-pleasers and will happily agree with you, even when you are wrong. S2A strips any bias from a prompt and then processes only the relevant parts of it. This reduces what AI researchers call \"sycophancy\", an AI model's tendency to flatter.<\/p>\n<figure id=\"attachment_7674\" aria-describedby=\"caption-attachment-7674\" style=\"width: 1190px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7674 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png\" alt=\"\" width=\"1190\" height=\"584\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png 1190w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-1024x503.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-768x377.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-370x182.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-800x393.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-740x363.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-20x10.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-98x48.png 98w\" sizes=\"auto, (max-width: 1190px) 100vw, 1190px\" \/><figcaption id=\"caption-attachment-7674\" class=\"wp-caption-text\">S2A sycophancy reduction. 
Source: arXiv<\/figcaption><\/figure>\n<p>S2A is really just a system prompt that asks the LLM to refine the original prompt a little before it starts working on it. The results the researchers achieved on math, factual, and long-form questions were impressive.<\/p>\n<p>Here is an example of the improvements S2A achieved on factual questions. The baseline was responses to prompts that contained bias, while the oracle prompt was a human-generated ideal prompt.<\/p>\n<p>S2A comes very close to the results of the oracle prompt and delivers almost 50% better accuracy than the baseline prompt.<\/p>\n<figure id=\"attachment_7675\" aria-describedby=\"caption-attachment-7675\" style=\"width: 586px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7675 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png\" alt=\"\" width=\"586\" height=\"342\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png 586w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-300x175.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-370x216.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-82x48.png 82w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><figcaption id=\"caption-attachment-7675\" class=\"wp-caption-text\">S2A comparison of results. Source: arXiv<\/figcaption><\/figure>\n<p>So what is the catch? Preprocessing the original prompt before responding to it means the process requires more computation. 
If the prompt is long and contains a lot of relevant information, regenerating it can add significant cost.<\/p>\n<p>Users are unlikely to get much better at writing well-formed prompts, so S2A could be a good way to work around that.<\/p>\n<p>Will Meta build S2A into its <a href=\"https:\/\/dailyai.com\/nb\/2023\/07\/meta-and-microsoft-release-advanced-ai-llama-2-for-free\/\">Llama<\/a> model? We do not know, but you can use the S2A approach yourself.<\/p>\n<p>If you are careful to leave opinions or leading suggestions out of your prompts, you are more likely to get accurate responses from these models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows grow, the prompts we enter into an LLM can become longer and more detailed. LLMs have become better at picking up on nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you are trying to caption an image,<\/p>","protected":false},"author":6,"featured_media":7676,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,131],"class_list":["post-7671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>System 2 Attention improves accuracy of LLM responses | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nb\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:locale\" content=\"nb_NO\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"System 2 Attention improves accuracy of LLM responses | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nb\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-23T15:10:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-23T15:13:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet av\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Ansl. 
lesetid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"System 2 Attention improves accuracy of LLM responses\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"wordCount\":520,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"keywords\":[\"LLMS\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nb-NO\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"name\":\"System 2 Attention improves accuracy of LLM responses | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\"},\"inLanguage\":\"nb-NO\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"System 2 Attention improves accuracy of LLM responses\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nb-NO\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nb-NO\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nb\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"System 2 Attention improves accuracy of LLM responses | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nb\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_locale":"nb_NO","og_type":"article","og_title":"System 2 Attention improves accuracy of LLM responses | DailyAI","og_description":"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. This works fine when you\u2019re trying to caption an image,","og_url":"https:\/\/dailyai.com\/nb\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_site_name":"DailyAI","article_published_time":"2023-11-23T15:10:15+00:00","article_modified_time":"2023-11-23T15:13:38+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet av":"Eugene van der Watt","Ansl. 
lesetid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"System 2 Attention improves accuracy of LLM responses","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"wordCount":520,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","keywords":["LLMS","Meta"],"articleSection":["Industry"],"inLanguage":"nb-NO"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","url":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","name":"System 2 Attention improves accuracy of LLM responses | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb"},"inLanguage":"nb-NO","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"]}]},{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","width":1000,"height":666},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"System 2 Attention improves accuracy of LLM responses"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of 
AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nb-NO"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nb-NO","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nb\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/7671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/comments?post=7671"}],"version-history":[{"count":7,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/7671\/revisions"}],"predecessor-version":[{"id":7682,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/posts\/7671\/revisions\/7682"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media\/7676"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/media?parent=7671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/categories?post=7671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nb\/wp-json\/wp\/v2\/tags?post=7671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}