{"id":7671,"date":"2023-11-23T15:10:15","date_gmt":"2023-11-23T15:10:15","guid":{"rendered":"https:\/\/dailyai.com\/?p=7671"},"modified":"2023-11-23T15:13:38","modified_gmt":"2023-11-23T15:13:38","slug":"system-2-attention-improves-accuracy-of-llm-responses","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","title":{"rendered":"System 2 Opm\u00e6rksomhed forbedrer n\u00f8jagtigheden af LLM-svar"},"content":{"rendered":"<p><strong>Store sprogmodeller (LLM) bliver ofte vildledt af bias eller irrelevant kontekst i en prompt. Forskere hos Meta har fundet en tilsyneladende enkel m\u00e5de at l\u00f8se det p\u00e5.<\/strong><\/p>\n<p>Efterh\u00e5nden som kontekstvinduerne bliver st\u00f8rre, kan de beskeder, vi giver til en LLM, blive l\u00e6ngere og mere detaljerede. LLM'er er blevet bedre til at opfange nuancer eller mindre detaljer i vores beskeder, men nogle gange kan det forvirre dem.<\/p>\n<p>Tidlig maskinl\u00e6ring brugte en \"h\u00e5rd opm\u00e6rksomhed\"-tilgang, der udpegede den mest relevante del af et input og kun reagerede p\u00e5 det. Det fungerer fint, n\u00e5r man skal lave en billedtekst til et billede, men d\u00e5rligt, n\u00e5r man skal overs\u00e6tte en s\u00e6tning eller besvare et sp\u00f8rgsm\u00e5l med flere lag.<\/p>\n<p>De fleste LLM'er bruger nu en \"soft attention\"-tilgang, som tokeniserer hele beskeden og tildeler v\u00e6gt til hver enkelt.<\/p>\n<p>Meta foresl\u00e5r en tilgang kaldet <a href=\"https:\/\/arxiv.org\/pdf\/2311.11829.pdf\" target=\"_blank\" rel=\"noopener\">System 2 Opm\u00e6rksomhed<\/a> (S2A) for at f\u00e5 det bedste fra begge verdener. 
S2A uses an LLM's natural language processing ability to take your prompt and strip out bias and irrelevant information before it starts drafting a response.<\/p>\n<p>Here's an example.<\/p>\n<figure id=\"attachment_7673\" aria-describedby=\"caption-attachment-7673\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7673 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png\" alt=\"\" width=\"1200\" height=\"750\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png 1200w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-300x188.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-1024x640.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-768x480.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-370x231.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-800x500.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-20x13.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-740x463.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-77x48.png 77w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-7673\" class=\"wp-caption-text\">S2A math example. Source: arXiv<\/figcaption><\/figure>\n<p>S2A removes the information about Max because it is irrelevant to the question. S2A regenerates an optimized prompt before it starts working on it. 
LLMs are notoriously bad at <a href=\"https:\/\/dailyai.com\/da\/2023\/10\/chatgpts-accounting-skills-are-put-to-the-test\/\">math<\/a>, so making the prompt less confusing is a big help.<\/p>\n<p>LLMs are people-pleasers, happy to agree with you even when you're wrong. S2A strips any bias from a query and then processes only the relevant parts of it. This reduces what AI researchers call \"sycophancy\", an AI model's tendency to kiss up to you.<\/p>\n<figure id=\"attachment_7674\" aria-describedby=\"caption-attachment-7674\" style=\"width: 1190px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7674 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png\" alt=\"\" width=\"1190\" height=\"584\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png 1190w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-1024x503.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-768x377.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-370x182.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-800x393.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-740x363.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-20x10.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-98x48.png 98w\" sizes=\"auto, (max-width: 1190px) 100vw, 1190px\" \/><figcaption id=\"caption-attachment-7674\" class=\"wp-caption-text\">S2A sycophancy reduction. 
Source: arXiv<\/figcaption><\/figure>\n<p>S2A is really just a system prompt that asks the LLM to refine the original prompt a little before it starts working on it. The results the researchers achieved on math, factual, and long-form questions were impressive.<\/p>\n<p>Here's an example of the improvements S2A achieved on factual questions. The baseline was responses to prompts that contained bias, while the oracle prompt was a human-refined ideal prompt.<\/p>\n<p>S2A comes very close to the oracle prompt's results, delivering almost a 50% improvement in accuracy over the baseline prompt.<\/p>\n<figure id=\"attachment_7675\" aria-describedby=\"caption-attachment-7675\" style=\"width: 586px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7675 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png\" alt=\"\" width=\"586\" height=\"342\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png 586w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-300x175.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-370x216.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-82x48.png 82w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><figcaption id=\"caption-attachment-7675\" class=\"wp-caption-text\">S2A comparison of results. Source: arXiv<\/figcaption><\/figure>\n<p>So what's the catch? Preprocessing the original query before answering it adds extra computation to the process. 
If the prompt is long and contains a lot of relevant information, regenerating it can add significant cost.<\/p>\n<p>Users are unlikely to get better at writing well-crafted prompts, so S2A could be a good way to work around that.<\/p>\n<p>Will Meta build S2A into its <a href=\"https:\/\/dailyai.com\/da\/2023\/07\/meta-and-microsoft-release-advanced-ai-llama-2-for-free\/\">Llama<\/a> model? We don't know, but you can apply the S2A approach yourself.<\/p>\n<p>If you're careful to leave opinions or leading suggestions out of your prompts, you're more likely to get accurate responses from these models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase, the prompts we enter into an LLM can become longer and more detailed. LLMs have become better at picking up on nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. 
It works fine when you're trying to caption an image,<\/p>","protected":false},"author":6,"featured_media":7676,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,131],"class_list":["post-7671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>System 2 Attention improves accuracy of LLM responses | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"System 2 Attention improves accuracy of LLM responses | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-23T15:10:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-23T15:13:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"System 2 Attention improves accuracy of LLM 
responses\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"wordCount\":520,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"keywords\":[\"LLMS\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"name\":\"System 2 Attention improves accuracy of LLM responses | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-
improves-accuracy-of-llm-responses\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"System 2 Attention improves accuracy of LLM responses\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/w
ww.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"System 2-opm\u00e6rksomhed forbedrer n\u00f8jagtigheden af LLM-svar | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_locale":"da_DK","og_type":"article","og_title":"System 2 Attention improves accuracy of LLM responses | DailyAI","og_description":"Large Language Models (LLM) are often mislead by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. 
Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. This works fine when you\u2019re trying to caption an image,","og_url":"https:\/\/dailyai.com\/da\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_site_name":"DailyAI","article_published_time":"2023-11-23T15:10:15+00:00","article_modified_time":"2023-11-23T15:13:38+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Eugene van der Watt","Estimeret l\u00e6setid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"System 2 Attention improves accuracy of LLM 
responses","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"wordCount":520,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","keywords":["LLMS","Meta"],"articleSection":["Industry"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","url":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","name":"System 2-opm\u00e6rksomhed forbedrer n\u00f8jagtigheden af LLM-svar | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","width":1000,"height":666},{"@type":"BreadcrumbL
ist","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"System 2 Attention improves accuracy of LLM responses"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har en 
baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/7671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=7671"}],"version-history":[{"count":7,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/7671\/revisions"}],"predecessor-version":[{"id":7682,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/7671\/revisions\/7682"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/7676"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=7671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=7671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=7671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}