{"id":7671,"date":"2023-11-23T15:10:15","date_gmt":"2023-11-23T15:10:15","guid":{"rendered":"https:\/\/dailyai.com\/?p=7671"},"modified":"2023-11-23T15:13:38","modified_gmt":"2023-11-23T15:13:38","slug":"system-2-attention-improves-accuracy-of-llm-responses","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nl\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","title":{"rendered":"Systeem 2 Aandacht verbetert nauwkeurigheid van LLM-reacties"},"content":{"rendered":"<p><strong>Grote taalmodellen (LLM) worden vaak misleid door vertekening of irrelevante context in een prompt. Onderzoekers van Meta hebben een schijnbaar eenvoudige manier gevonden om dat op te lossen.<\/strong><\/p>\n<p>Naarmate de contextvensters groter worden, kunnen de aanwijzingen die we aan een LLM geven langer en gedetailleerder worden. LLM's zijn beter geworden in het oppikken van de nuances of kleinere details in onze prompts, maar soms kan dit ze verwarren.<\/p>\n<p>In het begin gebruikte machine learning een \"harde aandacht\" benadering die het meest relevante deel van een input eruit pikte en alleen daarop reageerde. Dit werkt prima als je een afbeelding probeert te ondertitelen, maar slecht bij het vertalen van een zin of het beantwoorden van een vraag met meerdere lagen.<\/p>\n<p>De meeste LLM's gebruiken nu een \"zachte aandacht\"-benadering die de hele prompt toekent en aan elke prompt een gewicht toekent.<\/p>\n<p>Meta stelt een aanpak voor die <a href=\"https:\/\/arxiv.org\/pdf\/2311.11829.pdf\" target=\"_blank\" rel=\"noopener\">Systeem 2 Aandacht<\/a> (S2A) om het beste van beide werelden te krijgen. 
S2A uses an LLM's natural language processing to work through your question and strip out bias and irrelevant information before it sets to work on an answer.<\/p>\n<p>Here is an example.<\/p>\n<figure id=\"attachment_7673\" aria-describedby=\"caption-attachment-7673\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7673 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png\" alt=\"\" width=\"1200\" height=\"750\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png 1200w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-300x188.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-1024x640.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-768x480.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-370x231.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-800x500.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-20x13.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-740x463.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-77x48.png 77w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-7673\" class=\"wp-caption-text\">S2A math example. Source: arXiv<\/figcaption><\/figure>\n<p>S2A removes the information about Max because it is irrelevant to the question. S2A regenerates an optimized prompt before working on it. LLMs are notoriously bad at <a href=\"https:\/\/dailyai.com\/nl\/2023\/10\/chatgpts-accounting-skills-are-put-to-the-test\/\">math<\/a>, so making the prompt less confusing is a big help.<\/p>\n<p>LLMs are people pleasers that will happily tell you you're right, even when you're wrong. 
S2A strips any bias from a prompt and then processes only its relevant parts. This reduces what AI researchers call \"sycophancy\": an AI model's tendency to flatter and agree with the user.<\/p>\n<figure id=\"attachment_7674\" aria-describedby=\"caption-attachment-7674\" style=\"width: 1190px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7674 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png\" alt=\"\" width=\"1190\" height=\"584\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png 1190w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-1024x503.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-768x377.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-370x182.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-800x393.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-740x363.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-20x10.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-98x48.png 98w\" sizes=\"auto, (max-width: 1190px) 100vw, 1190px\" \/><figcaption id=\"caption-attachment-7674\" class=\"wp-caption-text\">S2A sycophancy reduction. Source: arXiv<\/figcaption><\/figure>\n<p>S2A is really just a system prompt that instructs the LLM to refine the original question a little before setting to work on it. The results the researchers achieved on math, factual, and long-form questions were impressive.<\/p>\n<p>For example, here are the improvements S2A achieved on factual questions. 
The baseline was answers to prompts that contained bias, while the oracle prompt was an ideal prompt refined by a human.<\/p>\n<p>S2A comes very close to the oracle prompt results, delivering almost a 50% improvement in accuracy over the baseline prompt.<\/p>\n<figure id=\"attachment_7675\" aria-describedby=\"caption-attachment-7675\" style=\"width: 586px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7675 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png\" alt=\"\" width=\"586\" height=\"342\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png 586w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-300x175.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-370x216.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-82x48.png 82w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><figcaption id=\"caption-attachment-7675\" class=\"wp-caption-text\">S2A comparison of results. Source: arXiv<\/figcaption><\/figure>\n<p>So what's the catch? Preprocessing the original prompt before answering it adds extra computation to the process. If the prompt is long and contains a lot of relevant information, regenerating it can add significant cost.<\/p>\n<p>Users are unlikely to get much better at writing well-formulated prompts, so S2A can be a good way to work around that.<\/p>\n<p>Will Meta build S2A into its <a href=\"https:\/\/dailyai.com\/nl\/2023\/07\/meta-and-microsoft-release-advanced-ai-llama-2-for-free\/\">Llama<\/a> model? 
We don't know, but you can use the S2A approach yourself.<\/p>\n<p>If you are careful to leave opinions and leading suggestions out of your questions, you are more likely to get accurate answers from these models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix this. As context windows grow, the prompts we enter into an LLM can become longer and more detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. This works fine when you're trying to caption an image,<\/p>","protected":false},"author":6,"featured_media":7676,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,131],"class_list":["post-7671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>System 2 Attention improves accuracy of LLM responses | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nl\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:locale\" content=\"nl_NL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"System 2 Attention improves accuracy of LLM responses | DailyAI\" 
\/>\n<meta property=\"og:description\" content=\"Large Language Models (LLM) are often mislead by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. This works fine when you\u2019re trying to caption an image,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nl\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-23T15:10:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-23T15:13:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Geschreven door\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Geschatte leestijd\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuten\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"System 2 Attention improves accuracy of LLM responses\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"wordCount\":520,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"keywords\":[\"LLMS\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nl-NL\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"name\":\"System 2 Attention improves accuracy of LLM responses | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\"},\"inLanguage\":\"nl-NL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"System 2 Attention improves accuracy of LLM responses\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nl-NL\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nl\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"System 2 Attention improves accuracy of LLM responses | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nl\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_locale":"nl_NL","og_type":"article","og_title":"System 2 Attention improves accuracy of LLM responses | DailyAI","og_description":"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,","og_url":"https:\/\/dailyai.com\/nl\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_site_name":"DailyAI","article_published_time":"2023-11-23T15:10:15+00:00","article_modified_time":"2023-11-23T15:13:38+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"System 2 Attention improves accuracy of LLM responses","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"wordCount":520,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","keywords":["LLMS","Meta"],"articleSection":["Industry"],"inLanguage":"nl-NL"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","url":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","name":"System 2 Attention improves accuracy of LLM responses | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb"},"inLanguage":"nl-NL","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"]}]},{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","width":1000,"height":666},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"System 2 Attention improves accuracy of LLM responses"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Uw dagelijkse dosis 
AI-nieuws","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nl-NL"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene heeft een achtergrond in elektrotechniek en houdt van alles wat met techniek te maken heeft. 
Als hij even pauzeert van het consumeren van AI-nieuws, kun je hem aan de snookertafel vinden.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nl\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/7671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/comments?post=7671"}],"version-history":[{"count":7,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/7671\/revisions"}],"predecessor-version":[{"id":7682,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/7671\/revisions\/7682"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media\/7676"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media?parent=7671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/categories?post=7671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/tags?post=7671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}