{"id":7671,"date":"2023-11-23T15:10:15","date_gmt":"2023-11-23T15:10:15","guid":{"rendered":"https:\/\/dailyai.com\/?p=7671"},"modified":"2023-11-23T15:13:38","modified_gmt":"2023-11-23T15:13:38","slug":"system-2-attention-improves-accuracy-of-llm-responses","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","title":{"rendered":"System 2 Attention improves accuracy of LLM responses"},"content":{"rendered":"<p><strong>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that.<\/strong><\/p>\n<p>As context windows grow, the prompts we send to an LLM can become longer and more detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them.<\/p>\n<p>Early machine learning methods used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. This works fine when you\u2019re trying to caption an image, but poorly when you\u2019re translating a sentence or answering a multi-part question.<\/p>\n<p>Most LLMs today use a \u201csoft attention\u201d approach, which makes sense of the entire prompt and assigns a weight to each part of it.<\/p>\n<p>Meta proposes an approach called <a href=\"https:\/\/arxiv.org\/pdf\/2311.11829.pdf\" target=\"_blank\" rel=\"noopener\">System 2 Attention<\/a> (S2A) to get the best of both worlds. 
S2A uses an LLM\u2019s natural language processing ability to take your prompt and strip out the bias and irrelevant information before it starts working on a response.<\/p>\n<p>Here\u2019s an example.<\/p>\n<figure id=\"attachment_7673\" aria-describedby=\"caption-attachment-7673\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7673 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png\" alt=\"\" width=\"1200\" height=\"750\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png 1200w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-300x188.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-1024x640.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-768x480.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-370x231.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-800x500.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-20x13.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-740x463.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-77x48.png 77w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-7673\" class=\"wp-caption-text\">S2A math example. Source: arXiv<\/figcaption><\/figure>\n<p>S2A removes the information about Max because it isn\u2019t relevant to the question. It then regenerates an optimized prompt before starting to work on it. 
LLMs are notoriously bad at <a href=\"https:\/\/dailyai.com\/fr\/2023\/10\/chatgpts-accounting-skills-are-put-to-the-test\/\">math<\/a>, so making the prompt less confusing really helps.<\/p>\n<p>LLMs are people pleasers and are happy to agree with you, even when you\u2019re wrong. S2A strips any bias out of a prompt and then processes only the relevant parts of it. This helps reduce what AI researchers call \u201csycophancy\u201d, an AI model\u2019s propensity to tell you what you want to hear.<\/p>\n<figure id=\"attachment_7674\" aria-describedby=\"caption-attachment-7674\" style=\"width: 1190px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7674 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png\" alt=\"\" width=\"1190\" height=\"584\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png 1190w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-1024x503.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-768x377.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-370x182.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-800x393.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-740x363.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-20x10.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-98x48.png 98w\" sizes=\"auto, (max-width: 1190px) 100vw, 1190px\" \/><figcaption id=\"caption-attachment-7674\" class=\"wp-caption-text\">S2A sycophancy reduction. 
Source: arXiv<\/figcaption><\/figure>\n<p>S2A is really just a system prompt that instructs the LLM to refine the original prompt a little before getting to work. The results the researchers achieved on math questions, factual questions, and long-form questions are impressive.<\/p>\n<p>As an example, here are the improvements S2A delivered on factual questions. The baseline was made up of responses to prompts that contained bias, while the oracle prompt was an ideal, human-refined prompt.<\/p>\n<p>S2A gets really close to the oracle prompt\u2019s results and delivers an accuracy improvement of almost 50% over the baseline prompt.<\/p>\n<figure id=\"attachment_7675\" aria-describedby=\"caption-attachment-7675\" style=\"width: 586px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7675 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png\" alt=\"\" width=\"586\" height=\"342\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png 586w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-300x175.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-370x216.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-82x48.png 82w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><figcaption id=\"caption-attachment-7675\" class=\"wp-caption-text\">S2A comparison of results. Source: arXiv<\/figcaption><\/figure>\n<p>So what\u2019s the catch? 
Preprocessing the original prompt before responding to it adds extra compute requirements to the process. If the prompt is long and contains a lot of relevant information, regenerating the prompt could add significant cost.<\/p>\n<p>Users are unlikely to get better at writing well-crafted prompts, so S2A could be a good way around that problem.<\/p>\n<p>Will Meta build S2A into its <a href=\"https:\/\/dailyai.com\/fr\/2023\/07\/meta-and-microsoft-release-advanced-ai-llama-2-for-free\/\">Llama<\/a> models? We don\u2019t know, but you can take advantage of the S2A approach yourself.<\/p>\n<p>If you take care to leave opinions and leading suggestions out of your prompts, you\u2019re more likely to get accurate responses from these models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase, the prompts we enter into an LLM can become longer and more detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,<\/p>","protected":false},"author":6,"featured_media":7676,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,131],"class_list":["post-7671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>System 2 Attention improves accuracy of LLM responses | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"System 2 Attention improves accuracy of LLM responses | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-23T15:10:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-23T15:13:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"System 2 Attention improves accuracy of LLM 
responses\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"wordCount\":520,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"keywords\":[\"LLMS\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"name\":\"System 2 Attention improves accuracy of LLM responses | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-
improves-accuracy-of-llm-responses\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"System 2 Attention improves accuracy of LLM responses\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/w
ww.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"L'attention du syst\u00e8me 2 am\u00e9liore la pr\u00e9cision des r\u00e9ponses LLM | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_locale":"fr_FR","og_type":"article","og_title":"System 2 Attention improves accuracy of LLM responses | DailyAI","og_description":"Large Language Models (LLM) are often mislead by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. 
Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. This works fine when you\u2019re trying to caption an image,","og_url":"https:\/\/dailyai.com\/fr\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","og_site_name":"DailyAI","article_published_time":"2023-11-23T15:10:15+00:00","article_modified_time":"2023-11-23T15:13:38+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"System 2 Attention improves accuracy of LLM 
responses","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"},"wordCount":520,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","keywords":["LLMS","Meta"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","url":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","name":"L'attention du syst\u00e8me 2 am\u00e9liore la pr\u00e9cision des r\u00e9ponses LLM | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","datePublished":"2023-11-23T15:10:15+00:00","dateModified":"2023-11-23T15:13:38+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg","width":1000,"height":666},{"@ty
pe":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"System 2 Attention improves accuracy of LLM responses"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der 
Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/7671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=7671"}],"version-history":[{"count":7,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/7671\/revisions"}],"predecessor-version":[{"id":7682,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/7671\/revisions\/7682"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/7676"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=7671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=7671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=7671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}