{"id":7671,"date":"2023-11-23T15:10:15","date_gmt":"2023-11-23T15:10:15","guid":{"rendered":"https:\/\/dailyai.com\/?p=7671"},"modified":"2023-11-23T15:13:38","modified_gmt":"2023-11-23T15:13:38","slug":"system-2-attention-improves-accuracy-of-llm-responses","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/","title":{"rendered":"System 2 Attention improves the accuracy of LLM responses"},"content":{"rendered":"<p><strong>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that.<\/strong><\/p>\n<p>As context windows grow, the prompts we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances and smaller details in our prompts, but sometimes this can confuse them.<\/p>\n<p>Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. This works well when you're captioning an image, but poorly when you're translating a sentence or answering a multi-layered question.<\/p>\n<p>Today, most LLMs use a \"soft attention\" approach, which tokenizes the entire prompt and assigns a weight to each token.<\/p>\n<p>Meta proposes an approach called <a href=\"https:\/\/arxiv.org\/pdf\/2311.11829.pdf\" target=\"_blank\" rel=\"noopener\">System 2 Attention<\/a> (S2A) to get the best of both worlds. 
S2A uses an LLM's natural language processing ability to take in your prompt and strip out bias and irrelevant information before it starts working on a response.<\/p>\n<p>Here's an example.<\/p>\n<figure id=\"attachment_7673\" aria-describedby=\"caption-attachment-7673\" style=\"width: 1200px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7673 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png\" alt=\"\" width=\"1200\" height=\"750\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example.png 1200w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-300x188.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-1024x640.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-768x480.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-370x231.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-800x500.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-20x13.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-740x463.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Math-example-77x48.png 77w\" sizes=\"auto, (max-width: 1200px) 100vw, 1200px\" \/><figcaption id=\"caption-attachment-7673\" class=\"wp-caption-text\">S2A math example. Source: arXiv<\/figcaption><\/figure>\n<p>S2A removes the information about Max, since it is irrelevant to the question. S2A then regenerates an optimized prompt before starting to work on it. 
LLMs are notoriously bad at <a href=\"https:\/\/dailyai.com\/pt\/2023\/10\/chatgpts-accounting-skills-are-put-to-the-test\/\">math<\/a>, so making the prompt less confusing is a big help.<\/p>\n<p>LLMs are people-pleasers and will happily agree with you even when you're wrong. S2A strips any bias out of a prompt and then processes only its relevant parts. This reduces what AI researchers call \"sycophancy\", a model's propensity to flatter its user.<\/p>\n<figure id=\"attachment_7674\" aria-describedby=\"caption-attachment-7674\" style=\"width: 1190px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7674 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png\" alt=\"\" width=\"1190\" height=\"584\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction.png 1190w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-300x147.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-1024x503.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-768x377.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-370x182.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-800x393.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-740x363.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-20x10.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-sycophancy-reduction-98x48.png 98w\" sizes=\"auto, (max-width: 1190px) 100vw, 1190px\" \/><figcaption id=\"caption-attachment-7674\" class=\"wp-caption-text\">S2A sycophancy reduction. 
Source: arXiv<\/figcaption><\/figure>\n<p>S2A is really just a system prompt that instructs the LLM to refine the original prompt a little before it starts working on it. The results the researchers obtained on math, factual, and long-form questions were impressive.<\/p>\n<p>As an example, here are the improvements S2A achieved on factual questions. The baseline consisted of responses to prompts that contained bias, while the oracle prompt was an ideal, human-refined prompt.<\/p>\n<p>S2A comes very close to the results of the oracle prompt and delivers an improvement of almost 50% in accuracy over the baseline prompt.<\/p>\n<figure id=\"attachment_7675\" aria-describedby=\"caption-attachment-7675\" style=\"width: 586px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7675 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png\" alt=\"\" width=\"586\" height=\"342\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results.png 586w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-300x175.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-370x216.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/S2A-Comparison-of-results-82x48.png 82w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><figcaption id=\"caption-attachment-7675\" class=\"wp-caption-text\">S2A comparison of results. Source: arXiv<\/figcaption><\/figure>\n<p>So what's the catch? Preprocessing the original prompt before responding adds extra compute requirements to the process. 
If the prompt is long and holds a lot of relevant information, regenerating the prompt can add significant cost.<\/p>\n<p>Users are unlikely to get better at writing well-crafted prompts, so S2A could be a good way to work around that.<\/p>\n<p>Will Meta integrate S2A into its <a href=\"https:\/\/dailyai.com\/pt\/2023\/07\/meta-and-microsoft-release-advanced-ai-llama-2-for-free\/\">Llama<\/a> models? We don't know, but you can take advantage of the S2A approach yourself.<\/p>\n<p>If you take care to leave opinions or leading suggestions out of your prompts, you're more likely to get accurate responses from these models.<\/p>","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase, the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \"hard attention\" approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you're trying to caption an image,<\/p>","protected":false},"author":6,"featured_media":7676,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,131],"class_list":["post-7671","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>System 2 Attention improves accuracy of LLM responses | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"System 2 Attention improves accuracy of LLM responses | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Large Language Models (LLM) are often misled by bias or irrelevant context in a prompt. Researchers at Meta have found a seemingly simple way to fix that. As context windows increase the prompts that we enter into an LLM can become longer and increasingly detailed. LLMs have become better at picking up on the nuances or smaller details in our prompts, but sometimes this can confuse them. Early machine learning used a \u201chard attention\u201d approach that singled out the most relevant part of an input and responded only to that. 
This works fine when you\u2019re trying to caption an image,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/11\/system-2-attention-improves-accuracy-of-llm-responses\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-23T15:10:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-11-23T15:13:38+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/11\/Simplify.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"System 2 Attention improves accuracy of LLM 
responses\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"},\"wordCount\":520,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"keywords\":[\"LLMS\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\",\"name\":\"System 2 Attention improves accuracy of LLM responses | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"datePublished\":\"2023-11-23T15:10:15+00:00\",\"dateModified\":\"2023-11-23T15:13:38+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-
improves-accuracy-of-llm-responses\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/Simplify.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/11\\\/system-2-attention-improves-accuracy-of-llm-responses\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"System 2 Attention improves accuracy of LLM responses\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/w
ww.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/7671","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=7671"}],"version-history":[{"count":7,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/7671\/revisions"}],"predecessor-version":[{"id":7682,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/7671\/revisions\/7682"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/7676"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=7671"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=7671"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=7671"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
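The two-stage mechanism the article describes, one LLM call that regenerates the prompt without bias or irrelevant context, then a second call that answers only the refined prompt, can be sketched in Python. This is a minimal sketch, not the paper's implementation: `call_llm`, `s2a_answer`, and the rewrite instruction are hypothetical names, and the model call is stubbed so the example runs offline; in practice `call_llm` would wrap any real chat-completion API.

```python
# Sketch of the System 2 Attention (S2A) two-stage prompting pipeline.
# `call_llm` is a stand-in for a real LLM endpoint; here it is stubbed
# so the example runs offline (it drops sentences with "I think" when
# asked to rewrite, and otherwise echoes the prompt it was given).

S2A_REWRITE_INSTRUCTION = (
    "Rewrite the following prompt, keeping only the context relevant to "
    "the question and removing opinions or leading suggestions. "
    "Return only the rewritten prompt.\n\nPrompt:\n"
)

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    if prompt.startswith(S2A_REWRITE_INSTRUCTION):
        # Stubbed "rewrite" behavior: drop opinionated sentences.
        original = prompt[len(S2A_REWRITE_INSTRUCTION):]
        kept = [s for s in original.split(". ") if "I think" not in s]
        return ". ".join(kept)
    # Stubbed "answer" behavior: echo what the model would condition on.
    return f"Answer based on: {prompt}"

def s2a_answer(user_prompt: str) -> str:
    # Stage 1: regenerate the prompt without bias or irrelevant context.
    refined = call_llm(S2A_REWRITE_INSTRUCTION + user_prompt)
    # Stage 2: answer the refined prompt only.
    return call_llm(refined)

biased = "I think the answer is 9. What is 12 - 4?"
print(s2a_answer(biased))  # → Answer based on: What is 12 - 4?
```

The extra rewrite call is also where the cost discussed above comes from: every request pays for one additional generation pass roughly the length of the original prompt.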