{"id":10866,"date":"2024-03-22T10:03:11","date_gmt":"2024-03-22T10:03:11","guid":{"rendered":"https:\/\/dailyai.com\/?p=10866"},"modified":"2024-03-28T09:32:30","modified_gmt":"2024-03-28T09:32:30","slug":"quiet-star-teaches-language-models-to-think-before-they-speak","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","title":{"rendered":"O Quiet-STaR ensina os modelos lingu\u00edsticos a pensar antes de falar"},"content":{"rendered":"<p><strong>Os investigadores da Universidade de Stanford e da Notbad AI desenvolveram o Quiet-STaR, uma t\u00e9cnica que treina um modelo de linguagem (LM) para raciocinar internamente antes de gerar um resultado.<\/strong><\/p>\n<p>Quando os seres humanos falam, normalmente temos um di\u00e1logo interior que molda as palavras que acabamos por verbalizar. Quanto mais pensarmos antes de falar, melhor ser\u00e1 a qualidade das nossas palavras.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2403.09629.pdf\" target=\"_blank\" rel=\"noopener\">No seu documento<\/a>Os investigadores descrevem como treinaram um LM (<a href=\"https:\/\/dailyai.com\/pt\/2024\/02\/mistral-ai-releases-new-model-and-chatbot-to-take-on-gpt-4\/\">Mistral-7B<\/a>) para aprender a imitar este processo de uma forma generalizada. O Quiet-STaR \u00e9 uma progress\u00e3o de outra t\u00e9cnica chamada STaR, ou Self-Taught Reasoner.<\/p>\n<p>O STaR \u00e9 um m\u00e9todo de treino de um modelo com alguns exemplos de perguntas com explica\u00e7\u00f5es (fundamentos) para as respostas. O modelo utiliza estes exemplos de cadeia de pensamento para tentar responder \u00e0s perguntas por si pr\u00f3prio, descobrindo os fundamentos.<\/p>\n<p>O STaR avalia se os racioc\u00ednios que apresenta resultam ou n\u00e3o em respostas correctas e aperfei\u00e7oa os seus racioc\u00ednios.<\/p>\n<p>Por muito impressionante que seja o STaR, a sua capacidade de racioc\u00ednio est\u00e1 limitada aos contextos de resposta a perguntas (QA) durante o treino. O objetivo do Quiet-STaR \u00e9 fornecer a um LM uma capacidade generalizada de aprender a raciocinar ou desenvolver racioc\u00ednios, numa gama mais vasta de textos, e n\u00e3o apenas em conjuntos de dados de QA.<\/p>\n<h2>Como \u00e9 que o Quiet-STaR funciona?<\/h2>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Atualmente, os modelos lingu\u00edsticos s\u00e3o treinados para raciocinar de forma 1) geral, imitando dados de racioc\u00ednio em linha, ou 2) restrita, auto-aprendendo as suas pr\u00f3prias solu\u00e7\u00f5es para tarefas espec\u00edficas<\/p>\n<p>Podem os LM ensinar-se a si pr\u00f3prios a raciocinar em geral?\ud83c\udf1fIntrodu\u00e7\u00e3o do Quiet-STaR, auto-ensino atrav\u00e9s de mon\u00f3logo interno!\ud83e\uddf5 <a href=\"https:\/\/t.co\/WCSxLPZeCX\">pic.twitter.com\/WCSxLPZeCX<\/a><\/p>\n<p>- Eric Zelikman (@ericzelikman) <a href=\"https:\/\/twitter.com\/ericzelikman\/status\/1768663835106513041?ref_src=twsrc%5Etfw\">15 de mar\u00e7o de 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Uma das principais inova\u00e7\u00f5es do Quiet-STaR \u00e9 que gera racioc\u00ednios, ou pensamentos, em paralelo, seguindo todos os tokens do texto que est\u00e1 a processar. N\u00e3o produz estes racioc\u00ednios em cadeia, da\u00ed a parte \"Silencioso\" do nome do algoritmo.<\/p>\n<p>O algoritmo processa os racioc\u00ednios atrav\u00e9s de uma \"cabe\u00e7a de mistura\". Cada racioc\u00ednio \u00e9 avaliado com base na exatid\u00e3o da previs\u00e3o da pr\u00f3xima ficha que produziu, em compara\u00e7\u00e3o com a previs\u00e3o feita pelo modelo de base.<\/p>\n<p>Se o modelo de base (sem o Quiet-STaR) fornecer uma previs\u00e3o melhor, ent\u00e3o o racioc\u00ednio n\u00e3o era bom. Se o racioc\u00ednio resultar numa previs\u00e3o mais exacta da pr\u00f3xima ficha, ent\u00e3o o algoritmo sabe que est\u00e1 a fazer uma coisa boa.<\/p>\n<p>De seguida, utiliza um algoritmo de aprendizagem por refor\u00e7o (REINFORCE) para aprender quais os racioc\u00ednios que ajudam e quais os que prejudicam o desempenho do modelo. O resultado \u00e9 que o modelo aprende uma capacidade generalizada de pensar antes de prever a pr\u00f3xima ficha.<\/p>\n<h2>Resultados do Quiet-STaR<\/h2>\n<p>Os investigadores testaram o modelo Mistral-7B treinado pelo Quiet-STaR nos benchmarks de matem\u00e1tica GSM8K e de racioc\u00ednio de senso comum CommonsenseQA. Descobriram que o Quiet-STaR melhorou a perplexidade e as capacidades de racioc\u00ednio direto de disparo zero em ambos os benchmarks CommonsenseQA (36,3% para 47,2%) e GSM8K (5,9% para 10,9%).<\/p>\n<figure id=\"attachment_10868\" aria-describedby=\"caption-attachment-10868\" style=\"width: 1334px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-10868\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results.jpg\" alt=\"\" width=\"1334\" height=\"518\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results.jpg 1334w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-300x116.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-1024x398.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-768x298.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-370x144.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-800x311.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-740x287.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-20x8.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-124x48.jpg 124w\" sizes=\"auto, (max-width: 1334px) 100vw, 1334px\" \/><figcaption id=\"caption-attachment-10868\" class=\"wp-caption-text\">Resultados do Quiet-STaR nos benchmarks de matem\u00e1tica do ensino b\u00e1sico GMSK8 e de racioc\u00ednio de senso comum CommonsenseQA. Cada linha representa uma itera\u00e7\u00e3o do Quiet-STaR com diferentes comprimentos de token de pensamento e quantos tokens \u00e0 frente ele raciocinou. A linha de base \u00e9 o Mistral-7B sem o Quiet-STaR. Fonte: arXiv<\/figcaption><\/figure>\n<p>Embora o racioc\u00ednio matem\u00e1tico do Mistral-7B ainda n\u00e3o seja excelente, o Quiet-STaR proporcionou uma melhoria de quase 85% em rela\u00e7\u00e3o ao modelo base, e isto sem qualquer afina\u00e7\u00e3o espec\u00edfica do conjunto de dados.\"<\/p>\n<p>Os resultados dos testes tamb\u00e9m mostraram que as melhorias no desempenho estavam diretamente relacionadas com o n\u00famero de tokens atribu\u00eddos aos pensamentos internos do modelo. Quanto mais ele pensava antes de responder, melhor era a resposta.<\/p>\n<p>Estas melhorias t\u00eam o custo de uma sobrecarga computacional substancial. O mon\u00f3logo interior em que o modelo se envolve durante o processo de pensamento gera muitos tokens.<\/p>\n<p>Os melhoramentos no hardware acabar\u00e3o por tornar menos consequentes as despesas adicionais decorrentes de t\u00e9cnicas como estas.<\/p>\n<p>Os investigadores concluem que o trabalho futuro de otimiza\u00e7\u00e3o do Quiet-STaR tamb\u00e9m pode ajudar. Prever dinamicamente se um processo de pensamento \u00e9 necess\u00e1rio, ou qual a sua dura\u00e7\u00e3o, poderia reduzir o n\u00famero de tokens de pensamento desnecess\u00e1rios.<\/p>\n<p>Os resultados do treino de um modelo pequeno como o Mistral-7B com o Quiet-STaR s\u00e3o prometedores. Os investigadores acreditam que \"as mesmas t\u00e9cnicas aplicadas a um modelo melhor produziriam provavelmente resultados desproporcionadamente melhores\".<\/p>\n<h2>Quest\u00f5es \u00e9ticas<\/h2>\n<p>Fazer com que um modelo lingu\u00edstico raciocine mais como um humano acarreta alguns problemas interessantes e quest\u00f5es \u00e9ticas.<\/p>\n<p>Os investigadores referem que \"\u00e9 imposs\u00edvel saber se o racioc\u00ednio expresso pelo modelo em linguagem representa exatamente o processamento interno do modelo\". Os racioc\u00ednios que o modelo gera s\u00e3o representa\u00e7\u00f5es em linguagem natural do seu racioc\u00ednio interno. Ser\u00e3o elas um reflexo exato?<\/p>\n<p>Observam ainda que \"n\u00e3o existem salvaguardas contra padr\u00f5es de racioc\u00ednio prejudiciais ou tendenciosos se o modelo os considerar \u00fateis\".<\/p>\n<p>Podemos ficar satisfeitos com a resposta de um modelo de IA, mas podemos n\u00e3o gostar, ou mesmo n\u00e3o compreender, o processo de racioc\u00ednio que a produziu.<\/p>\n<p>Um dos principais autores do artigo, Eric Zelikman, juntou-se esta semana \u00e0 xAI de Elon Musk. Ele pode achar que <a href=\"https:\/\/dailyai.com\/pt\/2024\/03\/elon-musks-xai-open-sources-its-llm-grok-1\/\">Grok<\/a> est\u00e1 menos preocupado com estas quest\u00f5es \u00e9ticas e mais entusiasmado com a perspetiva de avan\u00e7o da IA.<\/p>\n<p>&nbsp;<\/p>","protected":false},"excerpt":{"rendered":"<p>Os investigadores da Universidade de Stanford e da Notbad AI desenvolveram o Quiet-STaR, uma t\u00e9cnica que treina um modelo de linguagem (LM) para raciocinar internamente antes de gerar um resultado. Quando os humanos falam, normalmente temos um di\u00e1logo interior que molda as palavras que acabamos por verbalizar. Quanto mais pensarmos antes de falar, melhor ser\u00e1 a qualidade das nossas palavras faladas. No seu artigo, os investigadores descrevem como treinaram um LM (Mistral-7B) para aprender a imitar este processo de uma forma generalizada. O Quiet-STaR \u00e9 uma progress\u00e3o de outra t\u00e9cnica chamada STaR, ou Self-Taught Reasoner. O STaR \u00e9 um m\u00e9todo de treino de um modelo com alguns<\/p>","protected":false},"author":6,"featured_media":10869,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118],"class_list":["post-10866","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Quiet-STaR teaches language models to think before they speak | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Quiet-STaR teaches language models to think before they speak | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from Stanford University and Notbad AI developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output. When humans speak, we normally have an inner dialogue that shapes the words we eventually verbalize. The more we think before speaking, the better the quality of our spoken words. In their paper, the researchers describe how they trained an LM (Mistral-7B) to learn how to imitate this process in a generalized way. Quiet-STaR is a progression of another technique called STaR, or Self-Taught Reasoner. STaR is a method of training a model with a few\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-22T10:03:11+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T09:32:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Quiet-STaR teaches language models to think before they speak\",\"datePublished\":\"2024-03-22T10:03:11+00:00\",\"dateModified\":\"2024-03-28T09:32:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"},\"wordCount\":808,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\",\"name\":\"Quiet-STaR teaches language models to think before they speak | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"datePublished\":\"2024-03-22T10:03:11+00:00\",\"dateModified\":\"2024-03-28T09:32:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Quiet-STaR teaches language models to think before they speak\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Quiet-STaR ensina modelos lingu\u00edsticos a pensar antes de falar | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","og_locale":"pt_PT","og_type":"article","og_title":"Quiet-STaR teaches language models to think before they speak | DailyAI","og_description":"Researchers from Stanford University and Notbad AI developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output. When humans speak, we normally have an inner dialogue that shapes the words we eventually verbalize. The more we think before speaking, the better the quality of our spoken words. In their paper, the researchers describe how they trained an LM (Mistral-7B) to learn how to imitate this process in a generalized way. Quiet-STaR is a progression of another technique called STaR, or Self-Taught Reasoner. STaR is a method of training a model with a few","og_url":"https:\/\/dailyai.com\/pt\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","og_site_name":"DailyAI","article_published_time":"2024-03-22T10:03:11+00:00","article_modified_time":"2024-03-28T09:32:30+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tempo estimado de leitura":"4 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Quiet-STaR teaches language models to think before they speak","datePublished":"2024-03-22T10:03:11+00:00","dateModified":"2024-03-28T09:32:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"},"wordCount":808,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","url":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","name":"Quiet-STaR ensina modelos lingu\u00edsticos a pensar antes de falar | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","datePublished":"2024-03-22T10:03:11+00:00","dateModified":"2024-03-28T09:32:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Quiet-STaR teaches language models to think before they speak"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene vem de uma forma\u00e7\u00e3o em engenharia eletr\u00f3nica e adora tudo o que \u00e9 tecnologia. Quando faz uma pausa no consumo de not\u00edcias sobre IA, pode encontr\u00e1-lo \u00e0 mesa de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10866","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=10866"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10866\/revisions"}],"predecessor-version":[{"id":10873,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10866\/revisions\/10873"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/10869"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=10866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=10866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=10866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}