{"id":10866,"date":"2024-03-22T10:03:11","date_gmt":"2024-03-22T10:03:11","guid":{"rendered":"https:\/\/dailyai.com\/?p=10866"},"modified":"2024-03-28T09:32:30","modified_gmt":"2024-03-28T09:32:30","slug":"quiet-star-teaches-language-models-to-think-before-they-speak","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","title":{"rendered":"Quiet-STaR teaches language models to think before they speak"},"content":{"rendered":"<p><strong>Researchers from Stanford University and Notbad AI have developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output.<\/strong><\/p>\n<p>When we speak, we normally have an inner dialogue that shapes the words we eventually say. The more we think before speaking, the better the quality of our spoken words.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2403.09629.pdf\" target=\"_blank\" rel=\"noopener\">In their paper<\/a>, the researchers describe how they trained an LM (<a href=\"https:\/\/dailyai.com\/fr\/2024\/02\/mistral-ai-releases-new-model-and-chatbot-to-take-on-gpt-4\/\">Mistral-7B<\/a>) to learn to imitate this process in a generalized way. Quiet-STaR is a progression of an earlier technique called STaR, or Self-Taught Reasoner.<\/p>\n<p>STaR is a method of training a model with a few example questions accompanied by explanations (rationales) for the answers. 
The model uses these chain-of-thought examples to try to answer questions on its own, working out the rationales for itself.<\/p>\n<p>STaR evaluates whether or not its rationales lead to correct answers and refines them accordingly.<\/p>\n<p>As impressive as STaR is, its reasoning ability is limited to the question-answering (QA) contexts it sees during training. The goal of Quiet-STaR is to give an LM a generalized ability to learn to reason, or develop rationales, across a much wider range of text, not just QA datasets.<\/p>\n<h2>How does Quiet-STaR work?<\/h2>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Language models today are trained to reason either 1) generally, imitating online reasoning data, or 2) narrowly, self-teaching on their own solutions to specific tasks.<\/p>\n<p>Can LMs teach themselves to reason generally? \ud83c\udf1fIntroducing Quiet-STaR, self-teaching via internal monologue!\ud83e\uddf5 <a href=\"https:\/\/t.co\/WCSxLPZeCX\">pic.twitter.com\/WCSxLPZeCX<\/a><\/p>\n<p>- Eric Zelikman (@ericzelikman) <a href=\"https:\/\/twitter.com\/ericzelikman\/status\/1768663835106513041?ref_src=twsrc%5Etfw\">March 15, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>One of Quiet-STaR's key innovations is that it generates rationales, or thoughts, in parallel, following every token of the text it processes. 
It does not output these rationales, which is where the \"quiet\" part of the algorithm's name comes from.<\/p>\n<p>The algorithm processes the rationales through a \"mixing head\". Each rationale is evaluated by comparing the accuracy of the next-token prediction it produced against the prediction made by the base model.<\/p>\n<p>If the base model (without Quiet-STaR) makes a better prediction, then the rationale wasn't a good one. If the rationale leads to a more accurate next-token prediction, the algorithm knows it is on the right track.<\/p>\n<p>It then uses a reinforcement learning algorithm (REINFORCE) to learn which rationales help and which hurt the model's performance. The result is that the model learns a generalized ability to think before predicting the next token.<\/p>\n<h2>Quiet-STaR results<\/h2>\n<p>The researchers tested the Quiet-STaR-trained Mistral-7B model on the GSM8K math benchmark and the CommonsenseQA common-sense reasoning benchmark. 
They found that Quiet-STaR improved perplexity and zero-shot direct reasoning abilities on both CommonsenseQA (36.3% to 47.2%) and GSM8K (5.9% to 10.9%).<\/p>\n<figure id=\"attachment_10868\" aria-describedby=\"caption-attachment-10868\" style=\"width: 1334px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-10868\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results.jpg\" alt=\"\" width=\"1334\" height=\"518\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results.jpg 1334w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-300x116.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-1024x398.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-768x298.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-370x144.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-800x311.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-740x287.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-20x8.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Quiet-STaR-benchmark-results-124x48.jpg 124w\" sizes=\"auto, (max-width: 1334px) 100vw, 1334px\" \/><figcaption id=\"caption-attachment-10868\" class=\"wp-caption-text\">Quiet-STaR results on the GSM8K math reasoning and CommonsenseQA common-sense reasoning benchmarks. Each line represents a Quiet-STaR iteration with a different thought-token length and number of tokens predicted ahead of the rationale. The baseline is Mistral-7B without Quiet-STaR. 
Source: arXiv<\/figcaption><\/figure>\n<p>While Mistral-7B's math reasoning is still far from great, Quiet-STaR delivered a nearly 85% improvement over the base model, and it did so without any dataset-specific fine-tuning.<\/p>\n<p>The test results also showed that the performance gains were directly related to the number of tokens allocated to the model's internal thoughts. The more the model thinks before answering, the better the answer.<\/p>\n<p>These improvements come at the cost of significant computational overhead. The inner monologue the model engages in during the thinking process generates a lot of tokens.<\/p>\n<p>Hardware improvements will eventually make the added overhead of techniques like this less significant.<\/p>\n<p>The researchers conclude that future work on optimizing Quiet-STaR could also help. Dynamically predicting whether a thought process is needed, or how long it should be, could cut down on unnecessary thought tokens.<\/p>\n<p>The results of training a small model like Mistral-7B with Quiet-STaR are promising. 
The researchers believe that \"the same techniques applied to a better model would likely yield disproportionately better results\".<\/p>\n<h2>Ethical questions<\/h2>\n<p>Getting a language model to reason more like a human raises interesting questions and ethical problems.<\/p>\n<p>The researchers note that \"there is no way to know whether the reasoning expressed by the model in language accurately represents the model's internal processing\". The rationales the model generates are natural-language representations of its internal reasoning. Are they an accurate reflection?<\/p>\n<p>They further note that \"there are no guarantees against harmful or biased reasoning patterns if the model finds them useful\".<\/p>\n<p>We may be satisfied with an AI model's answer, but we may not like, or even understand, the thought process that produced it.<\/p>\n<p>One of the paper's lead authors, Eric Zelikman, joined Elon Musk's xAI this week. He may find that <a href=\"https:\/\/dailyai.com\/fr\/2024\/03\/elon-musks-xai-open-sources-its-llm-grok-1\/\">Grok<\/a> is less preoccupied with these ethical questions and more enthusiastic about advancing AI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Researchers from Stanford University and Notbad AI have developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output. When we speak, we normally have an inner dialogue that shapes the words we eventually say. 
The more we think before speaking, the better the quality of our spoken words. In their paper, the researchers describe how they trained an LM (Mistral-7B) to learn to imitate this process in a generalized way. Quiet-STaR is a progression of an earlier technique called STaR, or Self-Taught Reasoner. STaR is a method of training a model with a small amount of data.<\/p>","protected":false},"author":6,"featured_media":10869,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118],"class_list":["post-10866","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Quiet-STaR teaches language models to think before they speak | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Quiet-STaR teaches language models to think before they speak | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Researchers from Stanford University and Notbad AI developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output. When humans speak, we normally have an inner dialogue that shapes the words we eventually verbalize. The more we think before speaking, the better the quality of our spoken words. 
In their paper, the researchers describe how they trained an LM (Mistral-7B) to learn how to imitate this process in a generalized way. Quiet-STaR is a progression of another technique called STaR, or Self-Taught Reasoner. STaR is a method of training a model with a few\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-22T10:03:11+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T09:32:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Quiet-STaR teaches language models to think before they speak\",\"datePublished\":\"2024-03-22T10:03:11+00:00\",\"dateModified\":\"2024-03-28T09:32:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"},\"wordCount\":808,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"keywords\":[\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\",\"name\":\"Quiet-STaR teaches language models to think before they speak | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"datePublished\":\"2024-03-22T10:03:11+00:00\",\"dateModified\":\"2024-03-28T09:32:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/the-thinker.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/quiet-star-teaches-language-models-to-think-before-they-speak\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Quiet-STaR teaches language models to think before they speak\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Quiet-STaR apprend aux mod\u00e8les de langage \u00e0 r\u00e9fl\u00e9chir avant de parler | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","og_locale":"fr_FR","og_type":"article","og_title":"Quiet-STaR teaches language models to think before they speak | DailyAI","og_description":"Researchers from Stanford University and Notbad AI developed Quiet-STaR, a technique that trains a language model (LM) to reason internally before generating an output. When humans speak, we normally have an inner dialogue that shapes the words we eventually verbalize. The more we think before speaking, the better the quality of our spoken words. In their paper, the researchers describe how they trained an LM (Mistral-7B) to learn how to imitate this process in a generalized way. Quiet-STaR is a progression of another technique called STaR, or Self-Taught Reasoner. 
STaR is a method of training a model with a few","og_url":"https:\/\/dailyai.com\/fr\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","og_site_name":"DailyAI","article_published_time":"2024-03-22T10:03:11+00:00","article_modified_time":"2024-03-28T09:32:30+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"\u00c9crit par":"Eugene van der Watt","Dur\u00e9e de lecture estim\u00e9e":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Quiet-STaR teaches language models to think before they 
speak","datePublished":"2024-03-22T10:03:11+00:00","dateModified":"2024-03-28T09:32:30+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"},"wordCount":808,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","keywords":["LLMS"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","url":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","name":"Quiet-STaR apprend aux mod\u00e8les de langage \u00e0 r\u00e9fl\u00e9chir avant de parler | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","datePublished":"2024-03-22T10:03:11+00:00","dateModified":"2024-03-28T09:32:30+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/the-thinker.webp","contentUrl":"https:\/\/dailyai.com
\/wp-content\/uploads\/2024\/03\/the-thinker.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Quiet-STaR teaches language models to think before they speak"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der 
Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. Lorsqu'il fait une pause dans sa consommation d'informations sur l'IA, vous le trouverez \u00e0 la table de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/10866","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=10866"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/10866\/revisions"}],"predecessor-version":[{"id":10873,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/10866\/revisions\/10873"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/10869"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=10866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=10866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=10866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}