{"id":10866,"date":"2024-03-22T10:03:11","date_gmt":"2024-03-22T10:03:11","guid":{"rendered":"https:\/\/dailyai.com\/?p=10866"},"modified":"2024-03-28T09:32:30","modified_gmt":"2024-03-28T09:32:30","slug":"quiet-star-teaches-language-models-to-think-before-they-speak","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2024\/03\/quiet-star-teaches-language-models-to-think-before-they-speak\/","title":{"rendered":"Quiet-STaR l\u00e4r spr\u00e5kmodeller att t\u00e4nka efter innan de talar"},"content":{"rendered":"<p><strong>Forskare fr\u00e5n Stanford University och Notbad AI har utvecklat Quiet-STaR, en teknik som tr\u00e4nar en spr\u00e5kmodell (LM) att resonera internt innan den genererar en utdata.<\/strong><\/p>\n<p>N\u00e4r vi m\u00e4nniskor talar har vi normalt en inre dialog som formar de ord vi s\u00e5 sm\u00e5ningom uttalar. Ju mer vi t\u00e4nker innan vi talar, desto b\u00e4ttre blir kvaliteten p\u00e5 v\u00e5ra talade ord.<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2403.09629.pdf\" target=\"_blank\" rel=\"noopener\">I sin artikel<\/a>beskriver forskarna hur de tr\u00e4nade en LM (<a href=\"https:\/\/dailyai.com\/sv\/2024\/02\/mistral-ai-releases-new-model-and-chatbot-to-take-on-gpt-4\/\">Mistral-7B<\/a>) f\u00f6r att l\u00e4ra dig hur du kan imitera denna process p\u00e5 ett allm\u00e4nt s\u00e4tt. Quiet-STaR \u00e4r en vidareutveckling av en annan teknik som kallas STaR, eller Self-Taught Reasoner.<\/p>\n<p>STaR \u00e4r en metod f\u00f6r att tr\u00e4na en modell med n\u00e5gra exempel p\u00e5 fr\u00e5gor med f\u00f6rklaringar (rationaler) till svaren. 
The model uses these chain-of-thought examples to try to answer questions on its own and work out the rationales for itself.</p>
<p>STaR evaluates whether the rationales it arrives at lead to correct answers or not, and refines its rationales accordingly.</p>
<p>As impressive as STaR is, its reasoning ability is limited to the question-and-answer (QA) contexts it sees during training. The goal of Quiet-STaR is to give an LM a generalized ability to learn to reason, or develop rationales, across a broader range of text, not just QA datasets.</p>
<h2>How does Quiet-STaR work?</h2>
<blockquote class="twitter-tweet">
<p dir="ltr" lang="en">Language models today are trained to reason either 1) generally, by imitating online reasoning data, or 2) narrowly, by learning to solve specific tasks on their own</p>
<p>Can LMs teach themselves to reason generally?🌟Introducing Quiet-STaR, self-teaching via internal monologue!🧵 <a href="https://t.co/WCSxLPZeCX">pic.twitter.com/WCSxLPZeCX</a></p>
<p>- Eric Zelikman (@ericzelikman) <a href="https://twitter.com/ericzelikman/status/1768663835106513041?ref_src=twsrc%5Etfw">March 15, 2024</a></p></blockquote>
<p>One of the key innovations in Quiet-STaR is that it generates rationales, or thoughts, in parallel, following every token of the text it processes. It does not output these chain-of-thought rationales, hence the "Quiet" part of the algorithm's name.</p>
<p>The algorithm processes the rationales through a "mixing head".
Each rationale is evaluated on the accuracy of the next-token prediction it produced, compared with the prediction made by the base model.</p>
<p>If the base model (without Quiet-STaR) makes the better prediction, then the rationale wasn't a good one. If the rationale results in a more accurate next-token prediction, the algorithm knows it is on to something good.</p>
<p>It then uses a reinforcement learning algorithm (REINFORCE) to learn which rationales help and which hurt the model's performance. The result is that the model learns a generalized ability to think before predicting the next token.</p>
<h2>Quiet-STaR results</h2>
<p>The researchers tested the Quiet-STaR-trained Mistral-7B model on the GSM8K math and CommonsenseQA common-sense reasoning benchmarks.
They found that Quiet-STaR improved perplexity and zero-shot direct reasoning ability on both the CommonsenseQA (36.3% to 47.2%) and GSM8K (5.9% to 10.9%) benchmarks.</p>
<figure id="attachment_10868" aria-describedby="caption-attachment-10868" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-full wp-image-10868" src="https://dailyai.com/wp-content/uploads/2024/03/Quiet-STaR-benchmark-results.jpg" alt="" width="1334" height="518" /><figcaption id="caption-attachment-10868" class="wp-caption-text">Quiet-STaR results on the GSM8K grade-school math and CommonsenseQA common-sense reasoning benchmarks. Each line represents an iteration of Quiet-STaR with varying thought-token lengths and how many tokens ahead it reasoned. The baseline is Mistral-7B without Quiet-STaR. 
Source: arXiv</figcaption></figure>
<p>Although Mistral-7B's math reasoning is still not particularly good, Quiet-STaR delivered an improvement of almost 85% over the base model, and it did so without any dataset-specific fine-tuning.</p>
<p>The test results also showed that the performance gains were directly related to how many tokens were allotted to the model's internal thoughts. The longer it thought before answering, the better the answer.</p>
<p>These improvements come at the cost of significant compute overhead. The inner monologue the model carries on while thinking generates a lot of tokens.</p>
<p>Improvements in hardware will eventually make the extra overhead of techniques like this less significant.</p>
<p>The researchers conclude that future work on optimizing Quiet-STaR could also help. Dynamically predicting whether a thought is needed, or how long it should be, could cut down on unnecessary thought tokens.</p>
<p>The results from training a small model like Mistral-7B with Quiet-STaR are promising. The researchers believe that "the same techniques applied to a better model would likely yield disproportionately better results."</p>
<h2>Ethical questions</h2>
<p>Getting a language model to reason more like a human raises some interesting ethical questions.</p>
<p>The researchers note that "it is impossible to know that the reasoning the model expresses in language accurately represents the model's internal processing." The rationales the model generates are natural-language representations of its internal reasoning. 
Are they an accurate reflection?</p>
<p>They further note that "there are no safeguards against harmful or biased reasoning patterns if the model finds them useful."</p>
<p>We may be happy with an AI model's answer, but we may not like, or even understand, the thought process that led to it.</p>
<p>One of the paper's lead authors, Eric Zelikman, joined Elon Musk's xAI just this week. He may find that <a href="https://dailyai.com/sv/2024/03/elon-musks-xai-open-sources-its-llm-grok-1/">Grok</a> is less concerned about these ethical questions and more excited about the prospects of AI advancement.</p>
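The core training signal described above, rewarding an internal thought by how much it improves the next-token prediction relative to the base model and then reinforcing thoughts that help, can be illustrated with a toy sketch. Everything here (the hard-coded probability table, the two candidate thoughts, the multiplicative update) is invented purely for illustration; the actual method samples rationale tokens from Mistral-7B and applies the REINFORCE gradient to the model's weights.

```python
import math
import random

random.seed(0)

# Stand-in for a language model's next-token probability, with or without an
# internal rationale prepended to the context. In the real method these
# probabilities come from Mistral-7B; the numbers here are made up so the
# training signal is easy to see.
def next_token_prob(rationale=None):
    if rationale == "useful":
        return 0.6   # a helpful thought sharpens the prediction
    if rationale == "noise":
        return 0.1   # an unhelpful thought degrades it
    return 0.2       # base model, no thought

# Toy policy over which internal thought to generate.
policy = {"useful": 0.5, "noise": 0.5}
learning_rate = 0.1

for _ in range(200):
    # Sample a thought from the current policy.
    thought = random.choices(list(policy), weights=list(policy.values()))[0]
    # Reward: improvement in next-token log-likelihood over not thinking at
    # all -- the comparison made against the base model's prediction.
    reward = math.log(next_token_prob(thought)) - math.log(next_token_prob())
    # REINFORCE-style update: up-weight thoughts that earned positive reward,
    # then renormalize so the policy stays a probability distribution.
    policy[thought] *= math.exp(learning_rate * reward)
    total = sum(policy.values())
    policy = {k: v / total for k, v in policy.items()}

print(policy)  # the "useful" thought should dominate after training
```

The reward is positive only when the thought makes the next token more likely than the base model's prediction alone, so over many updates the policy concentrates on thoughts that actually help, which is the essence of the scheme the paper describes.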