{"id":10434,"date":"2024-02-29T21:55:56","date_gmt":"2024-02-29T21:55:56","guid":{"rendered":"https:\/\/dailyai.com\/?p=10434"},"modified":"2024-03-07T07:21:27","modified_gmt":"2024-03-07T07:21:27","slug":"llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","title":{"rendered":"LLMs produce more inaccurate and biased outputs with longer inputs"},"content":{"rendered":"<p><strong>Despite rapid advances in LLMs, our understanding of how these models handle longer inputs remains poor.<\/strong><\/p>\n<p>Mosh Levy, Alon Jacoby, and Yoav Goldberg, from Bar-Ilan University and the Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.<\/p>\n<p><span style=\"font-weight: 400;\">They developed a reasoning framework specifically for this purpose, which allowed them to analyze how input length affects LLM reasoning in a controlled environment.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The question framework posed different versions of the same question, each containing the information necessary to answer it, padded with additional, irrelevant text of varying length and type.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This makes it possible to isolate input length as a variable, ensuring that changes in model performance can be attributed directly to input length.<\/span><\/p>\n<h3><b>Key findings<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Levy, Jacoby, and Goldberg discovered that LLMs exhibit a notable decline in reasoning performance at input lengths far below what developers claim their models can handle. They documented their findings <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\" target=\"_blank\" rel=\"noopener\">in this study<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The decline was observed consistently across all versions of the dataset, pointing to a systemic failure in handling longer inputs rather than a problem tied to specific data samples or model architectures.\u00a0<\/span><\/p>\n<p>As the researchers describe it: \"Our findings show a notable degradation in LLMs' reasoning performance at much shorter input lengths than their technical maximum. We show that the degradation trend appears in every version of our dataset, although with different intensities.\"<\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_10436\" aria-describedby=\"caption-attachment-10436\" style=\"width: 569px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10436\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4.png\" alt=\"\" width=\"569\" height=\"469\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4.png 733w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-300x247.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-370x305.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-20x16.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-58x48.png 58w\" sizes=\"auto, (max-width: 569px) 100vw, 569px\" \/><figcaption id=\"caption-attachment-10436\" class=\"wp-caption-text\">As input size increases, the ability to perform reasoning tasks declines. 
These inputs consist of relevant (highlighted in red) and irrelevant (highlighted in gray) text, drawn from different places and extended incrementally. Answering correctly requires identifying two specific text segments, which may be placed at random within the input. Performance figures are aggregated from 600 samples. Source: Via <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\">ArXiv.<\/a><\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">The study also shows that traditional metrics such as perplexity, often used to evaluate LLMs, do not correlate with model performance on reasoning tasks over long inputs.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Further investigation showed that the degraded performance was not due solely to the presence of irrelevant information (padding), but was observed even when the padding consisted of duplicated relevant information.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">When we keep the two core spans together and add text around them, accuracy already drops. When we add paragraphs between the spans, results drop even further. The drop occurs both when the added texts are similar to the task texts and when they are completely different. 
3\/7 <a href=\"https:\/\/t.co\/c91l9uzyme\">pic.twitter.com\/c91l9uzyme<\/a><\/p>\n<p>- Mosh Levy (@mosh_levy) <a href=\"https:\/\/twitter.com\/mosh_levy\/status\/1762027631837368416?ref_src=twsrc%5Etfw\">February 26, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\n<span style=\"font-weight: 400;\">This suggests that the challenge for LLMs lies both in filtering out noise and in the inherent processing of longer text sequences.<\/span><\/p>\n<h2><b>Ignoring instructions<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">A critical failure area highlighted in the study is the tendency of LLMs to ignore instructions embedded in the input as input length increases.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The models can also sometimes generate answers that signal uncertainty or a lack of sufficient information, such as \"There is not enough information in the text\", even though all the necessary information is present.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Overall, the LLMs consistently appear to struggle to prioritize and focus on key pieces of information, including direct instructions, as input length grows.\u00a0<\/span><\/p>\n<h2>Exhibiting bias in responses<\/h2>\n<p><span style=\"font-weight: 400;\">Another notable problem was that the models' answers became increasingly skewed as inputs grew longer.\u00a0 <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In particular, the LLMs were biased toward answering \"False\" as input length increased. 
This shift indicates a skew in probability estimation or decision processes within the model, possibly as a defensive mechanism in response to the increased uncertainty that comes with longer inputs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The tendency to favor \"False\" answers may also reflect an underlying imbalance in the training data or an artifact of the models' training process, where negative answers may be overrepresented or associated with contexts of uncertainty and ambiguity.\u00a0<\/span><\/p>\n<figure id=\"attachment_10437\" aria-describedby=\"caption-attachment-10437\" style=\"width: 477px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10437 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai.png\" alt=\"AI models\" width=\"477\" height=\"772\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai.png 477w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-185x300.png 185w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-370x599.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-20x32.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-30x48.png 30w\" sizes=\"auto, (max-width: 477px) 100vw, 477px\" \/><figcaption id=\"caption-attachment-10437\" class=\"wp-caption-text\">The models showed a tendency to answer binary questions with \"False\" as input length increased. 
Source: Via <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\">ArXiv<\/a>.<\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">This bias affects the accuracy of the models' outputs and raises concerns about the reliability and fairness of LLMs in applications that demand nuanced understanding and impartiality.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is important to implement robust strategies for detecting and mitigating bias during the models' training and fine-tuning phases in order to reduce unwarranted bias in model responses. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Ensuring that training datasets are diverse, balanced, and representative of a wide range of scenarios can also help minimize inaccuracies and improve model generalization.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This adds to <\/span><a href=\"https:\/\/dailyai.com\/sv\/2024\/02\/generative-ai-systems-hallucinations-and-mounting-technical-debt\/\"><span style=\"font-weight: 400;\">other recent studies<\/span><\/a><span style=\"font-weight: 400;\"> that similarly highlight fundamental problems in how LLMs operate, leading to a situation where \"technical debt\" can threaten model functionality and integrity over time.\u00a0<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Despite rapid advances in LLMs, our understanding of how these models handle longer inputs remains poor. Mosh Levy, Alon Jacoby, and Yoav Goldberg, from Bar-Ilan University and the Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process. They developed a reasoning framework specifically for this purpose, which allowed them to dissect the impact of input length on LLM reasoning in a controlled environment. The question framework posed different versions of the same question, each containing the information required to answer it.<\/p>","protected":false},"author":2,"featured_media":10438,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,110],"class_list":["post-10434","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-open-source"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI<\/title>\n<meta name=\"description\" content=\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LLMs produce more inaccurate and 
biased outputs with longer inputs | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-29T21:55:56+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-07T07:21:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"},\"author\":{\"name\":\"Sam 
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"LLMs produce more inaccurate and biased outputs with longer inputs\",\"datePublished\":\"2024-02-29T21:55:56+00:00\",\"dateModified\":\"2024-03-07T07:21:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"},\"wordCount\":760,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"keywords\":[\"LLMS\",\"Open-source\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\",\"name\":\"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"datePublished\":\"2024-02-29T21:55:56+00:00\",\"dateModified\":\"2024-03-07T07:21:27+00:00\",\"description\":\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the 
performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"width\":1000,\"height\":667,\"caption\":\"LLM\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LLMs produce more inaccurate and biased outputs with longer inputs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI","description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","og_locale":"sv_SE","og_type":"article","og_title":"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI","og_description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","og_url":"https:\/\/dailyai.com\/sv\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","og_site_name":"DailyAI","article_published_time":"2024-02-29T21:55:56+00:00","article_modified_time":"2024-03-07T07:21:27+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"LLMs produce more inaccurate and biased outputs with longer inputs","datePublished":"2024-02-29T21:55:56+00:00","dateModified":"2024-03-07T07:21:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"},"wordCount":760,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","keywords":["LLMS","Open-source"],"articleSection":["Industry"],"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","url":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","name":"LLMs produce more inaccurate and biased outputs with longer inputs | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","datePublished":"2024-02-29T21:55:56+00:00","dateModified":"2024-03-07T07:21:27+00:00","description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","width":1000,"height":667,"caption":"LLM"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"LLMs produce more inaccurate and biased outputs with longer 
inputs"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/sv\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10434","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=10434"}],"version-history":[{"count":6,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10434\/revisions"}],"predecessor-version":[{"id":10444,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10434\/revisions\/10444"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/10438"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=10434"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=10434"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=10434"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}