{"id":10434,"date":"2024-02-29T21:55:56","date_gmt":"2024-02-29T21:55:56","guid":{"rendered":"https:\/\/dailyai.com\/?p=10434"},"modified":"2024-03-07T07:21:27","modified_gmt":"2024-03-07T07:21:27","slug":"llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","title":{"rendered":"Les LLM produisent des r\u00e9sultats plus impr\u00e9cis et biais\u00e9s avec des entr\u00e9es plus longues."},"content":{"rendered":"<p><strong>Malgr\u00e9 les progr\u00e8s rapides des LLM, notre compr\u00e9hension de la mani\u00e8re dont ces mod\u00e8les g\u00e8rent des entr\u00e9es plus longues reste faible.<\/strong><\/p>\n<p>Mosh Levy, Alon Jacoby et Yoav Goldberg, de l'universit\u00e9 Bar-Ilan et de l'Allen Institute for AI, ont \u00e9tudi\u00e9 la mani\u00e8re dont les performances des grands mod\u00e8les de langage (LLM) varient en fonction de la longueur du texte d'entr\u00e9e qu'ils doivent traiter.<\/p>\n<p><span style=\"font-weight: 400;\">Ils ont d\u00e9velopp\u00e9 un cadre de raisonnement sp\u00e9cialement \u00e0 cette fin, ce qui leur a permis de diss\u00e9quer l'influence de la longueur de l'entr\u00e9e sur le raisonnement LLM dans un environnement contr\u00f4l\u00e9.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Le cadre de questionnement proposait diff\u00e9rentes versions de la m\u00eame question, chacune contenant les informations n\u00e9cessaires pour r\u00e9pondre \u00e0 la question, compl\u00e9t\u00e9es par un texte suppl\u00e9mentaire non pertinent de longueur et de type variables.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Cela permet d'isoler la longueur de l'entr\u00e9e en tant que variable, garantissant que les changements dans la performance du mod\u00e8le peuvent \u00eatre attribu\u00e9s directement \u00e0 la longueur de l'entr\u00e9e.<\/span><\/p>\n<h3><b>Principales conclusions<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Levy, Jacoby et Goldberg ont d\u00e9couvert que les LLM pr\u00e9sentent une baisse notable des performances de raisonnement \u00e0 des longueurs d'entr\u00e9e bien inf\u00e9rieures \u00e0 ce que les d\u00e9veloppeurs affirment qu'ils peuvent g\u00e9rer. Ils ont document\u00e9 leurs r\u00e9sultats <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\" target=\"_blank\" rel=\"noopener\">dans cette \u00e9tude<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Le d\u00e9clin a \u00e9t\u00e9 observ\u00e9 de mani\u00e8re coh\u00e9rente dans toutes les versions de l'ensemble de donn\u00e9es, ce qui indique un probl\u00e8me syst\u00e9mique li\u00e9 au traitement d'entr\u00e9es plus longues plut\u00f4t qu'un probl\u00e8me li\u00e9 \u00e0 des \u00e9chantillons de donn\u00e9es ou \u00e0 des architectures de mod\u00e8les sp\u00e9cifiques.\u00a0<\/span><\/p>\n<p>Comme le d\u00e9crivent les chercheurs, \"nos r\u00e9sultats montrent une d\u00e9gradation notable des performances de raisonnement des LLM \u00e0 des longueurs d'entr\u00e9e beaucoup plus courtes que leur maximum technique. 
[Figure: As input size grows, the ability to perform reasoning tasks declines. Inputs consist of relevant (red) and irrelevant (grey) text, drawn from different sources and extended progressively. Answering accurately requires identifying two specific text spans that may sit anywhere in the input. Performance data are aggregated over 600 samples. Source: via ArXiv, https://arxiv.org/pdf/2402.14848.pdf]

The study also highlights how traditional metrics such as perplexity, commonly used to evaluate LLMs, do not correlate with model performance on reasoning tasks involving long inputs.
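For reference, perplexity is the exponential of the average per-token negative log-likelihood. The minimal sketch below shows the standard definition (not the study's evaluation code), which hints at why the two can diverge: perplexity measures how predictable the text is to the model, not whether the model used the right spans to answer.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood per token).
    `token_logprobs` are the model's log-probabilities of each observed token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Example: a model that is fairly confident about every token
print(perplexity([-0.3, -0.1, -0.5, -0.2]))  # ~1.32, i.e. low perplexity
```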
A closer examination showed that the performance degradation did not depend solely on the presence of irrelevant information (padding): it was observed even when the padding consisted of duplicated relevant information.

As Mosh Levy (@mosh_levy) noted on February 26, 2024 (https://twitter.com/mosh_levy/status/1762027631837368416): "When we keep the two key spans together and add text around them, accuracy already drops. Introducing paragraphs between the spans makes the results fall even further. The drop occurs both when the added texts are similar to the task texts and when they are completely different." (3/7, pic.twitter.com/c91l9uzyme)

This suggests that the challenge for LLMs lies both in filtering out noise and in the inherent processing of longer text sequences.

## Ignoring instructions

A critical failure mode highlighted in the study is the tendency of LLMs to ignore instructions embedded in the input as input length grows.

The models also sometimes generate answers signaling uncertainty or insufficient information, such as "There is not enough information in the text," even though all the necessary information is present.

Overall, LLMs appear to struggle to prioritize and focus on key pieces of information, including direct instructions, as input length increases.

## Biased answers

Another notable problem was the increase in bias in the models' answers as inputs grew longer.

Specifically, the LLMs tended to answer "False" more often as input length increased.
This bias points to a skew in the model's probability estimates or decision-making processes, perhaps as a defensive mechanism in response to the increased uncertainty that comes with longer inputs.

The tendency to favor "False" answers could also reflect an underlying imbalance in the training data, or an artifact of the models' training process in which negative answers are over-represented or associated with contexts of uncertainty and ambiguity.

[Figure: Models tend to answer binary questions with "False" as input length grows. Source: via ArXiv, https://arxiv.org/pdf/2402.14848.pdf]

This bias affects the accuracy of model outputs and raises concerns about the reliability and fairness of LLMs in applications that demand nuanced understanding and impartiality.

Implementing robust bias detection and mitigation strategies during model training and fine-tuning is essential to reduce unwarranted bias in model responses.
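A simple way to check for this failure mode in practice is to bucket prompts by input length and track the fraction of "False" answers per bucket. The sketch below is a hypothetical harness under that assumption; `ask_model` is a placeholder for whatever API serves the model under test, not anything from the study.

```python
from collections import Counter

def ask_model(prompt: str) -> str:
    """Stand-in for the model call; should return the model's 'True'/'False' answer."""
    raise NotImplementedError  # plug in your model API here

def false_rate_by_length(samples_by_length: dict[int, list[str]]) -> dict[int, float]:
    """For each input-length bucket, compute the fraction of 'False' answers.
    A rate drifting toward 1.0 as length grows would reproduce the bias the study reports."""
    rates = {}
    for length, prompts in samples_by_length.items():
        answers = Counter(ask_model(p).strip().lower() for p in prompts)
        total = sum(answers.values())
        rates[length] = answers["false"] / total if total else 0.0
    return rates
```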
Ensuring that training datasets are diverse, balanced, and representative of a wide range of scenarios can also help minimize bias and improve model generalization.

This adds to other recent studies (https://dailyai.com/fr/2024/02/generative-ai-systems-hallucinations-and-mounting-technical-debt/) that similarly highlight fundamental problems in how LLMs operate, pointing to a situation in which this "technical debt" could threaten model functionality and integrity over time.