{"id":10434,"date":"2024-02-29T21:55:56","date_gmt":"2024-02-29T21:55:56","guid":{"rendered":"https:\/\/dailyai.com\/?p=10434"},"modified":"2024-03-07T07:21:27","modified_gmt":"2024-03-07T07:21:27","slug":"llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","title":{"rendered":"LLMs produce more inaccurate and biased outputs with longer inputs"},"content":{"rendered":"<p><strong>Despite rapid advances in LLMs, our understanding of how these models handle longer inputs remains weak.<\/strong><\/p>\n<p>Mosh Levy, Alon Jacoby, and Yoav Goldberg, of Bar-Ilan University and the Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with the length of the input text they are given to process.<\/p>\n<p><span style=\"font-weight: 400;\">They developed a reasoning framework specifically for this purpose, allowing them to dissect the influence of input length on LLM reasoning in a controlled setting.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The question framework posed different versions of the same question, each containing the information needed to answer it, padded with additional irrelevant text of varying lengths and types.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This isolates input length as a variable, ensuring that changes in model performance can be attributed directly to input length.<\/span><\/p>\n<h3><b>Key findings<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Levy, Jacoby, and Goldberg found that LLMs exhibit a notable decline in reasoning performance at input lengths well below what developers claim they can handle. They documented their findings <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\" target=\"_blank\" rel=\"noopener\">in this study<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The decline was observed consistently across all versions of the dataset, indicating a systemic problem with handling longer inputs rather than an issue tied to specific data samples or model architectures.<\/span><\/p>\n<p>As the researchers describe, \"our findings show a notable degradation in LLMs' reasoning performance at much shorter input lengths than their technical maximum. We show that the degradation trend appears in all versions of our dataset, although with different intensities.\"<\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_10436\" aria-describedby=\"caption-attachment-10436\" style=\"width: 569px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10436\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4.png\" alt=\"\" width=\"569\" height=\"469\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4.png 733w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-300x247.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-370x305.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-20x16.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/model4-58x48.png 58w\" sizes=\"auto, (max-width: 569px) 100vw, 569px\" \/><figcaption id=\"caption-attachment-10436\" class=\"wp-caption-text\">As input size increases, the ability to perform reasoning tasks declines.
These inputs consist of relevant text (highlighted in red) and irrelevant text (shown in gray), drawn from various sources and expanded incrementally. Answering accurately requires identifying two specific text segments, which may be located anywhere in the input. Performance data is aggregated from 600 samples. Source: Via <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\">ArXiv.<\/a><\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">The study also highlights that traditional metrics such as perplexity, typically used to evaluate LLMs, do not correlate with model performance on reasoning tasks involving long inputs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Further analysis revealed that the performance degradation did not depend solely on the presence of irrelevant information (padding), but was observed even when the padding consisted of duplicated relevant information.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">When we keep the two key spans together and add text around them, accuracy already drops. If we introduce paragraphs between the spans, results drop much further. The drop occurs both when the texts we add are similar to the task texts and when they are completely different.
3\/7 <a href=\"https:\/\/t.co\/c91l9uzyme\">pic.twitter.com\/c91l9uzyme<\/a><\/p>\n<p>- Mosh Levy (@mosh_levy) <a href=\"https:\/\/twitter.com\/mosh_levy\/status\/1762027631837368416?ref_src=twsrc%5Etfw\">February 26, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\n<span style=\"font-weight: 400;\">This suggests that the challenge for LLMs lies both in filtering out noise and in the inherent processing of longer text sequences.<\/span><\/p>\n<h2><b>Ignoring instructions<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">One critical failure mode highlighted in the study is the tendency of LLMs to ignore instructions embedded in the input as input length increases.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At times, the models also generated answers signaling uncertainty or a lack of sufficient information, such as \"There is not enough information in the text,\" despite all the necessary information being present.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Overall, LLMs seem to struggle to prioritize and focus on important information, including direct instructions, as input length grows.<\/span><\/p>\n<h2>Bias in responses<\/h2>\n<p><span style=\"font-weight: 400;\">Another notable issue was the increase in bias in the models' responses as inputs grew longer.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Specifically, LLMs became biased toward answering \"False\" as input length increased.
This bias suggests a distortion in the model's probability estimation or decision-making processes, possibly as a defense mechanism in response to the increased uncertainty of longer inputs.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The tendency to favor \"False\" answers may also reflect an underlying imbalance in the training data or an artifact of the models' training process, in which negative answers may be over-represented or associated with contexts of uncertainty and ambiguity.<\/span><\/p>\n<figure id=\"attachment_10437\" aria-describedby=\"caption-attachment-10437\" style=\"width: 477px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10437 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai.png\" alt=\"AI models\" width=\"477\" height=\"772\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai.png 477w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-185x300.png 185w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-370x599.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-20x32.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/biasai-30x48.png 30w\" sizes=\"auto, (max-width: 477px) 100vw, 477px\" \/><figcaption id=\"caption-attachment-10437\" class=\"wp-caption-text\">The models showed a tendency to answer binary questions as \"false\" as input length increased.
Source: Via <a href=\"https:\/\/arxiv.org\/pdf\/2402.14848.pdf\">ArXiv<\/a>.<\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">This bias affects the accuracy of model outputs and raises concerns about the reliability and fairness of LLMs in applications that require nuanced understanding and impartiality.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Implementing robust bias detection and mitigation strategies during model training and fine-tuning is essential to reduce unwarranted biases in model responses.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">E<\/span><span style=\"font-weight: 400;\">nsuring that training datasets are diverse, balanced, and representative of a wide range of scenarios can also help minimize bias and improve model generalization.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This adds to <\/span><a href=\"https:\/\/dailyai.com\/pt\/2024\/02\/generative-ai-systems-hallucinations-and-mounting-technical-debt\/\"><span style=\"font-weight: 400;\">other recent studies<\/span><\/a><span style=\"font-weight: 400;\"> that similarly highlight fundamental issues in how LLMs operate, leading to a situation in which this \"technical debt\" can threaten model functionality and integrity over time.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Despite rapid advances in LLMs, our understanding of how these models handle longer inputs remains weak.
Mosh Levy, Alon Jacoby, and Yoav Goldberg, of Bar-Ilan University and the Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with the length of the input text they are given to process. They developed a reasoning framework specifically for this purpose, which allowed them to dissect the influence of input length on LLM reasoning in a controlled setting. The question framework posed different versions of the same question, each containing the information needed to answer it.<\/p>","protected":false},"author":2,"featured_media":10438,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[118,110],"class_list":["post-10434","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-llms","tag-open-source"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI<\/title>\n<meta name=\"description\" content=\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LLMs produce more inaccurate and
biased outputs with longer inputs | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-29T21:55:56+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-07T07:21:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"},\"author\":{\"name\":\"Sam
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"LLMs produce more inaccurate and biased outputs with longer inputs\",\"datePublished\":\"2024-02-29T21:55:56+00:00\",\"dateModified\":\"2024-03-07T07:21:27+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"},\"wordCount\":760,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"keywords\":[\"LLMS\",\"Open-source\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\",\"name\":\"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"datePublished\":\"2024-02-29T21:55:56+00:00\",\"dateModified\":\"2024-03-07T07:21:27+00:00\",\"description\":\"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the 
performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/shutterstock_2328020525.jpg\",\"width\":1000,\"height\":667,\"caption\":\"LLM\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LLMs produce more inaccurate and biased outputs with longer inputs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI","description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","og_locale":"pt_PT","og_type":"article","og_title":"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI","og_description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","og_url":"https:\/\/dailyai.com\/pt\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","og_site_name":"DailyAI","article_published_time":"2024-02-29T21:55:56+00:00","article_modified_time":"2024-03-07T07:21:27+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"LLMs produce more inaccurate and biased outputs with longer inputs","datePublished":"2024-02-29T21:55:56+00:00","dateModified":"2024-03-07T07:21:27+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"},"wordCount":760,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","keywords":["LLMS","Open-source"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","url":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/","name":"LLMs produce more inaccurate and biased outputs with longer inputs | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","datePublished":"2024-02-29T21:55:56+00:00","dateModified":"2024-03-07T07:21:27+00:00","description":"Mosh Levy, Alon Jacoby, and Yoav Goldberg, from the Bar-Ilan University and Allen Institute for AI, investigated how the performance of large language models (LLMs) varies with changes in the length of the input text they are given to process.\u00a0","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/shutterstock_2328020525.jpg","width":1000,"height":667,"caption":"LLM"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/02\/llms-produce-more-inaccurate-and-biased-outputs-with-longer-inputs\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"LLMs produce more inaccurate and biased outputs with longer
inputs"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups.
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/pt\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10434","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=10434"}],"version-history":[{"count":6,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10434\/revisions"}],"predecessor-version":[{"id":10444,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10434\/revisions\/10444"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/10438"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=10434"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=10434"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=10434"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}