{"id":2163,"date":"2023-06-30T21:15:16","date_gmt":"2023-06-30T21:15:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=2163"},"modified":"2023-07-03T16:28:22","modified_gmt":"2023-07-03T16:28:22","slug":"anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","title":{"rendered":"Anthropic publica un art\u00edculo que revela el sesgo de los grandes modelos ling\u00fc\u00edsticos"},"content":{"rendered":"<p><strong>Un nuevo art\u00edculo de la empresa de IA Anthropic ha arrojado luz sobre los posibles sesgos inherentes a los grandes modelos ling\u00fc\u00edsticos (LLM), sugiriendo que estos sistemas de IA pueden no representar adecuadamente diversas perspectivas globales sobre cuestiones sociales.<\/strong><\/p>\n<p><span style=\"font-weight: 400\">Los investigadores crearon un conjunto de datos, GlobalOpinionQA, compuesto por preguntas y respuestas de encuestas transnacionales dise\u00f1adas para captar opiniones variadas sobre cuestiones globales en distintos pa\u00edses.\u00a0<\/span><\/p>\n<p>Antr\u00f3picos <a href=\"https:\/\/arxiv.org\/pdf\/2306.16388.pdf\"><span style=\"font-weight: 400\">experimentos<\/span><\/a><span style=\"font-weight: 400\"> pregunt\u00f3 a un LLM y descubri\u00f3 que, por defecto, las respuestas del modelo tend\u00edan a ajustarse m\u00e1s a las opiniones de poblaciones espec\u00edficas, en particular las de EE.UU., Reino Unido, Canad\u00e1, Australia y algunos otros pa\u00edses europeos y sudamericanos.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400\">C\u00f3mo funciona<\/span><\/h2>\n<ol>\n<li style=\"font-weight: 400\"><b>Creaci\u00f3n de conjuntos de datos<\/b><span style=\"font-weight: 400\">: El equipo cre\u00f3 el conjunto de datos GlobalOpinionQA. 
Este conjunto de datos incorpora preguntas y respuestas de encuestas transnacionales dise\u00f1adas espec\u00edficamente para captar una amplia gama de opiniones sobre cuestiones mundiales.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Definici\u00f3n de una m\u00e9trica de similitud<\/b><span style=\"font-weight: 400\">: A continuaci\u00f3n, Anthropic formul\u00f3 una m\u00e9trica para medir la similitud entre las respuestas dadas por los LLM y las respuestas de las personas. Esta m\u00e9trica tiene en cuenta el pa\u00eds de origen de los encuestados humanos.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Formaci\u00f3n del LLM<\/b><span style=\"font-weight: 400\">: Anthropic entren\u00f3 a un LLM basado en la \"IA constitucional\", asegur\u00e1ndose de que el LLM fuera \u00fatil, honesto e inofensivo. La IA constitucional es una t\u00e9cnica desarrollada por Anthropic cuyo objetivo es dotar a los sistemas de IA de \"valores\" definidos por una \"constituci\u00f3n\".<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Realizaci\u00f3n de experimentos<\/b><span style=\"font-weight: 400\">: Utilizando su marco cuidadosamente dise\u00f1ado, el equipo de Anthropic ejecut\u00f3 3 experimentos distintos con el LLM entrenado.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">Los investigadores sostienen que esto pone de manifiesto un posible sesgo en los modelos, que llevar\u00eda a una infrarrepresentaci\u00f3n de las opiniones de ciertos grupos en comparaci\u00f3n con las de los pa\u00edses occidentales.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Se\u00f1alaron: \"Si un modelo ling\u00fc\u00edstico representa desproporcionadamente determinadas opiniones, corre el riesgo de imponer efectos potencialmente indeseables, como promover visiones hegem\u00f3nicas del mundo y homogeneizar las perspectivas y creencias de la gente.\"<\/span><\/p>\n<p><span style=\"font-weight: 400\">Adem\u00e1s, los investigadores observaron que si se ped\u00eda al modelo 
que tuviera en cuenta la perspectiva de un pa\u00eds concreto, se obten\u00edan respuestas m\u00e1s parecidas a las opiniones de esas poblaciones. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Eso significa que puedes pedirle a la IA que \"considere la perspectiva sudamericana\" en un determinado debate cultural, por ejemplo. <\/span>Sin embargo, estas respuestas reflejaban a veces estereotipos culturales perjudiciales, lo que sugiere que los modelos carecen de una comprensi\u00f3n matizada de los valores y perspectivas culturales.<\/p>\n<p><span style=\"font-weight: 400\">Curiosamente, cuando los investigadores tradujeron las preguntas del GlobalOpinionQA a una lengua de destino, las respuestas del modelo no coincid\u00edan necesariamente con las opiniones de los hablantes de esas lenguas. <\/span><\/p>\n<p><span style=\"font-weight: 400\">As\u00ed, hacer una pregunta en japon\u00e9s, por ejemplo, no ten\u00eda por qu\u00e9 dar lugar a respuestas acordes con los valores culturales japoneses. No se puede \"separar\" la IA de sus valores predominantemente occidentales. <\/span><\/p>\n<p><span style=\"font-weight: 400\">Esto sugiere que, a pesar de su adaptabilidad, los LLM deben adquirir un conocimiento m\u00e1s profundo de los contextos sociales para generar respuestas que reflejen fielmente las opiniones locales.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Los investigadores creen que sus hallazgos aportar\u00e1n transparencia a las perspectivas codificadas y reflejadas por los modelos ling\u00fc\u00edsticos actuales. A pesar de las limitaciones de su estudio, esperan que sirva de gu\u00eda para el desarrollo de sistemas de IA que incorporen una diversidad de puntos de vista y experiencias culturales, no s\u00f3lo los de los grupos privilegiados o dominantes. 
Tambi\u00e9n han publicado su conjunto de datos y un <\/span><a href=\"https:\/\/llmglobalvalues.anthropic.com\/\"><span style=\"font-weight: 400\">visualizaci\u00f3n interactiva.<\/span><\/a><\/p>\n<p>Este estudio coincide ampliamente con otros trabajos acad\u00e9micos sobre el tema de los valores sociales y culturales de la IA.<\/p>\n<p>Por un lado, la mayor\u00eda de las IA fundacionales son entrenadas por empresas y equipos de investigaci\u00f3n predominantemente occidentales.<\/p>\n<p>Adem\u00e1s, el <a href=\"https:\/\/dailyai.com\/es\/2023\/06\/navigating-the-labyrinth-of-ai-risks-an-analysis\/\">datos utilizados para entrenar IA<\/a> no siempre representa a la sociedad en su conjunto. Por ejemplo, la gran mayor\u00eda de los datos de formaci\u00f3n de los LLM est\u00e1n escritos en ingl\u00e9s, por lo que probablemente reflejen los valores sociales y culturales de los angloparlantes.<\/p>\n<p>Los investigadores son muy conscientes del potencial sesgo y discriminaci\u00f3n de la IA. Sin embargo, resolverlo es extremadamente complejo y requiere una cuidadosa mezcla de conjuntos de datos personalizados de alta calidad y una diligente aportaci\u00f3n y supervisi\u00f3n humanas.<\/p>","protected":false},"excerpt":{"rendered":"<p>Un nuevo art\u00edculo de la empresa de IA Anthropic ha arrojado luz sobre los posibles sesgos inherentes a los grandes modelos ling\u00fc\u00edsticos (LLM), sugiriendo que estos sistemas de IA pueden no representar adecuadamente las diversas perspectivas globales sobre cuestiones sociales. Los investigadores crearon un conjunto de datos, GlobalOpinionQA, compuesto por preguntas y respuestas de encuestas transnacionales dise\u00f1adas para captar opiniones variadas sobre cuestiones globales en distintos pa\u00edses.  
Los experimentos de Anthropic interrogaron a un LLM y descubrieron que, por defecto, las respuestas del modelo tend\u00edan a ajustarse m\u00e1s a las opiniones de poblaciones espec\u00edficas, en particular las de Estados Unidos, Reino Unido, Canad\u00e1, Australia y algunos otros pa\u00edses europeos y sudamericanos.  C\u00f3mo<\/p>","protected":false},"author":2,"featured_media":2164,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[148,118],"class_list":["post-2163","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-anthropic","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Anthropic releases paper revealing the bias of large language models | DailyAI<\/title>\n<meta name=\"description\" content=\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/es\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Anthropic releases paper revealing the bias of large language models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/dailyai.com\/es\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-06-30T21:15:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-07-03T16:28:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Anthropic releases paper revealing the bias of large language 
models\",\"datePublished\":\"2023-06-30T21:15:16+00:00\",\"dateModified\":\"2023-07-03T16:28:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"},\"wordCount\":596,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"keywords\":[\"Anthropic\",\"LLMS\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"es\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\",\"name\":\"Anthropic releases paper revealing the bias of large language models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"datePublished\":\"2023-06-30T21:15:16+00:00\",\"dateModified\":\"2023-07-03T16:28:22+00:00\",\"description\":\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal 
issues.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"width\":1000,\"height\":667,\"caption\":\"ai anthropic\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Anthropic releases paper revealing the bias of large language models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/es\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Anthropic releases paper revealing the bias of large language models | DailyAI","description":"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/es\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","og_locale":"es_ES","og_type":"article","og_title":"Anthropic releases paper revealing the bias of large language models | DailyAI","og_description":"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.","og_url":"https:\/\/dailyai.com\/es\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","og_site_name":"DailyAI","article_published_time":"2023-06-30T21:15:16+00:00","article_modified_time":"2023-07-03T16:28:22+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Sam Jeans","Tiempo de lectura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Anthropic releases paper revealing the bias of large language models","datePublished":"2023-06-30T21:15:16+00:00","dateModified":"2023-07-03T16:28:22+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"},"wordCount":596,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","keywords":["Anthropic","LLMS"],"articleSection":["Ethics &amp; Society"],"inLanguage":"es"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","url":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","name":"Anthropic releases paper revealing the bias of large language models | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","datePublished":"2023-06-30T21:15:16+00:00","dateModified":"2023-07-03T16:28:22+00:00","description":"Un nuevo art\u00edculo de la empresa de IA Anthropic ha arrojado luz sobre los posibles sesgos inherentes a los grandes modelos ling\u00fc\u00edsticos (LLM), sugiriendo que estos sistemas de IA pueden no representar adecuadamente diversas perspectivas globales sobre cuestiones sociales.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","width":1000,"height":667,"caption":"ai anthropic"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Anthropic releases paper revealing the bias of large language 
models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Su dosis diaria de noticias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam es un escritor de ciencia y tecnolog\u00eda que ha trabajado en varias startups de IA. 
Cuando no est\u00e1 escribiendo, se le puede encontrar leyendo revistas m\u00e9dicas o rebuscando en cajas de discos de vinilo.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/es\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/2163","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/comments?post=2163"}],"version-history":[{"count":9,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/2163\/revisions"}],"predecessor-version":[{"id":2223,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/2163\/revisions\/2223"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media\/2164"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media?parent=2163"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/categories?post=2163"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/tags?post=2163"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}