{"id":2163,"date":"2023-06-30T21:15:16","date_gmt":"2023-06-30T21:15:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=2163"},"modified":"2023-07-03T16:28:22","modified_gmt":"2023-07-03T16:28:22","slug":"anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","title":{"rendered":"Anthropic releases paper revealing the bias of large language models"},"content":{"rendered":"<p><strong>A new paper from AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.<\/strong><\/p>\n<p><span style=\"font-weight: 400\">The researchers built a dataset, GlobalOpinionQA, of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries.<\/span><\/p>\n<p>Anthropic's <a href=\"https:\/\/arxiv.org\/pdf\/2306.16388.pdf\"><span style=\"font-weight: 400\">experiments<\/span><\/a><span style=\"font-weight: 400\"> queried an LLM and found that, by default, the model's responses tended to align more closely with the opinions of particular populations, especially those from the USA, the UK, Canada, Australia, and a few other European and South American countries.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">How it works<\/span><\/h2>\n<ol>\n<li style=\"font-weight: 400\"><b>Dataset creation<\/b><span style=\"font-weight: 400\">: The team created the GlobalOpinionQA dataset. This dataset contains questions and answers from cross-national surveys specifically designed to capture a wide range of opinions on global issues.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Defining a similarity metric<\/b><span style=\"font-weight: 400\">: Next, Anthropic formulated a metric to measure the similarity between the responses given by LLMs and those given by people. The metric accounts for the human respondents' country of origin.<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Training the LLM<\/b><span style=\"font-weight: 400\">: Anthropic trained an LLM based on \"constitutional AI\", ensuring the LLM was helpful, honest, and harmless. Constitutional AI is a technique developed by Anthropic that aims to instill AI systems with \"values\" defined by a \"constitution\".<\/span><\/li>\n<li style=\"font-weight: 400\"><b>Running the experiments<\/b><span style=\"font-weight: 400\">: Using this carefully designed framework, the Anthropic team ran three separate experiments on the trained LLM.<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">The researchers argue this highlights potential biases in the models, leading to the underrepresentation of certain groups' opinions relative to those of Western countries.<\/span><\/p>\n<p><span style=\"font-weight: 400\">They noted: \"If a language model disproportionately represents certain opinions, it risks imposing potentially undesirable effects such as promoting hegemonic worldviews and homogenizing people's perspectives and beliefs.\"<\/span><\/p>\n<p><span style=\"font-weight: 400\">The researchers also observed that prompting the model to consider a particular country's perspective produced responses more closely aligned with the opinions of those populations.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This means you can, for example, ask the AI to \"consider the South American perspective\" on a particular cultural debate. <\/span>But these responses sometimes reflected harmful cultural stereotypes, suggesting the models lack a nuanced understanding of cultural values and perspectives.<\/p>\n<p><span style=\"font-weight: 400\">Interestingly, when the researchers translated the GlobalOpinionQA questions into a target language, the model's responses did not necessarily align with the opinions of speakers of those languages.<\/span><\/p>\n<p><span style=\"font-weight: 400\">So asking a question in, say, Japanese did not necessarily produce answers consistent with Japanese cultural values. The AI cannot be \"separated\" from its predominantly Western values.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This suggests that, despite their adaptability, LLMs need a deeper understanding of social contexts to generate responses that accurately reflect local opinions.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The researchers believe their findings bring transparency to the perspectives encoded in and reflected by current language models. Despite the limitations of their study, they hope it will guide the development of AI systems that represent a diversity of cultural viewpoints and experiences, not just those of privileged or dominant groups. They have also released their dataset and an <\/span><a href=\"https:\/\/llmglobalvalues.anthropic.com\/\"><span style=\"font-weight: 400\">interactive visualization.<\/span><\/a><\/p>\n<p>This study is broadly consistent with other academic work on the social and cultural values of AI.<\/p>\n<p>Firstly, most foundational AIs are trained by predominantly Western companies and research teams.<\/p>\n<p>Moreover, the <a href=\"https:\/\/dailyai.com\/da\/2023\/06\/navigating-the-labyrinth-of-ai-risks-an-analysis\/\">data used to train AIs<\/a> does not always represent society as a whole. For example, the vast majority of training data for LLMs is written in English, which likely reflects English-speaking societal and cultural values.<\/p>\n<p>Researchers are acutely aware of potential bias and discrimination in AI. But solving it is extremely complex, requiring a careful blend of tailored, high-quality datasets and diligent human input and oversight.<\/p>","protected":false},"excerpt":{"rendered":"<p>A new paper from AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues. The researchers built a dataset, GlobalOpinionQA, of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. Anthropic's experiments queried an LLM and found that, by default, the model's responses tended to align more closely with the opinions of specific populations, especially those from the USA, the UK, Canada, Australia, and a few other European and South American countries.<\/p>","protected":false},"author":2,"featured_media":2164,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[148,118],"class_list":["post-2163","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-anthropic","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Anthropic releases paper revealing the bias of large language models | DailyAI<\/title>\n<meta name=\"description\" content=\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Anthropic releases paper revealing the bias of large language models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-06-30T21:15:16+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-07-03T16:28:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Anthropic releases paper revealing the bias of large language 
models\",\"datePublished\":\"2023-06-30T21:15:16+00:00\",\"dateModified\":\"2023-07-03T16:28:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"},\"wordCount\":596,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"keywords\":[\"Anthropic\",\"LLMS\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\",\"name\":\"Anthropic releases paper revealing the bias of large language models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"datePublished\":\"2023-06-30T21:15:16+00:00\",\"dateModified\":\"2023-07-03T16:28:22+00:00\",\"description\":\"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on 
societal issues.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2299746973.jpg\",\"width\":1000,\"height\":667,\"caption\":\"ai anthropic\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Anthropic releases paper revealing the bias of large language models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Anthropic releases paper revealing the bias of large language models | DailyAI","description":"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","og_locale":"da_DK","og_type":"article","og_title":"Anthropic releases paper revealing the bias of large language models | DailyAI","og_description":"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.","og_url":"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","og_site_name":"DailyAI","article_published_time":"2023-06-30T21:15:16+00:00","article_modified_time":"2023-07-03T16:28:22+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Anthropic releases paper revealing the bias of large language models","datePublished":"2023-06-30T21:15:16+00:00","dateModified":"2023-07-03T16:28:22+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"},"wordCount":596,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","keywords":["Anthropic","LLMS"],"articleSection":["Ethics &amp; Society"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","url":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/","name":"Anthropic releases paper revealing the bias of large language models | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","datePublished":"2023-06-30T21:15:16+00:00","dateModified":"2023-07-03T16:28:22+00:00","description":"A new paper by AI company Anthropic has shed light on the potential biases inherent in large language models (LLMs), suggesting these AI systems may not adequately represent diverse global perspectives on societal issues.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2299746973.jpg","width":1000,"height":667,"caption":"ai anthropic"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Anthropic releases paper revealing the 
bias of large language models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups. When he's not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/2163","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=2163"}],"version-history":[{"count":9,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/2163\/revisions"}],"predecessor-version":[{"id":2223,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/2163\/revisions\/2223"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/2164"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=2163"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=2163"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=2163"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}