{"id":3343,"date":"2023-07-28T20:34:47","date_gmt":"2023-07-28T20:34:47","guid":{"rendered":"https:\/\/dailyai.com\/?p=3343"},"modified":"2024-03-28T00:46:33","modified_gmt":"2024-03-28T00:46:33","slug":"unmasking-the-deep-seated-biases-in-ai-systems","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/","title":{"rendered":"Unmasking the deep-seated biases in AI systems"},"content":{"rendered":"<p><b>The age of AI presents a complex interplay between technology and societal attitudes.\u00a0<\/b><\/p>\n<p><span style=\"font-weight: 400;\">The increased sophistication of AI systems is blurring the lines between humans and machines - is AI technology separate from ourselves? To what extent does AI inherit human flaws and failings alongside skills and knowledge?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It is perhaps tempting to imagine AI as an empirical technology, underpinned by the objectivity of mathematics, code, and computation.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But we have come to realize that the decisions AI systems make are highly subjective, shaped by the data they are exposed to - and it is humans who decide how that data is selected and assembled.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Herein lies a challenge, as AI training data often reflects the biases and discrimination that humanity struggles against.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> Even seemingly subtle forms of unconscious bias can be amplified by the model training process, ultimately revealing themselves as false facial matches in law enforcement contexts, credit denials, disease misdiagnoses, and degraded safety mechanisms in self-driving vehicles, among other things.\u00a0<\/span><\/p>\n<p><span 
style=\"font-weight: 400;\">Humanity's efforts to prevent discrimination across society remain a work in progress, yet AI is driving critical decision-making right now.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Can we work quickly enough to synchronize AI with modern values and prevent biased life-altering decisions and behaviors?\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Uncovering bias in AI<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Over the past decade, AI systems have been shown to mirror society's prejudices. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">These systems are not inherently biased - rather, they absorb the biases of their creators and of the data they are trained on.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Like humans, AI systems learn through exposure. The human brain is a seemingly endless registry of information - a library with nearly limitless shelves where we store experiences, knowledge, and memories.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Neuroscientific <\/span><a href=\"https:\/\/www.scientificamerican.com\/article\/can-your-brain-really-be-full\/\"><span style=\"font-weight: 400;\">studies<\/span><\/a><span style=\"font-weight: 400;\"> show that the brain has no \"maximum capacity\" and continues to sort and store information well into old age.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While far from perfect, the brain's progressive, iterative learning process helps us adapt to new cultural and societal values, from granting women the vote and accepting diverse identities to ending slavery and other forms of conscious prejudice. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We now live in an era where AI tools are used for critical decision-making in place of human judgment.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Many machine learning (ML) models learn from training data that forms the basis of their decision-making, and they cannot incorporate new information as effectively as the human brain. As a result, they are often unable to produce the up-to-date decisions we have come to rely on.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For example, AI models are used to identify facial matches for law enforcement purposes, <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/07\/ai-in-recruitment-are-the-risks-worth-the-rewards-of-speed-and-efficiency\/\"><span style=\"font-weight: 400;\">analyze resumes for job applications<\/span><\/a><span style=\"font-weight: 400;\">, and make health-critical decisions in clinical settings.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As society continues to integrate AI into our everyday lives, we must ensure it is equitable and accurate for everyone. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">That is not currently the case.\u00a0\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Case studies in AI bias<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">There are many real-world examples of AI-related bias, prejudice, and discrimination.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In some cases, the consequences of AI bias are life-changing, while in others they linger in the background, subtly influencing decisions.\u00a0<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">1. 
MIT's dataset bias<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">An MIT training dataset built in 2008 called <\/span><a href=\"https:\/\/groups.csail.mit.edu\/vision\/TinyImages\/\"><span style=\"font-weight: 400;\">Tiny Images<\/span><\/a><span style=\"font-weight: 400;\"> contained approximately 80,000,000 images across roughly 75,000 categories.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Originally conceived to teach AI systems to recognize people and objects in images, it became a popular benchmarking dataset for various computer vision (CV) applications.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A 2020 <a href=\"https:\/\/www.theregister.com\/2020\/07\/01\/mit_dataset_removed\/\">analysis by The Register<\/a> found that many <\/span><span style=\"font-weight: 400;\">Tiny Images carried obscene, racist, and sexist labels.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Antonio Torralba of MIT said the lab was unaware of these offensive labels, telling The Register: \"It is clear that we should have manually screened them.\" MIT later issued a statement saying it had withdrawn the dataset from service.<\/span><\/p>\n<figure id=\"attachment_3344\" aria-describedby=\"caption-attachment-3344\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3344 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-1024x281.png\" alt=\"Tiny Images statement\" width=\"1024\" height=\"281\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-1024x281.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-300x82.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-768x211.png 768w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-1536x421.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-2048x561.png 2048w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-370x101.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-800x219.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-740x203.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-20x5.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-1600x439.png 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/statement-175x48.png 175w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-3344\" class=\"wp-caption-text\">MIT's statement on Tiny Images. Source: <a href=\"https:\/\/groups.csail.mit.edu\/vision\/TinyImages\/\">Tiny Images<\/a>.<\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">This is not the only time a former benchmark dataset has been found riddled with problems. Labeled Faces in the Wild (LFW), a dataset of celebrity faces used extensively in facial recognition tasks, consists of 77.5% men and 83.5% white-skinned individuals.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Many of these veteran datasets have found their way into modern AI models, but they stem from an era of AI development focused on building systems that <\/span><i><span style=\"font-weight: 400;\">just work<\/span><\/i><span style=\"font-weight: 400;\"> rather than ones fit for deployment in real-world scenarios.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Once an AI system is trained on such a dataset, it does not necessarily enjoy the same privilege as the human brain of recalibrating to contemporary values. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Models can be updated iteratively, but it is a slow and imperfect process that cannot match the pace of human development.\u00a0<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">2: Image recognition: bias against dark-skinned individuals<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">In 2019, <\/span><a href=\"https:\/\/www.reuters.com\/article\/usa-crime-face\/u-s-government-study-finds-racial-bias-in-facial-recognition-tools-idINL1N28T29H\"><span style=\"font-weight: 400;\">the US government found<\/span><\/a><span style=\"font-weight: 400;\"> that top-performing facial recognition systems misidentify Black people at 5 to 10 times the rate of white people.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This is not just a statistical anomaly - it has serious real-world consequences, from Google Photos labeling Black people as gorillas to self-driving cars failing to recognize dark-skinned individuals and colliding with them.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In addition, there has been a wave of wrongful arrests and detentions involving false facial matches, perhaps most notably that of <\/span><a href=\"https:\/\/www.engadget.com\/facial-recognition-wrongful-arrest-lawsuit-new-jersey-201517290.html\"><span style=\"font-weight: 400;\">Nijeer Parks<\/span><\/a><span style=\"font-weight: 400;\">, who was falsely accused of shoplifting and traffic offenses despite being 50 km away from the incident. 
Parks subsequently spent 10 days in jail and had to pay thousands of dollars in legal costs.\u00a0<\/span><\/p>\n<figure id=\"attachment_3345\" aria-describedby=\"caption-attachment-3345\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3345 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-1024x576.jpg\" alt=\"Nijeer Parks\" width=\"1024\" height=\"576\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-1024x576.jpg 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-300x169.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-768x432.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-1536x864.jpg 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-370x208.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-800x450.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-20x11.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-740x416.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-1600x900.jpg 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks-85x48.jpg 85w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/facialmatchparks.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-3345\" class=\"wp-caption-text\">Nijeer Parks's false facial recognition match. 
Source: <a href=\"https:\/\/edition.cnn.com\/2021\/04\/29\/tech\/nijeer-parks-facial-recognition-police-arrest\/index.html\">CNN<\/a>.<\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">The influential 2018 study <\/span><a href=\"http:\/\/gendershades.org\/\"><span style=\"font-weight: 400;\">Gender Shades<\/span><\/a><span style=\"font-weight: 400;\"> further examined algorithmic bias. The study analyzed algorithms built by IBM and Microsoft and found poor accuracy when they were exposed to dark-skinned women, with error rates up to 34% higher than for light-skinned men.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This pattern proved consistent across 189 different algorithms. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">The video below, from the study's lead researcher Joy Buolamwini, provides an excellent guide to how facial recognition performance varies across skin tones.\u00a0<\/span><\/p>\n<p><iframe loading=\"lazy\" title=\"Gender Shades\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/TWWsW1w-BVo?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<h3><\/h3>\n<h3><span style=\"font-weight: 400;\">3: OpenAI's CLIP project<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">OpenAI's <\/span><a href=\"https:\/\/openai.com\/research\/clip\"><span style=\"font-weight: 400;\">CLIP project<\/span><\/a><span style=\"font-weight: 400;\">, released in 2021 and designed to match images with descriptive text, also illustrated ongoing problems with bias.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In an audit paper, the creators of CLIP highlighted their 
concerns, saying: \"CLIP attached some labels describing high-status occupations, such as 'executive' and 'doctor', disproportionately often to men. This resembles the biases found in Google Cloud Vision (GCV) and points to historical gendered differences.\"<\/span><\/p>\n<figure id=\"attachment_3346\" aria-describedby=\"caption-attachment-3346\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3346 size-large\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-1024x539.png\" alt=\"OpenAI CLIP\" width=\"1024\" height=\"539\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-1024x539.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-300x158.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-768x404.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-370x195.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-800x421.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-20x11.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-740x389.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP-91x48.png 91w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/CLIP.png 1120w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-3346\" class=\"wp-caption-text\">CLIP tended to associate men and women with problematic stereotypes such as \"lady\" and \"blonde\". 
Source: <a href=\"https:\/\/arxiv.org\/pdf\/2108.02818.pdf\">Evaluating CLIP<\/a>.<\/figcaption><\/figure>\n<h3><span style=\"font-weight: 400;\">4: Law enforcement: the PredPol controversy<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Another example of high-stakes algorithmic bias is <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/PredPol\"><span style=\"font-weight: 400;\">PredPol<\/span><\/a><span style=\"font-weight: 400;\">, a predictive policing algorithm used by various police departments in the US.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">PredPol was trained on historical crime data to predict future crime hotspots.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But because that data inherently reflects biased policing practices, the algorithm has been criticized for perpetuating racial profiling and disproportionately targeting minority neighborhoods.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">5: Bias in dermatological AI<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">In healthcare, the potential risks of AI bias become even clearer.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Take the example of AI systems designed to detect skin cancer. Many of these systems are trained on datasets consisting predominantly of light-skinned individuals.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A 2021 <\/span><a href=\"https:\/\/www.thelancet.com\/journals\/landig\/article\/PIIS2589-7500(21)00252-1\/fulltext\"><span style=\"font-weight: 400;\">study by the University of Oxford<\/span><\/a><span style=\"font-weight: 400;\"> examined 21 open access datasets of skin cancer images. 
They discovered that of the 14 datasets that disclosed their geographic origin, 11 consisted exclusively of images from Europe, North America, and Oceania.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Only 2,436 of the 106,950 images across the 21 databases had skin type information recorded. The researchers noted that \"only 10 images were from individuals recorded as having brown skin, and one was from an individual recorded as having dark brown or black skin.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As for ethnicity data, only 1,585 images included this information. The researchers found that \"no images were from individuals with an African, Afro-Caribbean, or South Asian background.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They concluded: \"Coupled with the geographic origins of the datasets, there was a massive under-representation of skin lesion images from darker-skinned populations.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If such AIs are deployed in clinical settings, biased datasets create a very real risk of misdiagnosis.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Dissecting bias in AI training datasets: a product of their creators?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Training data - most commonly text, speech, images, and video - gives a supervised machine learning (ML) model a basis for learning concepts.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">At the outset, AI systems are nothing more than blank canvases. 
They learn and form associations based on our data, essentially painting a picture of the world as it is depicted in their training datasets.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By learning from training data, the hope is that the model will apply the concepts it has learned to new, unseen data.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Once deployed, some advanced models can learn from new data, but their training data still governs their fundamental performance.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The first question to answer is: where does the data come from? Data collected from unrepresentative, often homogeneous, and historically unequal sources is problematic.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That likely applies to a significant amount of online data, including text and image data scraped from \"open\" or \"public\" sources.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The internet was created only a few decades ago, but it is no panacea of human knowledge and is far from equitable. 
Half the world does not use the internet, let alone contribute to it, meaning it is fundamentally unrepresentative of global society and culture.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">And while AI developers are constantly working to ensure the technology's benefits are not confined to the English-speaking world, the majority of training data (text and speech) is produced in English - meaning English-speaking contributors steer model outputs.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Researchers at Anthropic recently <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/06\/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models\/\"><span style=\"font-weight: 400;\">published a paper<\/span><\/a><span style=\"font-weight: 400;\"> on precisely this topic, concluding: \"If a language model disproportionately represents certain opinions, it risks imparting potentially undesirable effects such as promoting hegemonic worldviews and homogenizing people's perspectives and beliefs.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Although AI systems operate on the \"objective\" principles of mathematics and programming, they nonetheless exist within, and are shaped by, a deeply subjective human social context.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Potential solutions to algorithmic bias<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">If data is the fundamental problem, the solution to building fair models might seem simple: just make the datasets more balanced, right?\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Not quite. 
A <\/span><a href=\"https:\/\/arxiv.org\/abs\/1811.08489\"><span style=\"font-weight: 400;\">2019 study<\/span><\/a><span style=\"font-weight: 400;\"> showed that balancing datasets is not sufficient, as algorithms still act disproportionately on protected characteristics such as gender and race.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The authors write: \"Surprisingly, we show that even when datasets are balanced such that each label co-occurs equally with each gender, learned models amplify the association between labels and gender, as much as if data had not been balanced!\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They propose a de-biasing technique in which such labels are removed from the dataset entirely. Other techniques include adding random perturbations and distortions, which reduce an algorithm's attention to specific protected attributes.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While modifying machine learning methods and optimization is essential for producing unbiased outputs, advanced models are susceptible to change, or 'drift', meaning their performance does not necessarily remain consistent over the long term.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A model may be entirely unbiased at deployment, only to become biased later through increased exposure to new data.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">The movement for algorithmic transparency<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In her provocative book <\/span><i><span style=\"font-weight: 400;\">Artificial Unintelligence: How Computers Misunderstand the World<\/span><\/i><span style=\"font-weight: 400;\">, Meredith Broussard argues for increased 
\"algorithmic transparency\" to expose AI systems to multiple levels of ongoing scrutiny.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This means providing clear information about how a system works, how it was trained, and what data it was trained on.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While transparency initiatives are readily absorbed into the open-source AI landscape, proprietary models such as GPT, Bard, and Anthropic's Claude are 'black boxes', and only their developers know precisely how they work - and even that is a matter of debate.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The \"black box\" problem in AI means that external observers see only what goes into the model (inputs) and what comes out (outputs). The inner mechanics are entirely unknown except to their creators - much as the Magic Circle protects magicians' secrets. The AI simply pulls the rabbit out of the hat.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The black box question recently crystallized around reports of <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/07\/is-chatgpt-getting-worse-heres-everything-we-know-so-far\/\"><span style=\"font-weight: 400;\">GPT-4's potential decline in performance<\/span><\/a><span style=\"font-weight: 400;\">. GPT-4 users claim the model's capabilities have deteriorated rapidly, and while OpenAI has acknowledged this is true, it has not been entirely clear about why it is happening. That raises the question: do they even know?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AI researcher Dr. Sasha Luccioni says OpenAI's lack of transparency is a problem that also applies to other proprietary or closed AI model developers. 
\"Any results from closed models cannot be reproduced or verified, and therefore, from a scientific perspective, we are comparing raccoons and squirrels.\" <\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8220;<\/span><span style=\"font-weight: 400;\">It is not the job of researchers to continually monitor deployed LLMs. It is up to model creators to provide access to the underlying models, at least for audit purposes,\" she says.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Luccioni stressed that AI model developers should provide raw results from standard benchmarks such as SuperGLUE and WikiText, and from bias benchmarks such as BOLD and HONEST.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The battle against AI-driven bias and prejudice will likely be constant, requiring ongoing attention and research to keep model outputs in check as AI and society evolve together.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While regulation will demand forms of oversight and reporting, there are few hard-and-fast solutions to the question of algorithmic bias, and this is not the last we will hear of it.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>The age of AI presents a complex interplay between technology and societal attitudes.  The increased sophistication of AI systems is blurring the lines between humans and machines - is AI technology separate from ourselves? To what extent does AI inherit human flaws and failings alongside skills and knowledge? It is perhaps tempting to imagine AI as an empirical technology, underpinned by the objectivity of mathematics, code, and computation.  
But we have come to realize that the decisions AI systems make are highly subjective, shaped by the data they are exposed to - and it is humans who decide how that data is selected and assembled.\u00a0<\/p>","protected":false},"author":2,"featured_media":3347,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[148,213,118,117,93,257],"class_list":["post-3343","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-anthropic","tag-bias","tag-llms","tag-mit","tag-openai","tag-prejudice"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Unmasking the deep-seated biases in AI systems | DailyAI<\/title>\n<meta name=\"description\" content=\"The increased sophistication of AI systems is blurring the lines between humans and machines \u2013 is AI technology separate from ourselves?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Unmasking the deep-seated biases in AI systems | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The increased sophistication of AI systems is blurring the lines between humans and machines \u2013 is AI technology separate from ourselves?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-28T20:34:47+00:00\" \/>\n<meta 
property=\"article:modified_time\" content=\"2024-03-28T00:46:33+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"527\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Unmasking the deep-seated biases in AI 
systems\",\"datePublished\":\"2023-07-28T20:34:47+00:00\",\"dateModified\":\"2024-03-28T00:46:33+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/\"},\"wordCount\":2105,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_2319661185.jpg\",\"keywords\":[\"Anthropic\",\"Bias\",\"LLMS\",\"MIT\",\"OpenAI\",\"prejudice\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/\",\"name\":\"Unmasking the deep-seated biases in AI systems | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_2319661185.jpg\",\"datePublished\":\"2023-07-28T20:34:47+00:00\",\"dateModified\":\"2024-03-28T00:46:33+00:00\",\"description\":\"The increased sophistication of AI systems is blurring the lines between humans and machines \u2013 is AI technology separate from 
ourselves?\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_2319661185.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_2319661185.jpg\",\"width\":1000,\"height\":527,\"caption\":\"AI bias\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/unmasking-the-deep-seated-biases-in-ai-systems\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Unmasking the deep-seated biases in AI systems\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Afsl\u00f8ring af de dybtliggende fordomme i AI-systemer | DailyAI","description":"Den \u00f8gede sofistikering af AI-systemer udvisker gr\u00e6nserne mellem mennesker og maskiner - er AI-teknologi adskilt fra os selv?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/","og_locale":"da_DK","og_type":"article","og_title":"Unmasking the deep-seated biases in AI systems | DailyAI","og_description":"The increased sophistication of AI systems is blurring the lines between humans and machines \u2013 is AI technology separate from ourselves?","og_url":"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/","og_site_name":"DailyAI","article_published_time":"2023-07-28T20:34:47+00:00","article_modified_time":"2024-03-28T00:46:33+00:00","og_image":[{"width":1000,"height":527,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Sam Jeans","Estimeret l\u00e6setid":"10 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/"},"author":{"name":"Sam 
Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Unmasking the deep-seated biases in AI systems","datePublished":"2023-07-28T20:34:47+00:00","dateModified":"2024-03-28T00:46:33+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/"},"wordCount":2105,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg","keywords":["Anthropic","Bias","LLMS","MIT","OpenAI","prejudice"],"articleSection":["Ethics &amp; Society"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/","url":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/","name":"Afsl\u00f8ring af de dybtliggende fordomme i AI-systemer | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg","datePublished":"2023-07-28T20:34:47+00:00","dateModified":"2024-03-28T00:46:33+00:00","description":"Den \u00f8gede sofistikering af AI-systemer udvisker gr\u00e6nserne mellem mennesker og maskiner - er AI-teknologi adskilt fra os 
selv?","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_2319661185.jpg","width":1000,"height":527,"caption":"AI bias"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Unmasking the deep-seated biases in AI systems"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af 
AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er videnskabs- og teknologiforfatter og har arbejdet i forskellige AI-startups. 
N\u00e5r han ikke skriver, kan han finde p\u00e5 at l\u00e6se medicinske tidsskrifter eller grave i kasser med vinylplader.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3343","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=3343"}],"version-history":[{"count":19,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3343\/revisions"}],"predecessor-version":[{"id":3369,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3343\/revisions\/3369"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/3347"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=3343"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=3343"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=3343"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}