{"id":4724,"date":"2023-08-27T20:07:21","date_gmt":"2023-08-27T20:07:21","guid":{"rendered":"https:\/\/dailyai.com\/?p=4724"},"modified":"2023-08-30T21:09:12","modified_gmt":"2023-08-30T21:09:12","slug":"ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/","title":{"rendered":"AI on the battlefield: who is responsible if it goes wrong?"},"content":{"rendered":"<p><strong>In a world where \"war games\" no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of responsibility is monumental.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Militaries are ramping up research and investment in AI. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Some analysts defined an 11-day conflict between Israel and Palestine in 2021 as the first \"<a href=\"https:\/\/dailyai.com\/da\/2023\/07\/israel-deploys-advanced-ai-onto-the-battlefield\/\">AI war<\/a>\", in which Israeli technologies assisted with intelligence and deployment on the battlefield. 
<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In recent months, the US military established a <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/08\/us-military-establishes-generative-ai-task-force\/\"><span style=\"font-weight: 400;\">generative AI task force<\/span><\/a><span style=\"font-weight: 400;\"> and successfully tested an <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/08\/the-us-air-force-confirms-successful-ai-powered-test-flight\/\"><span style=\"font-weight: 400;\">autonomous AI-powered jet<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AI's growing role in warfare raises a series of complex legal and ethical dilemmas that we have yet to answer, even though AI-powered military technology is being deployed right now. <\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Who is responsible when AI goes wrong?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Military technology manufacturer Rafael's \"<a href=\"https:\/\/www.rafael.co.il\/worlds\/land\/multi-service-network-centric-warfare\/\">Fire Weaver<\/a>\" locates enemy positions using sensors and suggests the best-placed unit to fire on them.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The product description states: \"Fire Weaver calculates the Rules of Engagement and directs the targeting and firing, using the most appropriate shooter for each target.\" <\/span><\/p>\n<p><span style=\"font-weight: 400;\">\"Calculates\" is the critical word here.\u00a0<\/span><span style=\"font-weight: 400;\">AI weaponry can reduce the decision to destroy a target to a binary yes\/no choice, so what if Fire Weaver mistakes a child for an enemy soldier? 
Or a truck carrying humanitarian aid instead of an enemy vehicle?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Right now, the lack of moral, ethical, and legal clarity is glaring. AI occupies a legal and ethical vacuum, and that may remain the case for some time, since lawmaking is notoriously slow - and it has rarely confronted a technology evolving this quickly.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In wider society, there are already many examples of AI compromising people and their rights. They offer a glimpse of the legislative and ethical void carved out by AI and its various applications.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">For example, <\/span><a href=\"https:\/\/dailyai.com\/da\/2023\/06\/first-in-kind-libel-lawsuit-filed-against-openai\/\"><span style=\"font-weight: 400;\">ChatGPT falsely alleged<\/span><\/a><span style=\"font-weight: 400;\"> that Georgia man Mark Walters had been found guilty of embezzlement and accused law professor Jonathan Turley of sexual assault. ChatGPT was wrong in both cases.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Similarly, Getty Images and several artists, authors, and other creators have launched <a href=\"https:\/\/dailyai.com\/da\/2023\/07\/more-authors-attempt-to-sue-openai-for-using-copyright-material\/\">copyright lawsuits<\/a> against technology companies for using training data to build their models.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Training data is often collected from third parties such as Common Crawl and \"shadow libraries\" like Bibliotik, which resemble torrent sites such as PirateBay.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In that case, who is liable for copyright infringement? The AI developers or the publishers of the datasets? 
It's a cartoonish circle of accountability, where each potentially guilty party points to those next to them, and in the end, everyone escapes the blame.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This hazy ethical landscape is far riskier in the context of AI weaponry and algorithmic military tactics, where a model's calculations could quite literally determine life or death.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Military organizations are already aware of the paradigm shift posed by AI weaponry. According to the Department of Defense (DoD), the first of its five \"<\/span><a href=\"https:\/\/www.ai.mil\/docs\/Ethical_Principles_for_Artificial_Intelligence.pdf\"><span style=\"font-weight: 400;\">ethical principles for artificial intelligence<\/span><\/a><span style=\"font-weight: 400;\">\" for military uses is \"Responsible,\" defined as: \"DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This means that a human must ultimately be responsible for the actions of the machine. The military has always relied on the principle that someone - typically the commander or a soldier - must be held accountable for actions taken during warfare. Yet AI's role in the decision-making process is becoming increasingly murky.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The first question is, as AI assumes more sophisticated roles in targeting, surveillance, and other areas, does pressing the \"approve\" button equate to culpability?\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If an AI's predictions failed, for instance, resulting in civilian casualties, it's doubtful anyone would accept 'the machine' taking sole blame for an accident.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In turn, there may be situations where humans are wrongly blamed for their role in an accident they did not contribute to, which the anthropologist M.C. Elish describes as a \"<\/span><a href=\"https:\/\/estsjournal.org\/index.php\/ests\/article\/view\/260\"><span style=\"font-weight: 400;\">moral crumple zone<\/span><\/a><span style=\"font-weight: 400;\">.\"\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Elish's research into industrial and machine-based accidents suggests that humans tend to absorb the blame in any accident, even when the fault lies with the machine, the algorithm, or the decision-maker who approved the technology in the first place.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">She draws on several real-life examples, such as the nuclear accident at Three Mile Island, Pennsylvania, and the crash of Air France Flight 447, which were largely attributed to 'human error' rather than a more intricate series of failures distributed across multiple individuals and systems.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">On accountability in the AI era, Elish says: \"With regard to autonomous and robotic technologies, the rules, laws, and norms are still in formation and may be particularly susceptible to uncertainty or even evasions of responsibility.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This also raises questions about the role of democratic processes in warfare and the sovereignty of human decision-making. 
When the Vietnam War was broadcast into living rooms across America, the immediacy of the war's toll shaped public opinion and policy.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">By contrast, AI-assisted combat could remove those societal checks and balances from public opinion and democratic discourse.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The ethicist and philosopher Thomas Metzinger emphasizes that ethical norms are not only legal constructs but also social ones, arising from democratic processes. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">If algorithms make the decisions, human involvement - and with it moral responsibility - becomes diffuse and redundant.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">International legal ramifications of AI in warfare<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">One of the pillars of the Geneva Convention is the principle of \"distinction,\" which mandates distinguishing between combatants and civilians.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Technology has already failed to deliver on promises of enhanced protection for civilians: only 10% of those killed in US drone strikes during Obama's presidency were the intended targets, according to leaked papers from <\/span><a href=\"https:\/\/theintercept.com\/drone-papers\/\"><span style=\"font-weight: 400;\">The Intercept<\/span><\/a><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AI algorithms are only as good as the data they're trained on and the rules they're programmed to follow. 
When it comes to warfare, an algorithm can misinterpret data because of the fog of war, flawed training data, or deliberately deceptive enemy tactics.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The International Committee of the Red Cross (ICRC) has started discussions on the legality of autonomous weapons systems under existing international humanitarian law, but there are few concrete definitions.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They state that existing frameworks are ill-suited to the new challenges posed by AI, and the <\/span><a href=\"https:\/\/www.icrc.org\/en\/document\/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach%C2%A0\"><span style=\"font-weight: 400;\">ICRC's proposed principles<\/span><\/a><span style=\"font-weight: 400;\"> are vague; for example: \"Unpredictable autonomous weapon systems should be expressly ruled out.\" <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Where do we draw the line on 'unpredictability' when one small error could be catastrophic?<\/span><\/p>\n<p><iframe loading=\"lazy\" title=\"What are the dangers of autonomous weapons? | Laws of War | ICRC\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/8GwBTFRFlzA?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><span style=\"font-weight: 400;\">Moreover, the battlefield is perpetually changing and will present 'edge cases' - blind spots not accounted for in AI training. 
Building AI systems that respond to dynamic environmental conditions with the same reaction time as humans is exceptionally difficult.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While AI weaponry might maintain accuracy in typical battlefield scenarios, what happens when the environment drifts away from what the model believes is the \"ground truth,\" or when edge cases wreck its accuracy?<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Erosion of moral sensitivity<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Killing in war has changed greatly with the advent of modern military strategies and tactics.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">During World War II, historical analyses and accounts such as S.L.A. Marshall's controversial book \"<\/span><a href=\"https:\/\/history.army.mil\/html\/books\/070\/70-64\/cmhPub_70-64.pdf\"><span style=\"font-weight: 400;\">Men Against Fire<\/span><\/a><span style=\"font-weight: 400;\">\" suggest that only 15 to 25% of frontline soldiers fired their weapons with the intent to kill.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Today, a soldier operating a drone from thousands of miles away does not experience the immediate emotional and psychological impact of their actions, which has manifested in a generally lower incidence of PTSD and other mental health issues compared to those serving in the field.<\/span><\/p>\n<p><span style=\"font-weight: 400;\"> The very design of military technology has adapted to new paradigms of 'remote warfare'. 
The controllers used for drones have been noted for their resemblance to video game controllers, a design choice that may not be accidental.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">AI operators can make life-and-death decisions in an environment that abstracts the realities of war into data points and images on a screen, severing the last remaining moral connections we have with the lives of those embroiled in conflict.\u00a0<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Where do we go from here?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">As military AI evolves into what many fear it could become, assigning responsibility for mistakes and failures to any single individual, developer, organization, or entire group seems unlikely.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Who, or what, should bear the responsibility then? And how do we avoid stepping into a future where killing becomes more machine than human, and where ethical accountability dissipates into algorithmic abstractions?\u00a0<\/span><\/p>\n<p>In time, increasingly intelligent artificial systems will likely transform the ethics and the distribution of responsibility, particularly if they show signs of sentience or consciousness.<\/p>\n<p>But that does not account for how this situation is handled today, and like so much else in the world of AI, there are more questions than answers.<\/p>","protected":false},"excerpt":{"rendered":"<p>I en verden, hvor \"krigsspil\" ikke l\u00e6ngere kun refererer til br\u00e6tspil eller videospil, men til scenarier om liv og d\u00f8d, der er muliggjort af maskinel intelligens, er sp\u00f8rgsm\u00e5let om ansvar monumentalt. Milit\u00e6ret \u00f8ger forskningen og investeringerne i kunstig intelligens. 
Nogle analytikere definerede et 11-dages m\u00f8de mellem Israel og Pal\u00e6stina i 2021 som den f\u00f8rste \"AI-krig\", hvor israelske teknologier bistod med efterretninger og inds\u00e6ttelse p\u00e5 slagmarken. I de seneste m\u00e5neder har det amerikanske milit\u00e6r etableret en generativ AI-taskforce og med succes testet et autonomt AI-drevet jetfly. AI's stigende rolle i krigsf\u00f8relse giver en r\u00e6kke komplekse juridiske og etiske dilemmaer, som vi endnu ikke har besvaret, selv p\u00e5 trods af<\/p>","protected":false},"author":2,"featured_media":4726,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[87],"tags":[163,302,346,294,277,348,347,345],"class_list":["post-4724","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-opinions","tag-ai-risks","tag-ai-weapons","tag-battlefield","tag-darpa","tag-drones","tag-ethics","tag-palantir","tag-war"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI on the battlefield: who is responsible if it goes wrong? | DailyAI<\/title>\n<meta name=\"description\" content=\"In a world where &quot;war games&quot; no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of ethical responsibility is monumental.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI on the battlefield: who is responsible if it goes wrong? 
| DailyAI\" \/>\n<meta property=\"og:description\" content=\"In a world where &quot;war games&quot; no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of ethical responsibility is monumental.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-27T20:07:21+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-08-30T21:09:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/\"},\"author\":{\"name\":\"Sam 
Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"AI on the battlefield: who is responsible if it goes wrong?\",\"datePublished\":\"2023-08-27T20:07:21+00:00\",\"dateModified\":\"2023-08-30T21:09:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/\"},\"wordCount\":1414,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/shutterstock_2143842119.jpg\",\"keywords\":[\"AI risks\",\"AI weapons\",\"Battlefield\",\"DARPA\",\"Drones\",\"Ethics\",\"Palantir\",\"War\"],\"articleSection\":[\"Opinions &amp; Analysis\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/\",\"name\":\"AI on the battlefield: who is responsible if it goes wrong? 
| DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/shutterstock_2143842119.jpg\",\"datePublished\":\"2023-08-27T20:07:21+00:00\",\"dateModified\":\"2023-08-30T21:09:12+00:00\",\"description\":\"In a world where \\\"war games\\\" no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of ethical responsibility is monumental.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/shutterstock_2143842119.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/shutterstock_2143842119.jpg\",\"width\":1000,\"height\":667,\"caption\":\"War crimes\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI on the battlefield: who is responsible if it 
goes wrong?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam 
Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"AI p\u00e5 slagmarken: Hvem er ansvarlig, hvis det g\u00e5r galt? | DailyAI","description":"I en verden, hvor \"krigsspil\" ikke l\u00e6ngere kun refererer til br\u00e6tspil eller videospil, men til liv-og-d\u00f8d-scenarier, der faciliteres af maskinel intelligens, er sp\u00f8rgsm\u00e5let om etisk ansvar monumentalt.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/","og_locale":"da_DK","og_type":"article","og_title":"AI on the battlefield: who is responsible if it goes wrong? 
| DailyAI","og_description":"In a world where \"war games\" no longer refer solely to board games or video games but to life-and-death scenarios facilitated by machine intelligence, the question of ethical responsibility is monumental.","og_url":"https:\/\/dailyai.com\/da\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/","og_site_name":"DailyAI","article_published_time":"2023-08-27T20:07:21+00:00","article_modified_time":"2023-08-30T21:09:12+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Sam Jeans","Estimeret l\u00e6setid":"7 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"AI on the battlefield: who is responsible if it goes wrong?","datePublished":"2023-08-27T20:07:21+00:00","dateModified":"2023-08-30T21:09:12+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/"},"wordCount":1414,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg","keywords":["AI risks","AI weapons","Battlefield","DARPA","Drones","Ethics","Palantir","War"],"articleSection":["Opinions &amp; 
Analysis"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/","url":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/","name":"AI p\u00e5 slagmarken: Hvem er ansvarlig, hvis det g\u00e5r galt? | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg","datePublished":"2023-08-27T20:07:21+00:00","dateModified":"2023-08-30T21:09:12+00:00","description":"I en verden, hvor \"krigsspil\" ikke l\u00e6ngere kun refererer til br\u00e6tspil eller videospil, men til liv-og-d\u00f8d-scenarier, der faciliteres af maskinel intelligens, er sp\u00f8rgsm\u00e5let om etisk ansvar monumentalt.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/shutterstock_2143842119.jpg","width":1000,"height":667,"caption":"War 
crimes"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/08\/ai-on-the-battlefield-who-is-responsible-if-it-goes-wrong\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"AI on the battlefield: who is responsible if it goes wrong?"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam 
Jeans","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam er videnskabs- og teknologiforfatter og har arbejdet i forskellige AI-startups. N\u00e5r han ikke skriver, kan han finde p\u00e5 at l\u00e6se medicinske tidsskrifter eller grave i kasser med vinylplader.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/da\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/4724","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=4724"}],"version-history":[{"count":21,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/4724\/revisions"}],"predecessor-version":[{"id":4773,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/4724\/revisions\/4773"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/4726"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=4724"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=4724"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=4724"}],"curies":[{"name":"wp","href":"https
:\/\/api.w.org\/{rel}","templated":true}]}}