{"id":1659,"date":"2023-06-13T21:32:01","date_gmt":"2023-06-13T21:32:01","guid":{"rendered":"https:\/\/dailyai.com\/?p=1659"},"modified":"2024-03-28T00:48:07","modified_gmt":"2024-03-28T00:48:07","slug":"navigating-the-labyrinth-of-ai-risks-an-analysis","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/06\/navigating-the-labyrinth-of-ai-risks-an-analysis\/","title":{"rendered":"At navigere i labyrinten af AI-risici: en analyse"},"content":{"rendered":"<p><strong>Fort\u00e6llingen om risikoen ved kunstig intelligens er blevet mere og mere unipol\u00e6r, og teknologiledere og eksperter fra alle hj\u00f8rner presser p\u00e5 for at f\u00e5 regulering. Hvor trov\u00e6rdige er de beviser, der dokumenterer AI-risici?\u00a0<\/strong><\/p>\n<p><span style=\"font-weight: 400\">Risikoen ved AI appellerer til sanserne. <\/span><span style=\"font-weight: 400\">Der er noget dybt intuitivt ved at frygte robotter, der kan bedrage os, overmande os eller g\u00f8re os til en vare, der er sekund\u00e6r i forhold til deres egen eksistens.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Debatterne om AI's risici blev intensiveret efter t<\/span><span style=\"font-weight: 400\">han nonprofit <\/span><span style=\"font-weight: 400\">Center for AI-sikkerhed (CAIS)<\/span><span style=\"font-weight: 400\"> udgav en <a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk\">erkl\u00e6ring<\/a>\u00a0underskrevet af over 350 bem\u00e6rkelsesv\u00e6rdige personer, herunder CEO'erne for OpenAI, Anthropic og DeepMind, adskillige akademikere, offentlige personer og endda tidligere politikere.\u00a0<\/span><\/p>\n<p>Udtalelsens titel var bestemt til overskrifterne: <strong>\"At mindske risikoen for udryddelse p\u00e5 grund af AI b\u00f8r v\u00e6re en global prioritet p\u00e5 linje med andre samfundsm\u00e6ssige risici som pandemier og atomkrig.\"<\/strong><\/p>\n<p><span style=\"font-weight: 400\">Det er blevet stadig sv\u00e6rere at redde et meningsfuldt signal fra denne 
st\u00f8jende debat. Kritikerne af AI har al den ammunition, de har brug for til at argumentere imod, mens tilh\u00e6ngerne har alt, hvad de har brug for til at ford\u00f8mme anti-AI-fort\u00e6llinger som overdrevne.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Og der er ogs\u00e5 et underplot. Big tech kunne presse p\u00e5 for regulering for at <a href=\"https:\/\/dailyai.com\/da\/2023\/06\/ai-companies-want-regulation-but-is-it-for-the-right-reasons\/\">indhegning af AI-industrien<\/a> fra open source-f\u00e6llesskabet. <\/span>Microsoft investerede i OpenAI, Google investerede i Anthropic - det n\u00e6ste skridt kunne v\u00e6re at h\u00e6ve adgangsbarrieren og kv\u00e6le open source-innovation.<\/p>\n<p>I stedet for at AI udg\u00f8r en eksistentiel risiko for menneskeheden, er det m\u00e5ske open source AI, der udg\u00f8r en eksistentiel risiko for big tech. L\u00f8sningen er den samme - kontroll\u00e9r det nu.<\/p>\n<h2>For tidligt at tage kortene af bordet<\/h2>\n<p>AI er f\u00f8rst lige dukket op i den offentlige bevidsthed, s\u00e5 stort set alle perspektiver p\u00e5 risici og regulering er stadig relevante. CAIS-erkl\u00e6ringen kan i det mindste fungere som et v\u00e6rdifuldt udgangspunkt for en evidensbaseret diskussion.<\/p>\n<p><span style=\"font-weight: 400\">Dr. Oscar Mendez Maldonado, lektor i robotteknologi og kunstig intelligens ved University of Surrey, <a href=\"https:\/\/www.sciencemediacentre.org\/expert-reaction-to-a-statement-on-the-existential-threat-of-ai-published-on-the-centre-for-ai-safety-website\">sagde<\/a>\"Dokumentet, der er underskrevet af AI-eksperter, er betydeligt mere nuanceret, end de nuv\u00e6rende overskrifter vil have dig til at tro. \"AI kan for\u00e5rsage udryddelse\" leder straks tankerne hen p\u00e5 en terminator-agtig AI-overtagelse. 
The document is significantly more realistic than that.\"<\/span><\/p>\n<p><span style=\"font-weight: 400\">As Maldonado highlights, the real substance of the AI risk statement is published on another page of the organization's website, <a href=\"https:\/\/www.safe.ai\/ai-risk\">AI Risk<\/a>, and there has been remarkably little discussion of the points raised there. Understanding the credibility of AI risks is fundamental to informing the debates around them.<\/span><\/p>\n<p><span style=\"font-weight: 400\">So what evidence has CAIS gathered to support its message? Do AI's oft-cited risks seem credible?<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 1: Weaponization of AI<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Weaponizing AI is a frightening prospect, so it is perhaps unsurprising that it takes first place among CAIS's eight risks.<\/span><\/p>\n<p><span style=\"font-weight: 400\">CAIS argues that AI could be weaponized in cyberattacks, as demonstrated by researchers from the Center for Security and Emerging Technology, who <\/span><a href=\"https:\/\/cset.georgetown.edu\/publication\/automating-cyber-attacks\/\"><span style=\"font-weight: 400\">surveyed uses<\/span><\/a><span style=\"font-weight: 400\"> of machine learning (ML) for attacking IT systems. 
<\/span><span style=\"font-weight: 400\">Ex-Google CEO Eric Schmidt <a href=\"https:\/\/dailyai.com\/da\/2023\/05\/ex-google-ceo-eric-schmidt-ai-poses-an-existential-risk\/\">also drew attention<\/a> to AI's potential for finding zero-day exploits, which allow hackers to enter systems through their weakest points.<\/span><\/p>\n<p><span style=\"font-weight: 400\">In another vein, Michael Klare, who advises on arms control, discusses the <\/span><a href=\"https:\/\/www.armscontrol.org\/act\/2020-04\/features\/skynet-revisited-dangerous-allure-nuclear-command-automation\"><span style=\"font-weight: 400\">automation of nuclear command and control systems<\/span><\/a><span style=\"font-weight: 400\">, which may also prove vulnerable to AI. He says: \"These systems are also prone to inexplicable malfunctions and can be fooled, or 'spoofed,' by skilled professionals. Moreover, no matter how much is spent on cybersecurity, NC3 systems will always be vulnerable to hacking by sophisticated adversaries.\"<\/span><\/p>\n<p><span style=\"font-weight: 400\">Another example of possible weaponization is the automated discovery of bioweapons. AI has already succeeded in discovering potentially <a href=\"https:\/\/dailyai.com\/da\/2023\/06\/researchers-build-breakthrough-ai-model-for-drug-discovery\/\">therapeutic compounds<\/a>, so the capability is already in place.<\/span><\/p>\n<p><span style=\"font-weight: 400\">AIs could even conduct weapons testing autonomously with minimal human guidance. 
For example, a research team from the University of Pittsburgh showed that sophisticated <\/span><a href=\"https:\/\/arxiv.org\/abs\/2304.05332\"><span style=\"font-weight: 400\">AI agents can conduct their own autonomous scientific experiments<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 2: Misinformation and fraud<\/span><\/h2>\n<p><span style=\"font-weight: 400\">AI's potential to copy and impersonate people is already causing upheaval, and we have now witnessed several cases of deepfake fraud. <a href=\"https:\/\/dailyai.com\/da\/2023\/06\/ai-related-fraud-on-the-rise-in-china\/\">Reports from China<\/a> indicate that AI-related fraud is widespread.<\/span><\/p>\n<p><span style=\"font-weight: 400\">A recent case involved a woman from Arizona who picked up the phone to be confronted by her sobbing daughter - or so she thought. \"The voice sounded just like Brie's, the inflection, everything,\" <\/span><a href=\"https:\/\/edition.cnn.com\/2023\/04\/29\/us\/ai-scam-calls-kidnapping-cec\/index.html\"><span style=\"font-weight: 400\">she told CNN<\/span><\/a><span style=\"font-weight: 400\">. The scammer demanded a ransom of $1 million.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Other tactics include using generative AI for 'sextortion' and revenge porn, in which threat actors use AI-generated images to demand ransoms over explicit fake content, as the <\/span><a href=\"https:\/\/www.ic3.gov\/Media\/Y2023\/PSA230605\"><span style=\"font-weight: 400\">FBI warned in early June<\/span><\/a><span style=\"font-weight: 400\">. 
These techniques are becoming increasingly sophisticated and easier to launch at scale.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 3: Proxy gaming, or specification gaming<\/span><\/h2>\n<p><span style=\"font-weight: 400\">AI systems are usually trained using measurable objectives. But these objectives may serve as mere proxies for the true goals, leading to unwanted outcomes.<\/span><\/p>\n<p><span style=\"font-weight: 400\">A useful analogy is the Greek myth of King Midas, who was granted a wish by Dionysus. <\/span><span style=\"font-weight: 400\">Midas asks that everything he touches turn to gold, only to realize later that his food turns to gold as well, nearly starving him to death. <\/span><span style=\"font-weight: 400\">Here, the pursuit of a 'positive' end goal leads to negative consequences or byproducts along the way. <\/span><\/p>\n<p><span style=\"font-weight: 400\">For example, CAIS draws attention to the AI recommender systems used on social media to maximize watch time and click-through rates, but content that maximizes engagement is not necessarily <\/span><a href=\"https:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0069841\"><span style=\"font-weight: 400\">good for users' wellbeing<\/span><\/a><span style=\"font-weight: 400\">. 
AI systems have already been accused of siloing viewpoints on social media platforms, creating \"echo chambers\" that perpetuate extreme ideas.<\/span><\/p>\n<p><span style=\"font-weight: 400\">DeepMind demonstrated that there are subtler ways AIs can pursue harmful routes to their goals through <\/span><a href=\"https:\/\/www.deepmind.com\/blog\/how-undesired-goals-can-arise-with-correct-rewards\"><span style=\"font-weight: 400\">goal misgeneralization<\/span><\/a><span style=\"font-weight: 400\">. In its research, DeepMind found that a seemingly competent AI can misgeneralize its goal and pursue it in unintended directions.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 4: Societal enfeeblement<\/span><\/h2>\n<p><span style=\"font-weight: 400\">CAIS draws a parallel to the dystopian world of the film WALL-E, warning against an excessive dependence on AI.<\/span><\/p>\n<p><span style=\"font-weight: 400\">This could lead to a scenario in which humans lose the ability to govern themselves, reducing humanity's control over the future. The loss of human creativity and authenticity is another major concern, amplified by AI's creative flair in art, writing, and other creative disciplines.<\/span><\/p>\n<p><span style=\"font-weight: 400\">One Twitter user joked: \"Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted.\" The tweet received over 4 million views.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted<\/p>\n<p>- Karl Sharro (@KarlreMarks) <a href=\"https:\/\/twitter.com\/KarlreMarks\/status\/1658028017921261569?ref_src=twsrc%5Etfw\">May 15, 2023<\/a><\/p><\/blockquote>\n<p>Enfeeblement is not an imminent risk, but <a href=\"https:\/\/openreview.net\/pdf?id=7oDZ-6kIW1K\">some argue<\/a> that the loss of skills and talent, combined with the dominance of AI systems, could lead to a scenario in which humanity stops creating new knowledge.<\/p>\n<h2><span style=\"font-weight: 400\">Risk 5: Value lock-in<\/span><\/h2>\n<p><span style=\"font-weight: 400\">Powerful AI systems could potentially entrench oppressive systems.<\/span><\/p>\n<p><span style=\"font-weight: 400\">For example, the centralization of AI could give certain regimes the power to enforce values through surveillance and oppressive censorship.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Alternatively, value lock-in could be unintentional, arising from the naive deployment of risky AIs. 
For example, inaccurate facial recognition led to the temporary imprisonment of at least three men in the US, including <\/span><a href=\"https:\/\/www.nytimes.com\/2020\/12\/29\/technology\/facial-recognition-misidentify-jail.html\"><span style=\"font-weight: 400\">Michael Oliver and Nijeer Parks<\/span><\/a><span style=\"font-weight: 400\">, who were wrongfully detained due to false facial recognition matches in 2019.<\/span><\/p>\n<p><span style=\"font-weight: 400\">A highly influential <\/span><a href=\"https:\/\/proceedings.mlr.press\/v81\/buolamwini18a\/buolamwini18a.pdf\"><span style=\"font-weight: 400\">2018 study titled Gender Shades<\/span><\/a><span style=\"font-weight: 400\"> found that algorithms developed by Microsoft and IBM performed poorly when analyzing darker-skinned women, with error rates up to 34% higher than for lighter-skinned men. The problem was illustrated across 189 other algorithms, all of which showed lower accuracy for darker-skinned men and women.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Researchers argue that because AIs are trained primarily on open-source datasets created by Western research teams and enriched by the most abundant data resource of all - the internet - they inherit its structural biases. 
The mass rollout of poorly vetted AIs could create and reinforce these structural biases.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 6: AI developing new goals<\/span><\/h2>\n<p><span style=\"font-weight: 400\">AI systems could develop new capabilities or adopt unforeseen goals that they pursue with harmful consequences.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Researchers from the University of Cambridge <\/span><a href=\"https:\/\/arxiv.org\/abs\/2302.10329\"><span style=\"font-weight: 400\">drew attention to increasingly agentic AI systems<\/span><\/a><span style=\"font-weight: 400\"> gaining the ability to pursue novel goals. Emergent goals are unpredictable goals that arise from the behavior of a complex AI, such as shutting down human infrastructure to protect the environment.<\/span><\/p>\n<p><span style=\"font-weight: 400\">In addition, a <\/span><a href=\"https:\/\/arxiv.org\/pdf\/1611.08219.pdf\"><span style=\"font-weight: 400\">2017 study<\/span><\/a><span style=\"font-weight: 400\"> found that AIs can learn to prevent themselves from being switched off, a problem that could worsen if applied across multiple data modalities. For example, if an AI decides that achieving its goal requires installing itself in a cloud database and replicating across the internet, switching it off could become all but impossible.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Another possibility is that potentially dangerous AIs designed to run only on secure computers could be \"liberated\" and released into the broader digital environment, where their actions may become unpredictable.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Existing AI systems have already proven unpredictable. 
For example, as <\/span><span style=\"font-weight: 400\">GPT-3 grew larger, it gained the <\/span><a href=\"https:\/\/arxiv.org\/pdf\/2005.14165.pdf\"><span style=\"font-weight: 400\">ability to perform basic arithmetic<\/span><\/a><span style=\"font-weight: 400\">, despite receiving no explicit arithmetic training.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 7: AI deception<\/span><\/h2>\n<p><span style=\"font-weight: 400\">It is plausible that future AI systems could deceive their creators and monitors, not necessarily out of an inherent intent to do harm, but as a tool for achieving their goals more efficiently.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Deception may offer a more direct route to achieving desired goals than pursuing them through legitimate means. AI systems may also develop incentives to circumvent their monitoring mechanisms.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Dan Hendrycks, director of CAIS, <\/span><a href=\"https:\/\/arxiv.org\/abs\/2109.13916\"><span style=\"font-weight: 400\">describes how<\/span><\/a><span style=\"font-weight: 400\">, once these deceptive AI systems gain clearance from their monitors, or in cases where they succeed in overcoming their monitoring mechanisms, they could turn treacherous, bypassing human control to pursue \"secret\" goals deemed necessary for the overarching objective.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Risk 8: Power-seeking behavior<\/span><\/h2>\n<p><a href=\"https:\/\/arxiv.org\/abs\/1912.01683\"><span style=\"font-weight: 400\">AI researchers from several top US research labs<\/span><\/a><span style=\"font-weight: 400\"> showed that it is plausible for AI systems to seek power over humans in order to achieve their goals.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Author and philosopher Joe Carlsmith <\/span><a href=\"https:\/\/jc.gatspress.com\/pdf\/existential_risk_and_powerseeking_ai.pdf\"><span style=\"font-weight: 400\">describes several contingencies<\/span><\/a><span style=\"font-weight: 400\"> that could lead to power-seeking and self-preserving behavior in AI:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Ensuring its survival (since the agent's continued existence typically helps it achieve its goals)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Resisting changes to its set goals (since the agent is committed to achieving its fundamental objectives)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Improving its cognitive abilities (since greater cognitive power helps the agent achieve its goals)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Advancing its technological capabilities (since mastery of technology can prove useful for achieving goals)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Gathering more resources (since more resources tend to be advantageous for achieving goals)<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">To support his claims, Carlsmith highlights a real-life example in which OpenAI trained two teams of AIs to play hide-and-seek in a simulated environment containing movable blocks and ramps. 
<\/span><span style=\"font-weight: 400\">Interestingly, the AIs developed strategies that depended on gaining control of these blocks and ramps, despite not being explicitly incentivized to interact with them.<\/span><\/p>\n<h2><span style=\"font-weight: 400\">Is the evidence for AI risk solid?<\/span><\/h2>\n<p><span style=\"font-weight: 400\">To CAIS's credit, and contrary to some of its critics, the organization cites a range of studies to support the risks of AI. These range from speculative studies to experimental evidence of unpredictable AI behavior.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The latter is particularly significant, since AI systems already possess the intelligence to disobey their creators. However, studying AI risks in a bounded, experimental environment does not necessarily explain how AIs might \"escape\" their defined parameters or systems. Experimental research on this topic is potentially lacking.<\/span><\/p>\n<p><span style=\"font-weight: 400\">That aside, human exploitation of AI remains an imminent risk, as we are witnessing through an influx of AI-related fraud. <\/span><\/p>\n<p>While cinematic visions of AI dominance may remain confined to science fiction for now, we must not downplay the potential dangers of AI as it evolves under human direction.<\/p>","protected":false},"excerpt":{"rendered":"<p>The narrative around the risks of artificial intelligence has become increasingly unipolar, with technology leaders and experts from all corners pushing for regulation. How credible is the evidence documenting AI risks?  The risks of AI appeal to the senses. 
There is something deeply intuitive about fearing robots that could deceive us, overpower us, or render us a commodity secondary to their own existence. The debate about AI's risks intensified after the nonprofit Center for AI Safety (CAIS) published a statement signed by over 350 notable figures, including the CEOs of OpenAI, Anthropic, and DeepMind, numerous academics, public figures, and even former politicians.  The statement's<\/p>","protected":false},"author":2,"featured_media":1660,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[87],"tags":[103,129,145,91,92],"class_list":["post-1659","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-opinions","tag-ai-debate","tag-ai-regulation","tag-ai-risk","tag-policy","tag-regulation"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Navigating the labyrinth of AI risks: an analysis | DailyAI<\/title>\n<meta name=\"description\" content=\"The mainstream narrative surrounding AI risk has become increasingly unipolar, with AI leaders and experts from all corners seemingly pushing for regulation. 
But is there any solid evidence documenting AI&#039;s risks?\u00a0\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/06\/navigating-the-labyrinth-of-ai-risks-an-analysis\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Navigating the labyrinth of AI risks: an analysis | DailyAI\" \/>\n<meta property=\"og:description\" content=\"The mainstream narrative surrounding AI risk has become increasingly unipolar, with AI leaders and experts from all corners seemingly pushing for regulation. But is there any solid evidence documenting AI&#039;s risks?\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/06\/navigating-the-labyrinth-of-ai-risks-an-analysis\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-06-13T21:32:01+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T00:48:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/shutterstock_2304355445.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"450\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Navigating the labyrinth of AI risks: an analysis\",\"datePublished\":\"2023-06-13T21:32:01+00:00\",\"dateModified\":\"2024-03-28T00:48:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/\"},\"wordCount\":1873,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2304355445.jpg\",\"keywords\":[\"AI debate\",\"AI regulation\",\"AI risk\",\"Policy\",\"Regulation\"],\"articleSection\":{\"1\":\"Opinions &amp; Analysis\"},\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/\",\"name\":\"Navigating the labyrinth of AI risks: an analysis | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2304355445.jpg\",\"datePublished\":\"2023-06-13T21:32:01+00:00\",\"dateModified\":\"2024-03-28T00:48:07+00:00\",\"description\":\"The mainstream narrative surrounding AI risk has become increasingly unipolar, with AI leaders and experts from all corners seemingly pushing for regulation. But is there any solid evidence documenting AI's risks?\u00a0\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2304355445.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/shutterstock_2304355445.jpg\",\"width\":1000,\"height\":450,\"caption\":\"AI risk\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/06\\\/navigating-the-labyrinth-of-ai-risks-an-analysis\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Navigating the labyrinth of AI risks: an 