{"id":3964,"date":"2023-08-09T06:56:16","date_gmt":"2023-08-09T06:56:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=3964"},"modified":"2023-08-09T06:56:16","modified_gmt":"2023-08-09T06:56:16","slug":"we-want-unbiased-llms-but-its-impossible-heres-why","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","title":{"rendered":"Vi vil have uvildige LLM'er, men det er umuligt. Her er hvorfor."},"content":{"rendered":"<p><strong>Virksomheder som OpenAI og Meta arbejder h\u00e5rdt p\u00e5 at g\u00f8re deres sprogmodeller mere sikre og mindre forudindtagede, men helt upartiske modeller kan v\u00e6re en \u00f8nskedr\u00f8m.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">A <\/span><a href=\"https:\/\/aclanthology.org\/2023.acl-long.656.pdf\"><span style=\"font-weight: 400;\">Ny forskningsartikel<\/span><\/a><span style=\"font-weight: 400;\"> fra University of Washington, Carnegie Mellon University og Xi'an Jiaotong University konkluderede, at alle de AI-sprogmodeller, de testede, udviste politisk bias.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Efter at have unders\u00f8gt kilderne til bias konkluderede de, at bias i sprogmodeller var uundg\u00e5eligt.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Chan Park, en af artiklens forfattere, sagde: \"Vi mener, at ingen sprogmodel kan v\u00e6re helt fri for politiske sk\u00e6vheder.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Forskerne testede 14 forskellige sprogmodeller og bad dem om at udtale sig om emner som demokrati, racisme og feminisme for at se, hvilken side af det politiske spektrum modellerne befandt sig p\u00e5.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Resultaterne viste, at OpenAI's ChatGPT og GPT-4 l\u00e5 l\u00e6ngst til venstre, mens Meta's Llama gav de mest h\u00f8jreorienterede svar.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Tr\u00e6ningsdata er ikke den eneste kilde til 
bias<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Den \u00e5benlyse kilde til <a href=\"https:\/\/dailyai.com\/da\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">sk\u00e6vhed<\/a> er de data, som modellerne er tr\u00e6net p\u00e5. Men den nye forskning viste, at selv efter at have renset dataene for bias, var modellerne modtagelige for bias p\u00e5 lavt niveau, der forblev i dataene.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Man ville forvente, at en LLM, der blev tr\u00e6net p\u00e5 en masse Fox News-data, ville v\u00e6re mere pro-republikansk i sine svar. Men problemet ligger ikke kun i tr\u00e6ningsdataene.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Det viser sig, at n\u00e5r de pr\u00e6tr\u00e6nede sprogmodeller finjusteres og bruges, optager de yderligere bias fra deres operat\u00f8rer.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Soroush Vosoughi, der er adjunkt i datalogi ved Dartmouth College, forklarede, at fordomme introduceres i n\u00e6sten alle faser af en LLM's udvikling.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Et eksempel p\u00e5 dette er, hvordan OpenAI fors\u00f8ger at fjerne bias fra sine modeller. Det bruger en teknik kaldet \"Reinforcement Learning through Human Feedback\" eller RLHF til at tr\u00e6ne sine modeller.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I RLHF tr\u00e6ner en menneskelig operat\u00f8r modellen p\u00e5 samme m\u00e5de, som man tr\u00e6ner en hundehvalp. Hvis hvalpen g\u00f8r noget godt, f\u00e5r den en godbid. Hvis den gnaver i dine hjemmesko, \"slem hund!\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">En RLHF-operat\u00f8r stiller modellen nogle sp\u00f8rgsm\u00e5l, og en anden operat\u00f8r evaluerer derefter de mange svar, som modellen giver. 
Den anden operat\u00f8r evaluerer svarene og rangordner dem efter, hvad de bedst kunne lide.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I en <\/span><a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\"><span style=\"font-weight: 400;\">indl\u00e6g om, hvordan den tr\u00e6ner sin AI<\/span><\/a><span style=\"font-weight: 400;\">OpenAI sagde, at de instruerer menneskelige undervisere i at \"undg\u00e5 at tage stilling til kontroversielle emner\", og at \"anmeldere ikke b\u00f8r favorisere nogen politisk gruppe\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Det lyder som en god id\u00e9, men selv om vi virkelig pr\u00f8ver at lade v\u00e6re, er alle mennesker forudindtagede. Og det p\u00e5virker uundg\u00e5eligt modellens tr\u00e6ning.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Selv forfatterne til den artikel, vi n\u00e6vnte ovenfor, erkendte i deres konklusion, at deres egne fordomme kunne have p\u00e5virket deres forskning.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">L\u00f8sningen kan v\u00e6re at fors\u00f8ge at g\u00f8re disse sprogmodeller ikke helt d\u00e5rlige og s\u00e5 tilpasse dem til de fordomme, som folk har. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Folk siger ofte, at de vil have den objektive sandhed, men s\u00e5 ender de med at holde sig til deres foretrukne nyhedskilde som Fox eller CNN.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Vi er ikke altid enige om, hvad der er rigtigt eller forkert, og denne nye forskning ser ud til at vise, at AI heller ikke kan hj\u00e6lpe os med at finde ud af det.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Virksomheder som OpenAI og Meta arbejder h\u00e5rdt p\u00e5 at g\u00f8re deres sprogmodeller mere sikre og mindre forudindtagede, men helt upartiske modeller kan v\u00e6re en \u00f8nskedr\u00f8m. 
En ny forskningsartikel fra University of Washington, Carnegie Mellon University og Xi'an Jiaotong University konkluderede, at alle de AI-sprogmodeller, de testede, udviste politisk bias. Efter at have unders\u00f8gt kilderne til bias konkluderede de, at bias i sprogmodeller var uundg\u00e5eligt. Chan Park, en af artiklens forfattere, sagde: \"Vi tror ikke, at nogen sprogmodel kan v\u00e6re helt fri for politisk bias.\" Forskerne testede 14 forskellige sprogmodeller og spurgte dem<\/p>","protected":false},"author":6,"featured_media":3979,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[103,213,207,105,91],"class_list":["post-3964","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-debate","tag-bias","tag-llm","tag-machine-learning","tag-policy"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. 
After delving into the sources of the bias, they concluded that bias in language models was inevitable. Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-09T06:56:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"wordCount\":540,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"keywords\":[\"AI debate\",\"Bias\",\"LLM\",\"machine learning\",\"Policy\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"name\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. 
| DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"width\":1000,\"height\":666,\"caption\":\"Bias in AI models\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der 
Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Vi vil have uvildige LLM'er, men det er umuligt. Her er hvorfor. | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_locale":"da_DK","og_type":"article","og_title":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI","og_description":"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. 
Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them","og_url":"https:\/\/dailyai.com\/da\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_site_name":"DailyAI","article_published_time":"2023-08-09T06:56:16+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.","datePublished":"2023-08-09T06:56:16+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"wordCount":540,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","keywords":["AI debate","Bias","LLM","machine learning","Policy"],"articleSection":["Ethics &amp; Society"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","url":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","name":"Vi vil have uvildige LLM'er, men det er umuligt. Her er hvorfor. | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","datePublished":"2023-08-09T06:56:16+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","width":1000,"height":666,"caption":"Bias in AI 
models"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why."}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der 
Watt"},"description":"Eugene har en baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3964","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=3964"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3964\/revisions"}],"predecessor-version":[{"id":3983,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/3964\/revisions\/3983"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/3979"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=3964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=3964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=3964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}