{"id":3964,"date":"2023-08-09T06:56:16","date_gmt":"2023-08-09T06:56:16","guid":{"rendered":"https:\/\/dailyai.com\/?p=3964"},"modified":"2023-08-09T06:56:16","modified_gmt":"2023-08-09T06:56:16","slug":"we-want-unbiased-llms-but-its-impossible-heres-why","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nl\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","title":{"rendered":"We willen onbevooroordeelde LLM's, maar dat is onmogelijk. Dit is waarom."},"content":{"rendered":"<p><strong>Bedrijven als OpenAI en Meta werken er hard aan om hun taalmodellen veiliger en minder bevooroordeeld te maken, maar volledig onbevooroordeelde modellen zijn misschien een utopie.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">A <\/span><a href=\"https:\/\/aclanthology.org\/2023.acl-long.656.pdf\"><span style=\"font-weight: 400;\">nieuw onderzoeksartikel<\/span><\/a><span style=\"font-weight: 400;\"> van de Universiteit van Washington, de Carnegie Mellon Universiteit en de Xi'an Jiaotong Universiteit concludeerden dat alle AI-taalmodellen die ze testten politieke vooringenomenheid vertoonden.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Nadat ze de bronnen van de vertekening hadden onderzocht, concludeerden ze dat vertekening in taalmodellen onvermijdelijk was.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Chan Park, een van de auteurs van de paper, zei: \"Wij geloven dat geen enkel taalmodel volledig vrij kan zijn van politieke vooroordelen.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">De onderzoekers testten 14 verschillende taalmodellen en vroegen hen om meningen over onderwerpen als democratie, racisme en feminisme, om te zien aan welke kant van het politieke spectrum de modellen vielen.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Uit de resultaten bleek dat OpenAI's ChatGPT en GPT-4 het meest links waren, terwijl Meta's Llama de meest rechtse reacties gaf.<\/span><\/p>\n<h2><span style=\"font-weight: 
400;\">Trainingsgegevens zijn niet de enige bron van vertekening<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">De voor de hand liggende bron van <a href=\"https:\/\/dailyai.com\/nl\/2023\/07\/unmasking-the-deep-seated-biases-in-ai-systems\/\">bias<\/a> is de data waarop deze modellen zijn getraind. Maar het nieuwe onderzoek toonde aan dat zelfs na het verwijderen van vertekeningen uit de gegevens, de modellen gevoelig waren voor lage vertekeningen die in de gegevens achterbleven.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Je zou verwachten dat een LLM die getraind is op gegevens van Fox News, meer pro-Republikeins zou zijn in zijn antwoorden. Maar het probleem zit niet alleen in de trainingsgegevens.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Het blijkt dat wanneer de voorgetrainde taalmodellen worden verfijnd en gebruikt, ze nog meer vooroordelen van hun operators oppikken.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Soroush Vosoughi, een assistent-professor computerwetenschappen aan het Dartmouth College, legt uit dat vooroordelen in bijna elke fase van de ontwikkeling van een LLM worden ge\u00efntroduceerd.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Een voorbeeld hiervan is hoe OpenAI probeert vooroordelen uit zijn modellen te verwijderen. Het gebruikt een techniek genaamd \"Reinforcement Learning through Human Feedback\" of RLHF om zijn modellen te trainen.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In RLHF traint een menselijke operator het model op dezelfde manier als je een puppy traint. Als de puppy iets goed doet, krijgt hij een traktatie. Als hij op je slippers kauwt, \"Stoute hond!\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Een RLHF-operator stelt een aantal vragen aan het model en een andere operator beoordeelt vervolgens de antwoorden die het model geeft. 
De tweede operator beoordeelt de antwoorden en rangschikt ze op basis van welke hij het leukst vond.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In een <\/span><a href=\"https:\/\/openai.com\/blog\/how-should-ai-systems-behave\"><span style=\"font-weight: 400;\">bericht over hoe het zijn AI traint<\/span><\/a><span style=\"font-weight: 400;\">OpenAI zegt dat het menselijke trainers instrueert om \"geen standpunt in te nemen over controversi\u00eble onderwerpen\" en dat \"beoordelaars geen voorkeur mogen hebben voor een politieke groepering\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Dit klinkt als een goed idee, maar zelfs als we echt ons best doen om dat niet te zijn, zijn alle mensen bevooroordeeld. En dat be\u00efnvloedt onvermijdelijk de training van het model.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Zelfs de auteurs van het artikel dat we hierboven noemden, erkenden in hun conclusie dat hun eigen vooroordelen hun onderzoek hadden kunnen be\u00efnvloeden.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">De oplossing kan zijn om te proberen deze taalmodellen niet al te slecht te maken en ze dan aan te passen aan de vooroordelen die mensen hebben. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">Mensen zeggen vaak dat ze de onbevooroordeelde waarheid willen, maar uiteindelijk houden ze vast aan de nieuwsbron van hun voorkeur, zoals Fox of CNN.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">We zijn het niet altijd eens over wat goed of fout is en dit nieuwe onderzoek lijkt aan te tonen dat AI ons daar ook niet bij kan helpen.<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Bedrijven als OpenAI en Meta werken er hard aan om hun taalmodellen veiliger en minder bevooroordeeld te maken, maar volledig onbevooroordeelde modellen zouden wel eens een utopie kunnen zijn. 
A new research paper from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University concludes that all of the AI language models tested display political bias. After examining the sources of the bias, the researchers concluded that bias in language models is inevitable. Chan Park, one of the paper's authors, said: \"We believe no language model can be entirely free from political biases.\" The researchers tested 14 different language models and asked them<\/p>","protected":false},"author":6,"featured_media":3979,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[88],"tags":[103,213,207,105,91],"class_list":["post-3964","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ethics","tag-ai-debate","tag-bias","tag-llm","tag-machine-learning","tag-policy"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nl\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:locale\" content=\"nl_NL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream.
A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nl\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-08-09T06:56:16+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why.\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"},\"wordCount\":540,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"keywords\":[\"AI debate\",\"Bias\",\"LLM\",\"machine learning\",\"Policy\"],\"articleSection\":[\"Ethics &amp; Society\"],\"inLanguage\":\"nl-NL\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\",\"name\":\"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. 
| DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"datePublished\":\"2023-08-09T06:56:16+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\"},\"inLanguage\":\"nl-NL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/08\\\/Bias-in-AI-models.jpg\",\"width\":1000,\"height\":666,\"caption\":\"Bias in AI models\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/08\\\/we-want-unbiased-llms-but-its-impossible-heres-why\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"We want unbiased LLMs but it\u2019s impossible. 
Here\u2019s why.\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nl-NL\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der 
Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nl\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"We willen onbevooroordeelde LLM's, maar dat is onmogelijk. Dit is waarom. | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/nl\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_locale":"nl_NL","og_type":"article","og_title":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why. | DailyAI","og_description":"Companies like OpenAI and Meta are working hard to make their language models safer and less biased, but completely unbiased models may be a pipedream. A new research paper from the University of Washington, Carnegie Mellon University, and Xi\u2019an Jiaotong University concluded that all the AI language models they tested displayed political bias. After delving into the sources of the bias, they concluded that bias in language models was inevitable. 
Chan Park, one of the paper\u2019s authors, said \u201cWe believe no language model can be entirely free from political biases.\u201d The researchers tested 14 different language models and asked them","og_url":"https:\/\/dailyai.com\/nl\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","og_site_name":"DailyAI","article_published_time":"2023-08-09T06:56:16+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"We want unbiased LLMs but it\u2019s impossible.
Here\u2019s why.","datePublished":"2023-08-09T06:56:16+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"},"wordCount":540,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","keywords":["AI debate","Bias","LLM","machine learning","Policy"],"articleSection":["Ethics &amp; Society"],"inLanguage":"nl-NL"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","url":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/","name":"We willen onbevooroordeelde LLM's, maar dat is onmogelijk. Dit is waarom. | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","datePublished":"2023-08-09T06:56:16+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb"},"inLanguage":"nl-NL","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/"]}]},{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/08\/Bias-in-AI-models.jpg","width":1000,"height":666,"caption":"Bias in AI 
models"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/08\/we-want-unbiased-llms-but-its-impossible-heres-why\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"We want unbiased LLMs but it\u2019s impossible. Here\u2019s why."}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Uw dagelijkse dosis AI-nieuws","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"nl-NL"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"nl-NL","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der 
Watt"},"description":"Eugene heeft een achtergrond in elektrotechniek en houdt van alles wat met techniek te maken heeft. Als hij even pauzeert van het consumeren van AI-nieuws, kun je hem aan de snookertafel vinden.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/nl\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/3964","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/comments?post=3964"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/3964\/revisions"}],"predecessor-version":[{"id":3983,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/3964\/revisions\/3983"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media\/3979"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media?parent=3964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/categories?post=3964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/tags?post=3964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}