{"id":11227,"date":"2024-04-03T10:42:20","date_gmt":"2024-04-03T10:42:20","guid":{"rendered":"https:\/\/dailyai.com\/?p=11227"},"modified":"2024-04-03T10:42:20","modified_gmt":"2024-04-03T10:42:20","slug":"apples-realm-sees-on-screen-visuals-better-than-gpt-4","status":"publish","type":"post","link":"https:\/\/dailyai.com\/nl\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","title":{"rendered":"Apple's ReALM 'sees' on-screen visuals better than GPT-4"},"content":{"rendered":"<p><strong>Apple engineers have developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants.<\/strong><\/p>\n<p>Humans are good at resolving references in conversations with each other. When we use terms like \"the bottom one\" or \"him\", we understand what the person is referring to based on the context of the conversation and things we can see.<\/p>\n<p>It's a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images, but they are expensive to train and require a lot of compute overhead to process each question about an image.<\/p>\n<p>Apple's engineers took a different approach with their system, called ReALM (Reference Resolution As Language Modeling). 
<a href=\"https:\/\/arxiv.org\/pdf\/2403.20329.pdf\" target=\"_blank\" rel=\"noopener\">The paper<\/a> is worth reading for more detail on their development and testing process.<\/p>\n<p>ReALM uses an LLM to process conversational, on-screen, and background entities (alarms, background music) that form part of a user's interactions with a virtual AI agent.<\/p>\n<p>Here's an example of the kind of interaction a user could have with an AI agent.<\/p>\n<figure id=\"attachment_11231\" aria-describedby=\"caption-attachment-11231\" style=\"width: 746px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11231\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png\" alt=\"\" width=\"746\" height=\"298\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png 746w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-300x120.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-60x24.png 60w\" sizes=\"auto, (max-width: 746px) 100vw, 746px\" \/><figcaption id=\"caption-attachment-11231\" class=\"wp-caption-text\">Examples of a user's interactions with a virtual assistant. Source: arXiv<\/figcaption><\/figure>\n<p>The agent needs to understand conversational entities, like the fact that when the user says \"the one\", they are referring to the pharmacy's phone number.<\/p>\n<p>It also needs to understand the visual context when the user says \"the bottom one\", and this is where ReALM's approach differs from that of models like GPT-4.<\/p>\n<p>ReALM relies on upstream encoders to first parse the on-screen elements and their positions. 
ReALM then reconstructs the screen as a purely textual representation, from left to right and top to bottom.<\/p>\n<p>Put simply, it uses natural language to summarize the user's screen.<\/p>\n<p>Now, when a user asks a question about something on the screen, the language model processes the text description of the screen instead of needing a vision model to process the on-screen image.<\/p>\n<p>The researchers created synthetic datasets of conversational, on-screen, and background entities and tested ReALM against other models to gauge their effectiveness at resolving references in conversational systems.<\/p>\n<p>The smaller version of ReALM (80M parameters) performed comparably to GPT-4, and the larger version (3B parameters) substantially outperformed GPT-4.<\/p>\n<p>ReALM is a tiny model compared to GPT-4. Its superior reference resolution makes it an ideal candidate for a virtual assistant that could live on-device without compromising performance.<\/p>\n<p>ReALM doesn't perform as well with more complex images or nuanced user requests, but it could work well as an in-car or on-device virtual assistant. Imagine if Siri could \"see\" your iPhone's screen and respond to references to on-screen elements.<\/p>\n<p>Apple has been a little slow out of the blocks, but recent developments like its <a href=\"https:\/\/dailyai.com\/nl\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\">MM1 model<\/a> and ReALM show that a lot is happening behind closed doors.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple engineers have developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. 
Humans are good at resolving references in conversations with each other. When we use terms like \"the bottom one\" or \"him\", we understand what the person is referring to based on the context of the conversation and things we can see. It's a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a<\/p>","protected":false},"author":6,"featured_media":11232,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166,118],"class_list":["post-11227","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/nl\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:locale\" content=\"nl_NL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. 
When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. It\u2019s a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/nl\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-03T10:42:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"wordCount\":486,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"keywords\":[\"Apple\",\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"nl-NL\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\"},\"inLanguage\":\"nl-NL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"nl-NL\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"nl-NL\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/nl\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/11227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/comments?post=11227"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/11227\/revisions"}],"predecessor-version":[{"id":11234,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/posts\/11227\/revisions\/11234"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media\/11232"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/media?parent=11227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/categories?post=11227"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/nl\/wp-json\/wp\/v2\/tags?post=11227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}