{"id":11227,"date":"2024-04-03T10:42:20","date_gmt":"2024-04-03T10:42:20","guid":{"rendered":"https:\/\/dailyai.com\/?p=11227"},"modified":"2024-04-03T10:42:20","modified_gmt":"2024-04-03T10:42:20","slug":"apples-realm-sees-on-screen-visuals-better-than-gpt-4","status":"publish","type":"post","link":"https:\/\/dailyai.com\/da\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","title":{"rendered":"Apples ReALM 'ser' billeder p\u00e5 sk\u00e6rmen bedre end GPT-4"},"content":{"rendered":"<p><strong>Apples ingeni\u00f8rer har udviklet et AI-system, der l\u00f8ser komplekse referencer til enheder p\u00e5 sk\u00e6rmen og brugersamtaler. Den lette model kan v\u00e6re en ideel l\u00f8sning til virtuelle assistenter p\u00e5 enheden.<\/strong><\/p>\n<p>Mennesker er gode til at l\u00f8se referencer i samtaler med hinanden. N\u00e5r vi bruger udtryk som \"den nederste\" eller \"ham\", forst\u00e5r vi, hvad personen henviser til ud fra samtalens kontekst og de ting, vi kan se.<\/p>\n<p>Det er meget sv\u00e6rere for en AI-model at g\u00f8re det. Multimodale LLM'er som GPT-4 er gode til at besvare sp\u00f8rgsm\u00e5l om billeder, men de er dyre at tr\u00e6ne og kr\u00e6ver et stort computeroverhead for at behandle hver foresp\u00f8rgsel om et billede.<\/p>\n<p>Apples ingeni\u00f8rer valgte en anden tilgang til deres system, som de kaldte ReALM (Reference Resolution As Language Modeling). 
<a href=\"https:\/\/arxiv.org\/pdf\/2403.20329.pdf\" target=\"_blank\" rel=\"noopener\">Avisen<\/a> er v\u00e6rd at l\u00e6se for at f\u00e5 flere detaljer om deres udviklings- og testproces.<\/p>\n<p>ReALM bruger en LLM til at behandle samtale-, sk\u00e6rm- og baggrundsenheder (alarmer, baggrundsmusik), som udg\u00f8r en brugers interaktion med en virtuel AI-agent.<\/p>\n<p>Her er et eksempel p\u00e5 den slags interaktion, en bruger kan have med en AI-agent.<\/p>\n<figure id=\"attachment_11231\" aria-describedby=\"caption-attachment-11231\" style=\"width: 746px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-11231\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png\" alt=\"\" width=\"746\" height=\"298\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions.png 746w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-300x120.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Agent-interactions-60x24.png 60w\" sizes=\"auto, (max-width: 746px) 100vw, 746px\" \/><figcaption id=\"caption-attachment-11231\" class=\"wp-caption-text\">Eksempler p\u00e5 en brugers interaktion med en virtuel assistent. Kilde: arXiv<\/figcaption><\/figure>\n<p>Agenten skal forst\u00e5 samtaleenheder som det faktum, at n\u00e5r brugeren siger \"den ene\", henviser de til telefonnummeret til apoteket.<\/p>\n<p>Den skal ogs\u00e5 forst\u00e5 den visuelle kontekst, n\u00e5r brugeren siger \"den nederste\", og det er her, ReALM's tilgang adskiller sig fra modeller som GPT-4.<\/p>\n<p>ReALM er afh\u00e6ngig af upstream-kodere til f\u00f8rst at analysere elementerne p\u00e5 sk\u00e6rmen og deres positioner. 
ReALM then reconstructs the screen into a purely textual representation, parsed left to right and top to bottom.<\/p>\n<p>Put simply, it uses natural language to summarize the user's screen.<\/p>\n<p>Now, when a user asks a question about something on the screen, the language model processes the text description of the screen instead of needing a vision model to process the on-screen image.<\/p>\n<p>The researchers created synthetic datasets of conversational, on-screen, and background entities and tested ReALM against other models to gauge how effectively they resolve references in conversational systems.<\/p>\n<p>ReALM's smaller version (80M parameters) performed comparably to GPT-4, and its larger version (3B parameters) substantially outperformed GPT-4.<\/p>\n<p>ReALM is a small model compared to GPT-4. Its superior reference resolution makes it an ideal choice for a virtual assistant that can live on-device without compromising performance.<\/p>\n<p>ReALM doesn't perform as well with more complex images or nuanced user requests, but it could work well as an in-car or on-device virtual assistant. Imagine if Siri could \"see\" your iPhone screen and respond to references to on-screen elements.<\/p>\n<p>Apple has been a little slow out of the blocks, but recent developments like its <a href=\"https:\/\/dailyai.com\/da\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\">MM1 model<\/a> and ReALM show that a lot is happening behind closed doors.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. 
Humans are good at resolving references in conversations with each other. When we use terms like \"the bottom one\" or \"him\", we understand what the person is referring to based on the context of the conversation and the things we can see. It's a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a<\/p>","protected":false},"author":6,"featured_media":11232,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166,118],"class_list":["post-11227","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/da\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:locale\" content=\"da_DK\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. 
When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. It\u2019s a lot more difficult for an AI model to do this. Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/da\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-03T10:42:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Skrevet af\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimeret l\u00e6setid\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutter\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"},\"wordCount\":486,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"keywords\":[\"Apple\",\"Computer vision\",\"LLMS\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"da-DK\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\",\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"datePublished\":\"2024-04-03T10:42:20+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\"},\"inLanguage\":\"da-DK\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/Apple-ReALM.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/04\\\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"da-DK\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"da-DK\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/da\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Apples ReALM 'ser' billeder p\u00e5 sk\u00e6rmen bedre end GPT-4 | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/da\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_locale":"da_DK","og_type":"article","og_title":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4 | DailyAI","og_description":"Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants. Humans are good at resolving references in conversations with each other. When we use terms like \u201cthe bottom one\u201d or \u201chim\u201d we understand what the person is referring to based on the context of the conversation and things we can see. It\u2019s a lot more difficult for an AI model to do this. 
Multimodal LLMs like GPT-4 are good at answering questions about images but are expensive to train and require a","og_url":"https:\/\/dailyai.com\/da\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","og_site_name":"DailyAI","article_published_time":"2024-04-03T10:42:20+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Skrevet af":"Eugene van der Watt","Estimeret l\u00e6setid":"3 minutter"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4","datePublished":"2024-04-03T10:42:20+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"},"wordCount":486,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","keywords":["Apple","Computer vision","LLMS"],"articleSection":["Industry"],"inLanguage":"da-DK"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","url":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/","name":"Apples ReALM 'ser' billeder p\u00e5 sk\u00e6rmen bedre end GPT-4 | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","datePublished":"2024-04-03T10:42:20+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb"},"inLanguage":"da-DK","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/"]}]},{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/04\/Apple-ReALM.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/04\/apples-realm-sees-on-screen-visuals-better-than-gpt-4\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Apple\u2019s ReALM \u2018sees\u2019 on-screen visuals better than GPT-4"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Din daglige dosis af 
AI-nyheder","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"da-DK"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"da-DK","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene har en baggrund som elektronikingeni\u00f8r og elsker alt, hvad der har med teknologi at g\u00f8re. 
N\u00e5r han tager en pause fra at l\u00e6se AI-nyheder, kan du finde ham ved snookerbordet.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/da\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/11227","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/comments?post=11227"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/11227\/revisions"}],"predecessor-version":[{"id":11234,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/posts\/11227\/revisions\/11234"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media\/11232"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/media?parent=11227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/categories?post=11227"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/da\/wp-json\/wp\/v2\/tags?post=11227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}