Apple's ReALM "sees" on-screen visuals better than GPT-4
By Eugene van der Watt | DailyAI | April 3, 2024

Apple engineers developed an AI system that resolves complex references to on-screen entities and user conversations. The lightweight model could be an ideal solution for on-device virtual assistants.

Humans are good at resolving references in conversations with each other. When we use terms like "the bottom one" or "him," we understand what the person is referring to based on the context of the conversation and the things we can see.

This is much harder for an AI model. Multimodal LLMs like GPT-4 are good at answering questions about images, but they are expensive to train and require a lot of compute to process each query about an image.

Apple's engineers took a different approach with their system, which they named ReALM (Reference Resolution As Language Modeling). The paper (https://arxiv.org/pdf/2403.20329.pdf) is worth reading to learn more about the development and testing process.

ReALM uses an LLM to process the conversational, on-screen, and background entities (alarms, background music) that make up a user's interaction with a virtual AI agent.

Here is an example of the kind of interaction a user could have with an AI agent.

[Figure: Examples of a user's interactions with a virtual assistant. Source: arXiv]

The agent needs to understand conversational entities, such as the fact that when the user says "that one," they are referring to the pharmacy's phone number.

It also needs to understand visual context when the user says "the bottom one," and this is where ReALM's approach differs from models like GPT-4.

ReALM relies on upstream encoders that first parse the on-screen elements and their positions. ReALM then reconstructs the screen in a purely textual representation, left to right and top to bottom.

Put simply, it uses natural language to summarize the user's screen.

Now, when a user asks a question about something on the screen, the language model processes the text description of the screen instead of using a vision model to process the on-screen image.

The researchers created synthetic datasets of conversational, on-screen, and background entities, and tested ReALM and other models to gauge their effectiveness at resolving references in conversational systems.

The smaller version of ReALM (80M parameters) performs comparably to GPT-4, and the larger version (3B parameters) substantially outperforms GPT-4.

ReALM is a tiny model compared to GPT-4. Its superior reference resolution makes it an ideal candidate for a virtual assistant that can run on-device without compromising performance.

ReALM doesn't do as well with more complex images or nuanced user requests, but it could work well as an in-car or on-device virtual assistant. Imagine if Siri could "see" your iPhone screen and respond to references to on-screen elements.

Apple has been a little slow out of the starting blocks, but recent developments like the MM1 model (https://dailyai.com/2024/03/apple-reveals-mm1-its-first-family-of-multimodal-llms/) and ReALM show that a lot is happening behind closed doors.
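The screen-to-text step described above can be sketched in a few lines. This is a hypothetical illustration, not Apple's actual implementation: the `ScreenElement` type, the row-grouping tolerance, and the join format are all assumptions; the paper only describes reconstructing the screen as text, left to right and top to bottom.

```python
# Illustrative sketch: serialize parsed UI elements (text + position)
# into a plain-text "summary" of the screen, top-to-bottom then
# left-to-right, as described for ReALM. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class ScreenElement:
    text: str   # text content of the UI element (e.g. a label or number)
    x: float    # left edge of the element's bounding box
    y: float    # top edge of the element's bounding box

def serialize_screen(elements: list[ScreenElement], row_tolerance: float = 10.0) -> str:
    """Render on-screen elements as text, top-to-bottom then left-to-right.

    Elements whose vertical positions fall within `row_tolerance` of each
    other are treated as one visual row and joined on a single line.
    """
    ordered = sorted(elements, key=lambda e: (e.y, e.x))
    rows: list[list[ScreenElement]] = []
    for el in ordered:
        if rows and abs(el.y - rows[-1][0].y) <= row_tolerance:
            rows[-1].append(el)    # same visual row
        else:
            rows.append([el])      # start a new row
    # Within each row, order elements left to right before joining.
    return "\n".join(
        " ".join(e.text for e in sorted(row, key=lambda e: e.x))
        for row in rows
    )

screen = [
    ScreenElement("Pharmacy A", 0, 0),
    ScreenElement("(555) 0123", 120, 0),
    ScreenElement("Pharmacy B", 0, 40),
    ScreenElement("(555) 0456", 120, 40),
]
print(serialize_screen(screen))
```

An LLM can then answer "call the bottom one" against this text layout, with no vision model in the loop.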
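The "reference resolution as language modeling" framing can likewise be sketched as prompt construction: the candidate conversational, on-screen, and background entities are written into the prompt as tagged text, and the model is asked which one a request refers to. The tag format and prompt wording below are assumptions for illustration, not the paper's exact encoding.

```python
# Hypothetical sketch: cast reference resolution as a text-only task by
# listing candidate entities (with their type) in the prompt and asking
# the LLM to pick one. The format here is an assumption, not ReALM's.

def build_resolution_prompt(entities: list[tuple[str, str]], request: str) -> str:
    """Format candidate entities and a user request as a single text prompt.

    `entities` is a list of (type, description) pairs, e.g.
    ("onscreen", "phone number at the bottom of the screen").
    """
    lines = ["Candidate entities:"]
    for i, (etype, desc) in enumerate(entities, start=1):
        lines.append(f"[{i}|{etype}] {desc}")
    lines.append(f"User request: {request}")
    lines.append("Answer with the number of the referenced entity.")
    return "\n".join(lines)

prompt = build_resolution_prompt(
    [
        ("conversational", "pharmacy phone number mentioned earlier"),
        ("onscreen", "phone number at the bottom of the screen"),
        ("background", "alarm that is currently ringing"),
    ],
    "Call the bottom one",
)
print(prompt)
```

Because every entity type ends up as plain text, one small language model can resolve references across conversation, screen, and background context in a single pass.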