{"id":10185,"date":"2024-02-20T07:06:53","date_gmt":"2024-02-20T07:06:53","guid":{"rendered":"https:\/\/dailyai.com\/?p=10185"},"modified":"2024-02-22T09:44:53","modified_gmt":"2024-02-22T09:44:53","slug":"meta-releases-v-jepa-a-predictive-vision-model","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","title":{"rendered":"Meta ver\u00f6ffentlicht V-JEPA, ein pr\u00e4diktives Bildgebungsmodell"},"content":{"rendered":"<p><strong>Meta hat V-JEPA ver\u00f6ffentlicht, ein Modell f\u00fcr pr\u00e4diktives Sehen, das den n\u00e4chsten Schritt in Richtung der Vision des Meta Chief AI Scientist Yann LeCun von fortgeschrittener maschineller Intelligenz (AMI) darstellt.<\/strong><\/p>\n<p>Damit KI-gest\u00fctzte Maschinen mit Objekten in der realen Welt interagieren k\u00f6nnen, m\u00fcssen sie trainiert werden, aber herk\u00f6mmliche Methoden sind sehr ineffizient. Sie verwenden Tausende von Videobeispielen mit vortrainierten Bildkodierern, Text oder menschlichen Kommentaren, damit eine Maschine ein einziges Konzept, geschweige denn mehrere F\u00e4higkeiten erlernen kann.<\/p>\n<p>V-JEPA, die Abk\u00fcrzung f\u00fcr Joint Embedding Predictive Architectures, ist ein Bildverarbeitungsmodell, das diese Konzepte auf effizientere Weise erlernen soll.<\/p>\n<p>LeCun sagte, dass \"V-JEPA ein Schritt in Richtung eines fundierteren Verst\u00e4ndnisses der Welt ist, damit Maschinen ein allgemeineres Denken und Planen erreichen k\u00f6nnen.\"<\/p>\n<p>V-JEPA lernt, wie Objekte in der physischen Welt in \u00e4hnlicher Weise interagieren <a href=\"https:\/\/dailyai.com\/de\/2024\/02\/chinese-researchers-unveil-a-robot-toddler-named-tong-tong\/\">wie bei Kleinkindern<\/a>. Ein wichtiger Teil unseres Lernprozesses besteht darin, die L\u00fccken zu f\u00fcllen, um fehlende Informationen vorherzusagen. 
When a person disappears behind a screen and emerges on the other side, our brain fills the gap with what must have happened behind the screen.</p>
<p>V-JEPA is a non-generative model that learns by predicting missing or masked parts of a video. Generative models can reconstruct a masked portion of a video pixel by pixel, but V-JEPA doesn't do that.</p>
<p>It compares abstract representations of unlabeled images rather than the pixels themselves. V-JEPA is shown a video with a large portion of the frame masked out, leaving just enough to provide some context. The model is then asked to give an abstract description of what is happening in the masked region.</p>
<p>Rather than being trained for one specific skill, Meta says "it used self-supervised training on a range of videos and learned a number of things about how the world works."</p>
<blockquote class="twitter-tweet" data-media-max-width="560">
<p dir="ltr" lang="en">Today we're introducing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step toward <a href="https://twitter.com/ylecun?ref_src=twsrc%5Etfw">@ylecun</a>'s outlined vision of AI models that use a learned understanding of the world to plan, reason and... <a href="https://t.co/5i6uNeFwJp">pic.twitter.com/5i6uNeFwJp</a></p>
<p>- AI at Meta (@AIatMeta) <a href="https://twitter.com/AIatMeta/status/1758176023588577326?ref_src=twsrc%5Etfw">February 15, 2024</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<h2>Frozen evaluations</h2>
<p>Meta's <a href="https://ai.meta.com/research/publications/revisiting-feature-prediction-for-learning-visual-representations-from-video/" target="_blank" rel="noopener">research paper</a> explains that one of the main reasons V-JEPA is so much more efficient than other vision-learning models is that it supports "frozen evaluations."</p>
<p>After self-supervised learning on large amounts of unlabeled data, the encoder and the predictor need no further training when the model learns a new skill. The pre-trained model is frozen.</p>
<p>Previously, fine-tuning a model for a new skill meant updating the parameters, or weights, of the entire model. For V-JEPA to learn a new task, it needs only a small amount of labeled data and a small set of task-specific parameters optimized on top of the frozen backbone.</p>
<p>V-JEPA's ability to learn new tasks efficiently is promising for the development of embodied AI. It could be key to enabling machines to be contextually aware of their physical surroundings and to handle planning and sequential decision-making tasks.</p>
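<p>The "frozen evaluation" idea can be sketched in a few lines of Python. This is an illustrative toy, not Meta's code: the encoder below is a hypothetical stand-in for the frozen pre-trained backbone, and the task is invented for demonstration. Only the small task head's weights are ever updated.</p>

```python
# Toy sketch of frozen evaluation: the pretrained encoder's weights stay
# fixed while only a small task-specific head is trained on labeled data.

def encoder(x):
    # Stand-in for the frozen, pretrained backbone: maps raw input to an
    # abstract feature. It is never updated during head training.
    return [x, x * x]  # hypothetical 2-d feature

def head(features, w):
    # Small task-specific head: the only trainable parameters.
    return sum(f * wi for f, wi in zip(features, w))

def train_head(data, lr=0.01, steps=2000):
    w = [0.0, 0.0]  # head parameters; the encoder stays frozen throughout
    for _ in range(steps):
        for x, y in data:
            feats = encoder(x)        # frozen forward pass
            err = head(feats, w) - y  # prediction error
            # Gradient step on the head only (squared loss).
            w = [wi - lr * 2 * err * f for wi, f in zip(w, feats)]
    return w

# Toy labeled task: y = 3*x^2 - x, learnable from the frozen features [x, x^2].
data = [(x / 10, 3 * (x / 10) ** 2 - x / 10) for x in range(-10, 11)]
w = train_head(data)
print(w)  # head weights approach (-1, 3), recovering y = -x + 3*x^2
```

<p>The point of the sketch is the division of labor: the expensive self-supervised pre-training happens once, and each new task only fits a handful of head parameters on top of frozen features.</p>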
<p>By Eugene van der Watt. Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.</p>