{"id":3315,"date":"2023-07-29T11:38:41","date_gmt":"2023-07-29T11:38:41","guid":{"rendered":"https:\/\/dailyai.com\/?p=3315"},"modified":"2023-07-29T11:38:41","modified_gmt":"2023-07-29T11:38:41","slug":"googles-ai-turns-vision-language-into-robotic-actions","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","title":{"rendered":"Googles KI verwandelt Visionen und Sprache in robotische Aktionen"},"content":{"rendered":"<p><strong>Google hat einige aufregende Testergebnisse seines neuesten Vision-Language-Action (VLA)-Robotermodells namens Robotics Transformer 2 (RT-2) vorgestellt.<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">Der Gro\u00dfteil der j\u00fcngsten KI-Diskussionen hat sich um gro\u00dfe Sprachmodelle wie ChatGPT und Llama gedreht. Die Antworten, die diese Modelle liefern, sind zwar n\u00fctzlich, bleiben aber auf dem Bildschirm Ihres Ger\u00e4ts. Mit RT-2 bringt Google die Macht der KI in die physische Welt. Eine Welt, in der selbstlernende Roboter bald Teil unseres Alltags sein k\u00f6nnten.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Die Geschicklichkeit von Robotern hat sich stark verbessert, aber sie ben\u00f6tigen immer noch sehr spezifische Programmieranweisungen, um selbst einfache Aufgaben zu bew\u00e4ltigen. Wenn sich die Aufgabe auch nur geringf\u00fcgig \u00e4ndert, muss das Programm angepasst werden.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Mit RT-2 hat Google ein Modell entwickelt, das es einem Roboter erm\u00f6glicht, Dinge, die er sieht, in Kombination mit Worten, die er h\u00f6rt, zu klassifizieren und daraus zu lernen. Er reagiert dann auf die Anweisungen, die er erh\u00e4lt, und f\u00fchrt physische Aktionen aus.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Bei LLMs wird ein Satz in Tokens zerlegt, also in mundgerechte Wortbrocken, die es der KI erm\u00f6glichen, den Satz zu verstehen. 
Google took the same principle and broke the movements a robot must make in response to a command down into tokens.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The movements of a robot arm with a gripper, for example, would be split into tokens for changes in X and Y position, or for rotations.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">In the past, robots typically needed first-hand experience before they could perform an action. But with our new vision-language-action model RT-2, they can now learn from both text and images from the web to tackle new and complex tasks. Learn more \u2193 <a href=\"https:\/\/t.co\/4DSRwUHhwg\">https:\/\/t.co\/4DSRwUHhwg<\/a><\/p>\n<p style=\"text-align: center;\">- Google (@Google) <a href=\"https:\/\/twitter.com\/Google\/status\/1684974085837660170?ref_src=twsrc%5Etfw\">28 
July 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2><span style=\"font-weight: 400;\">What can a robot do with RT-2?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Because the robot can understand what it sees and hears, and has a chain of thought, it doesn't need to be programmed for new tasks.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example Google gave in its DeepMind <\/span><a href=\"https:\/\/www.deepmind.com\/blog\/rt-2-new-model-translates-vision-and-language-into-action\"><span style=\"font-weight: 400;\">blog post on RT-2<\/span><\/a><span style=\"font-weight: 400;\"> was \"deciding which object could be used as an improvised hammer (a rock), or which type of drink is best suited for someone who is tired (an energy drink)\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In Google's tests, a robot arm and gripper were put through a series of requests that required language comprehension, vision, and reasoning for the robot to perform the correct action. 
For example, with two bags of chips lying on a table, one of them jutting slightly over the edge, the robot was asked to \"pick up the bag that was about to fall off the table\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That may sound simple, but the contextual awareness needed to identify the right bag is groundbreaking in the world of robotics.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To explain how much more advanced RT-2 is than a regular LLM, another Google blog post explained that \"a robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up.\"<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Even though the technology is still in its infancy, the prospect of household or industrial robots helping with a variety of tasks in changing environments is exciting. The defense applications are also certain to attract attention.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google's robot arm didn't always get it right, and it had a big red kill switch for when it malfunctioned. Let's hope future robots have something similar, in case they ever decide they're unhappy with their boss.\u00a0<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussion has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. 
With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be part of our everyday lives. Robot dexterity has improved considerably, but robots still need very specific programming instructions to accomplish even simple tasks. If<\/p>","protected":false},"author":6,"featured_media":3367,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[147,102,169],"class_list":["post-3315","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-deepmind","tag-google","tag-robotics"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussion has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be a part of our everyday lives. 
There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-29T11:38:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Google\u2019s AI turns vision &#038; language into robotic 
actions\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"wordCount\":558,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"keywords\":[\"DeepMind\",\"Google\",\"Robotics\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"name\":\"Google\u2019s AI turns vision & language into robotic actions | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryim
age\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"width\":1000,\"height\":563,\"caption\":\"Google AI RT-2 Robotics\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google\u2019s AI turns vision &#038; language into robotic actions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaioff
icial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Googles KI verwandelt Vision und Sprache in Roboteraktionen | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_locale":"de_DE","og_type":"article","og_title":"Google\u2019s AI turns vision & language into robotic actions | DailyAI","og_description":"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussions has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. 
A world where self-learning robots could soon be a part of our everyday lives. There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When","og_url":"https:\/\/dailyai.com\/de\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_site_name":"DailyAI","article_published_time":"2023-07-29T11:38:41+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3\u00a0minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Google\u2019s AI turns vision &#038; language into robotic 
actions","datePublished":"2023-07-29T11:38:41+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"wordCount":558,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","keywords":["DeepMind","Google","Robotics"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","url":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","name":"Googles KI verwandelt Vision und Sprache in Roboteraktionen | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","datePublished":"2023-07-29T11:38:41+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","width":1000,"height":563,"caption":"Google AI RT-2 
Robotics"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google\u2019s AI turns vision &#038; language into robotic actions"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Ihre t\u00e4gliche Dosis an AI-Nachrichten","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene 
van der Watt"},"description":"Eugene kommt aus der Elektronikbranche und liebt alles, was mit Technik zu tun hat. Wenn er eine Pause vom Konsum von KI-Nachrichten einlegt, findet man ihn am Snookertisch.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=3315"}],"version-history":[{"count":2,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3315\/revisions"}],"predecessor-version":[{"id":3368,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/3315\/revisions\/3368"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/3367"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=3315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=3315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=3315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}