Google's AI turns vision & language into robotic actions

By Eugene van der Watt | DailyAI | July 29, 2023

Google showcased some exciting test results of its latest vision-language-action (VLA) robot model, Robotics Transformer 2 (RT-2).

The bulk of recent AI discussion has centered on large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI into the physical world, one where self-learning robots could soon be part of our everyday lives.

There have been big improvements in robot dexterity, but robots still need very specific programming instructions to accomplish even simple tasks. When the task changes, even slightly, the program has to change too.

With RT-2, Google has created a model that lets a robot classify and learn from the things it sees, combined with the words it hears. It then reasons over the instructions it receives and takes physical actions in response.

With LLMs, a sentence is split into tokens, essentially small chunks of words that allow the AI to understand the sentence. Google took that principle and tokenized the movements a robot would need to make in response to a command.

The movements of a robot arm with a gripper, for example, would be broken into tokens for changes in x and y position or for rotations.

"Previously, robots have mostly needed first-hand experience to be able to perform an action. But with our new vision-language-action model, RT-2, they can now learn from both text and images from the web to tackle new and complex tasks. Read more ↓ https://t.co/4DSRwUHhwg" - Google (@Google), July 28, 2023

What can a robot do with RT-2?

Being able to understand what it sees and hears, and having a chain of thought, means the robot doesn't need to be programmed for new tasks.

One example Google gave in its DeepMind blog post on RT-2 (https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action) was "deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink)."

In Google's tests, a robot arm and gripper were put through a series of requests that required language understanding, vision, and reasoning to take the appropriate action. For example, when presented with two bags of chips on a table, one of them lying slightly over the edge, the robot was prompted to "pick up the bag about to fall off the table."

That may sound simple, but the contextual awareness required to pick up the right bag is groundbreaking in the world of robotics.

To explain how much more advanced RT-2 is than ordinary LLMs, another Google blog explained that "a robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and, most importantly, know how to pick it up."

While this is only the beginning, it's exciting to imagine how household or industrial robots could help with a wide variety of tasks in changing environments. Defense applications will almost certainly attract attention too.

Google's robot arm didn't always get it right, and it had a big red emergency stop button for when it misbehaved. Let's hope future robots come with something similar, in case they decide they're unhappy with the boss one day.
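The action-tokenization idea described above can be sketched in a few lines: continuous arm motions (changes in x, y, rotation) are discretized into integer token ids, the same kind of vocabulary an LLM predicts over. This is a minimal illustration, not Google's actual scheme; the 256-bin vocabulary and the value ranges are assumptions made for the example.

```python
# Sketch of tokenizing continuous robot actions into a discrete vocabulary.
# Bin count and value ranges are illustrative assumptions, not RT-2's scheme.

NUM_BINS = 256  # tokens available per action dimension


def to_token(value: float, low: float, high: float) -> int:
    """Map a continuous value in [low, high] to a discrete token id."""
    clipped = min(max(value, low), high)
    return round((clipped - low) / (high - low) * (NUM_BINS - 1))


def from_token(token: int, low: float, high: float) -> float:
    """Invert the mapping: token id back to an approximate value."""
    return low + token / (NUM_BINS - 1) * (high - low)


# Tokenize one gripper action: +3 cm in x, -1 cm in y, rotate 15 degrees.
action = {"dx": 0.03, "dy": -0.01, "rot_deg": 15.0}
ranges = {"dx": (-0.1, 0.1), "dy": (-0.1, 0.1), "rot_deg": (-180.0, 180.0)}

tokens = {k: to_token(v, *ranges[k]) for k, v in action.items()}
decoded = {k: from_token(t, *ranges[k]) for k, t in tokens.items()}
print(tokens)   # integer ids, e.g. {'dx': 166, 'dy': 115, 'rot_deg': 138}
print(decoded)  # values close to the original action, within one bin's width
```

Once actions live in a token vocabulary like this, a transformer can emit them the same way it emits word tokens, which is what lets a single model span vision, language, and action.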