{"id":3315,"date":"2023-07-29T11:38:41","date_gmt":"2023-07-29T11:38:41","guid":{"rendered":"https:\/\/dailyai.com\/?p=3315"},"modified":"2023-07-29T11:38:41","modified_gmt":"2023-07-29T11:38:41","slug":"googles-ai-turns-vision-language-into-robotic-actions","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","title":{"rendered":"Google\u2019s AI turns vision &amp; language into robotic actions"},"content":{"rendered":"<p><strong>Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2).<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">The bulk of recent AI discussion has centered on large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be part of our everyday lives.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Robot dexterity has improved considerably, but robots still need very specific programming instructions to accomplish even the simplest tasks. When the task changes, even slightly, the program has to be modified.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With RT-2, Google has created a model that lets a robot classify and learn from the things it sees, combined with the words it hears. It then reasons over the instructions it receives and takes physical action in response.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">With LLMs, a sentence is broken down into tokens, small chunks of words that let the AI understand the sentence. Google took that principle and tokenized the movements a robot would need to make in response to a command.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The movements of a robotic arm fitted with a gripper, for example, would be broken down into tokens corresponding to changes in x and y positions or to rotations.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">In the past, robots typically needed direct experience to perform an action. But with our new vision-language-action model, RT-2, they can now learn from text and images from the web to tackle novel, complex tasks. Learn more \u2193 <a href=\"https:\/\/t.co\/4DSRwUHhwg\">https:\/\/t.co\/4DSRwUHhwg<\/a><\/p>\n<p style=\"text-align: center;\">- Google (@Google) <a href=\"https:\/\/twitter.com\/Google\/status\/1684974085837660170?ref_src=twsrc%5Etfw\">July 28, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2><span style=\"font-weight: 400;\">What does RT-2 enable a robot to do?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Being able to understand what it sees and hears, and to reason in a chain of thought, means the robot does not need to be programmed for new tasks.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">One example Google gave in its DeepMind <\/span><a href=\"https:\/\/www.deepmind.com\/blog\/rt-2-new-model-translates-vision-and-language-into-action\"><span style=\"font-weight: 400;\">RT-2 blog post<\/span><\/a><span style=\"font-weight: 400;\"> was \"deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink)\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In its tests, Google put a robotic arm with a gripper through a series of requests that required language understanding, vision, and reasoning in order to take the appropriate action. For example, faced with two bags of chips on a table, one of them slightly over the edge, the robot had to \"pick up the bag about to fall off the table\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">That may sound simple, but the contextual awareness needed to pick up the right bag is groundbreaking in the world of robotics.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">To explain how much more advanced RT-2 is than ordinary LLMs, another Google blog explains that \"a robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up\".<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While it is still early days, the prospect of household or industrial robots helping with a variety of tasks in changing environments is an exciting one. Defense applications are certainly drawing attention too.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Google\u2019s robotic arm did not always get things right, and it had a big red emergency stop button in case of malfunction. Let\u2019s hope future robots are fitted with something similar in case they are ever unhappy with their boss one day.\u00a0<\/span><\/p>","protected":false},"excerpt":{"rendered":"<p>Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussion has centered on large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. A world where self-learning robots could soon be part of our everyday lives. Robot dexterity has improved considerably, but robots still need very specific programming instructions to accomplish simple tasks. When<\/p>","protected":false},"author":6,"featured_media":3367,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[147,102,169],"class_list":["post-3315","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-deepmind","tag-google","tag-robotics"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google\u2019s AI turns vision &amp; language into robotic actions | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussions has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. 
A world where self-learning robots could soon be a part of our everyday lives. There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-29T11:38:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Google\u2019s AI turns vision &#038; language into robotic 
actions\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"},\"wordCount\":558,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"keywords\":[\"DeepMind\",\"Google\",\"Robotics\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\",\"name\":\"Google\u2019s AI turns vision & language into robotic actions | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"datePublished\":\"2023-07-29T11:38:41+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#
primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Google-AI-RT-2-Robotics.jpg\",\"width\":1000,\"height\":563,\"caption\":\"Google AI RT-2 Robotics\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/07\\\/googles-ai-turns-vision-language-into-robotic-actions\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google\u2019s AI turns vision &#038; language into robotic actions\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/compan
y\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Google\u2019s AI turns vision & language into robotic actions | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_locale":"fr_FR","og_type":"article","og_title":"Google\u2019s AI turns vision & language into robotic actions | DailyAI","og_description":"Google showcased some exciting test results of its latest vision-language-action (VLA) robot model called Robotics Transformer 2 (RT-2). The bulk of recent AI discussions has centered around large language models like ChatGPT and Llama. The responses these models provide, while useful, remain on the screen of your device. With RT-2, Google is bringing the power of AI to the physical world. 
A world where self-learning robots could soon be a part of our everyday lives. There has been a big improvement in the dexterity of robots but they still need very specific programming instructions to accomplish even simple tasks. When","og_url":"https:\/\/dailyai.com\/fr\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","og_site_name":"DailyAI","article_published_time":"2023-07-29T11:38:41+00:00","og_image":[{"width":1000,"height":563,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Google\u2019s AI turns vision &#038; language into robotic 
actions","datePublished":"2023-07-29T11:38:41+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"},"wordCount":558,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","keywords":["DeepMind","Google","Robotics"],"articleSection":["Industry"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","url":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","name":"Google\u2019s AI turns vision & language into robotic actions | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","datePublished":"2023-07-29T11:38:41+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Google-AI-RT-2-Robotics.jpg","width":1000,"height":563,"caption":"Google AI RT-2 
Robotics"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google\u2019s AI turns vision &#038; language into robotic actions"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/3315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=3315"}],"version-history":[{"count":2,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/3315\/revisions"}],"predecessor-version":[{"id":3368,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/3315\/revisions\/3368"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/3367"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=3315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=3315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=3315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}