{"id":3315,"date":"2023-07-29T11:38:41","date_gmt":"2023-07-29T11:38:41","guid":{"rendered":"https:\/\/dailyai.com\/?p=3315"},"modified":"2023-07-29T11:38:41","modified_gmt":"2023-07-29T11:38:41","slug":"googles-ai-turns-vision-language-into-robotic-actions","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2023\/07\/googles-ai-turns-vision-language-into-robotic-actions\/","title":{"rendered":"La IA de Google convierte la visi\u00f3n y el lenguaje en acciones rob\u00f3ticas"},"content":{"rendered":"<p><strong>Google ha presentado algunos interesantes resultados de pruebas de su \u00faltimo modelo de robot de visi\u00f3n-lenguaje-acci\u00f3n (VLA), llamado Robotics Transformer 2 (RT-2).<\/strong><\/p>\n<p><span style=\"font-weight: 400;\">La mayor parte de los debates recientes sobre IA se han centrado en grandes modelos ling\u00fc\u00edsticos como ChatGPT y Llama. Las respuestas que proporcionan estos modelos, aunque \u00fatiles, permanecen en la pantalla de tu dispositivo. Con RT-2, Google lleva el poder de la IA al mundo f\u00edsico. Un mundo en el que los robots autodidactas pronto podr\u00edan formar parte de nuestra vida cotidiana.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">La destreza de los robots ha mejorado mucho, pero siguen necesitando instrucciones de programaci\u00f3n muy espec\u00edficas para realizar incluso tareas sencillas. Cuando la tarea cambia, aunque sea ligeramente, el programa tiene que cambiar.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Con RT-2, Google ha creado un modelo que permite a un robot clasificar y aprender de las cosas que ve en combinaci\u00f3n con las palabras que oye. A continuaci\u00f3n, razona sobre las instrucciones que recibe y realiza acciones f\u00edsicas en respuesta.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Con los LLM, una frase se divide en tokens, es decir, trozos peque\u00f1os de palabras que permiten a la IA entender la frase. Google adopt\u00f3 este principio y dividi\u00f3 en tokens los movimientos que tendr\u00eda que hacer un robot en respuesta a una orden.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Los movimientos de un brazo rob\u00f3tico con una pinza, por ejemplo, se dividir\u00edan en fichas de cambios en las posiciones x e y o rotaciones.<\/span><\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">En el pasado, los robots sol\u00edan necesitar experiencia de primera mano para realizar una acci\u00f3n. Pero con nuestro nuevo modelo de visi\u00f3n-lenguaje-acci\u00f3n, RT-2, ahora pueden aprender tanto del texto como de las im\u00e1genes de la web para abordar tareas nuevas y complejas. 
> "In the past, robots typically needed first-hand experience to perform an action. But with our new vision-language-action model, RT-2, they can now learn from both text and images on the web to tackle new and complex tasks. Learn more ↓ https://t.co/4DSRwUHhwg" — Google (@Google), July 28, 2023

## What does RT-2 let a robot do?

Because it can understand what it sees and hears and apply chain-of-thought reasoning, the robot doesn't need to be reprogrammed for new tasks.

One example from the Google DeepMind blog post on RT-2 (https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action) was "deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink)".

In Google's tests, a robotic arm and gripper were given a series of requests that required language comprehension, vision, and reasoning before the right action could be performed. For example, with two bags of chips on a table, one of them partly over the edge, the robot was told to "pick up the bag that is about to fall off the table".

That may sound simple, but the contextual awareness needed to pick up the correct bag is groundbreaking in the world of robotics.

To explain how much further RT-2 goes than regular LLMs, another Google blog post notes that "a robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up".

It is still early days, but the prospect of domestic or industrial robots helping with a variety of tasks in changing environments is an exciting one. Defense applications are also attracting attention.

Google's robotic arm didn't always get it right, and it had a big red emergency stop button in case it malfunctioned. Let's hope future robots come with something similar in case they ever decide they're unhappy with the boss.