{"id":10185,"date":"2024-02-20T07:06:53","date_gmt":"2024-02-20T07:06:53","guid":{"rendered":"https:\/\/dailyai.com\/?p=10185"},"modified":"2024-02-22T09:44:53","modified_gmt":"2024-02-22T09:44:53","slug":"meta-releases-v-jepa-a-predictive-vision-model","status":"publish","type":"post","link":"https:\/\/dailyai.com\/es\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","title":{"rendered":"Meta releases V-JEPA, a predictive vision model"},"content":{"rendered":"<p><strong>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun's vision of advanced machine intelligence (AMI).<\/strong><\/p>\n<p>For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations for a machine to learn a single concept, let alone multiple skills.<\/p>\n<p>V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model designed to learn these concepts more efficiently.<\/p>\n<p>LeCun said that \"V-JEPA is a step toward a more grounded understanding of the world so machines can achieve more generalized reasoning and planning.\"<\/p>\n<p>V-JEPA learns how objects in the physical world interact <a href=\"https:\/\/dailyai.com\/es\/2024\/02\/chinese-researchers-unveil-a-robot-toddler-named-tong-tong\/\">in much the same way that toddlers do<\/a>. A fundamental part of how we learn is filling in the blanks to predict missing information. 
When a person walks behind a screen and comes out the other side, our brain fills in the blank with an understanding of what happened behind the screen.<\/p>\n<p>V-JEPA is a non-generative model that learns by predicting missing or masked parts of a video. Generative models can recreate a masked video segment pixel by pixel, but V-JEPA doesn't.<\/p>\n<p>It compares abstract representations of unlabeled images rather than the pixels themselves. V-JEPA is shown a video with a large portion masked out, leaving just enough of the video to provide some context. The model is then asked to give an abstract description of what is happening in the masked space.<\/p>\n<p>Rather than being trained on a specific skill, Meta says it \"used self-supervised training on a range of videos and learned a number of things about how the world works\".<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\">Today we're releasing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step toward <a href=\"https:\/\/twitter.com\/ylecun?ref_src=twsrc%5Etfw\">@ylecun<\/a>'s vision of AI models that use a learned understanding of the world to plan, reason, and... 
<a href=\"https:\/\/t.co\/5i6uNeFwJp\">pic.twitter.com\/5i6uNeFwJp<\/a><\/p>\n<p>- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1758176023588577326?ref_src=twsrc%5Etfw\">February 15, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Frozen evaluations<\/h2>\n<p>Meta's <a href=\"https:\/\/ai.meta.com\/research\/publications\/revisiting-feature-prediction-for-learning-visual-representations-from-video\/\" target=\"_blank\" rel=\"noopener\">research paper<\/a> explains that one of the key reasons V-JEPA is so much more efficient than other visual learning models is its ability to perform \"frozen evaluations\".<\/p>\n<p>After undergoing self-supervised learning on a large amount of unlabeled data, the encoder and predictor need no further training when learning a new skill. The pre-trained model is frozen.<\/p>\n<p>Previously, if you wanted to fine-tune a model to learn a new skill, you had to update the parameters, or weights, of the entire model. For V-JEPA to learn a new task, it only needs a small amount of labeled data, with a small set of task-specific parameters optimized on top of the frozen backbone.<\/p>\n<p>V-JEPA's ability to learn new tasks efficiently holds promise for the development of embodied AI. It could be key to making machines aware of their physical surroundings and capable of sequential planning and decision-making tasks.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun's vision of advanced machine intelligence (AMI). 
For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. LeCun said<\/p>","protected":false},"author":6,"featured_media":10193,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,131],"class_list":["post-10185","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases V-JEPA, a predictive vision model | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/es\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases V-JEPA, a predictive vision model | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. 
They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. LeCun said\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/es\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-20T07:06:53+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-02-22T09:44:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"750\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases V-JEPA, a predictive vision model\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"wordCount\":525,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"keywords\":[\"Computer vision\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"es\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"name\":\"Meta releases V-JEPA, a predictive vision model | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"width\":1000,\"height\":750},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases V-JEPA, a predictive vision model\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/es\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta releases V-JEPA, a predictive vision model | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/es\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_locale":"es_ES","og_type":"article","og_title":"Meta releases V-JEPA, a predictive vision model | DailyAI","og_description":"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. 
LeCun said","og_url":"https:\/\/dailyai.com\/es\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_site_name":"DailyAI","article_published_time":"2024-02-20T07:06:53+00:00","article_modified_time":"2024-02-22T09:44:53+00:00","og_image":[{"width":1000,"height":750,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases V-JEPA, a predictive vision model","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"wordCount":525,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","keywords":["Computer vision","Meta"],"articleSection":["Industry"],"inLanguage":"es"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","url":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","name":"Meta releases V-JEPA, a predictive vision model | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","width":1000,"height":750},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases V-JEPA, a predictive vision model"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI 
News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/es\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/10185","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/comments?post=10185"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/10185\/revisions"}],"predecessor-version":[{"id":10262,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/posts\/10185\/revisions\/10262"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media\/10193"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/media?parent=10185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/categories?post=10185"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/es\/wp-json\/wp\/v2\/tags?post=10185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}