{"id":10185,"date":"2024-02-20T07:06:53","date_gmt":"2024-02-20T07:06:53","guid":{"rendered":"https:\/\/dailyai.com\/?p=10185"},"modified":"2024-02-22T09:44:53","modified_gmt":"2024-02-22T09:44:53","slug":"meta-releases-v-jepa-a-predictive-vision-model","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","title":{"rendered":"Meta releases V-JEPA, a predictive vision model"},"content":{"rendered":"<p><strong>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun's vision of advanced machine intelligence (AMI).<\/strong><\/p>\n<p>For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations for a machine to learn a single concept, let alone multiple skills.<\/p>\n<p>V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model designed to learn these concepts more efficiently.<\/p>\n<p>LeCun said that \"V-JEPA is a step toward a more grounded understanding of the world so that machines can achieve more generalized reasoning and planning.\"<\/p>\n<p>V-JEPA learns how objects in the physical world interact in much the same way <a href=\"https:\/\/dailyai.com\/it\/2024\/02\/chinese-researchers-unveil-a-robot-toddler-named-tong-tong\/\">that toddlers do<\/a>. A key part of how we learn is filling in the blanks to predict missing information. 
When a person walks behind a screen and comes out the other side, our brains fill in the blanks with an understanding of what happened behind the screen.<\/p>\n<p>V-JEPA is a non-generative model that learns by predicting the missing or masked parts of a video. Generative models can recreate a masked piece of video pixel by pixel, but V-JEPA doesn't do that.<\/p>\n<p>It compares abstract representations of unlabeled images rather than the pixels themselves. V-JEPA is shown a video with a large portion masked out, with just enough of the video left to provide context. The model is then asked to give an abstract description of what is happening in the masked space.<\/p>\n<p>Rather than being trained for one specific skill, Meta says it \"used self-supervised training on a range of videos and learned a number of things about how the world works.\"<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\">Today we're releasing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step towards <a href=\"https:\/\/twitter.com\/ylecun?ref_src=twsrc%5Etfw\">@ylecun<\/a>'s outlined vision of AI models that use a learned understanding of the world to plan, reason, and... 
<a href=\"https:\/\/t.co\/5i6uNeFwJp\">pic.twitter.com\/5i6uNeFwJp<\/a><\/p>\n<p>- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1758176023588577326?ref_src=twsrc%5Etfw\">February 15, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Frozen evaluations<\/h2>\n<p>Meta's <a href=\"https:\/\/ai.meta.com\/research\/publications\/revisiting-feature-prediction-for-learning-visual-representations-from-video\/\" target=\"_blank\" rel=\"noopener\">research paper<\/a> explains that one of the key aspects that makes V-JEPA far more efficient than other vision learning models is its capacity for \"frozen evaluation.\"<\/p>\n<p>After undergoing self-supervised learning on large amounts of unlabeled data, the encoder and predictor need no further training when learning a new skill. The pre-trained model is frozen.<\/p>\n<p>Previously, if you wanted to fine-tune a model to learn a new skill, you had to update the parameters, or weights, of the entire model. To learn a new task, V-JEPA needs only a small amount of labeled data and a small set of task-specific parameters optimized on top of the frozen backbone.<\/p>\n<p>V-JEPA's ability to learn new tasks efficiently is promising for the development of embodied AI. It could be the key to enabling machines to be aware of the physical context around them and to handle sequential planning and decision-making tasks.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun's vision of advanced machine intelligence (AMI). 
For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model designed to learn these concepts more efficiently. LeCun said<\/p>","protected":false},"author":6,"featured_media":10193,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,131],"class_list":["post-10185","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases V-JEPA, a predictive vision model | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases V-JEPA, a predictive vision model | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. 
They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. LeCun said\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-20T07:06:53+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-02-22T09:44:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"750\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Scritto da\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo di lettura stimato\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minuti\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases V-JEPA, a predictive vision model\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"wordCount\":525,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"keywords\":[\"Computer vision\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"name\":\"Meta releases V-JEPA, a predictive vision model | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"width\":1000,\"height\":750},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases V-JEPA, a predictive vision model\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta releases V-JEPA, a predictive vision model | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/it\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_locale":"it_IT","og_type":"article","og_title":"Meta releases V-JEPA, a predictive vision model | DailyAI","og_description":"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. 
LeCun said","og_url":"https:\/\/dailyai.com\/it\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_site_name":"DailyAI","article_published_time":"2024-02-20T07:06:53+00:00","article_modified_time":"2024-02-22T09:44:53+00:00","og_image":[{"width":1000,"height":750,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Scritto da":"Eugene van der Watt","Tempo di lettura stimato":"3 minuti"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases V-JEPA, a predictive vision model","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"wordCount":525,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","keywords":["Computer vision","Meta"],"articleSection":["Industry"],"inLanguage":"it-IT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","url":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","name":"Meta rilascia V-JEPA, un modello di visione predittivo | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb"},"inLanguage":"it-IT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"]}]},{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","width":1000,"height":750},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases V-JEPA, a predictive vision model"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"La vostra dose quotidiana di notizie sull'intelligenza 
artificiale","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene proviene da un background di ingegneria elettronica e ama tutto ci\u00f2 che \u00e8 tecnologico. 
Quando si prende una pausa dal consumo di notizie sull'intelligenza artificiale, lo si pu\u00f2 trovare al tavolo da biliardo.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/it\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10185","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=10185"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10185\/revisions"}],"predecessor-version":[{"id":10262,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/10185\/revisions\/10262"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/10193"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=10185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=10185"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=10185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}