{"id":8006,"date":"2023-12-05T09:02:39","date_gmt":"2023-12-05T09:02:39","guid":{"rendered":"https:\/\/dailyai.com\/?p=8006"},"modified":"2023-12-05T13:15:00","modified_gmt":"2023-12-05T13:15:00","slug":"meta-releases-ego-exo4d-a-multimodal-perception-dataset","status":"publish","type":"post","link":"https:\/\/dailyai.com\/it\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","title":{"rendered":"Meta releases Ego-Exo4D, a multimodal perception dataset"},"content":{"rendered":"<p><strong>Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta's Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data.<\/strong><\/p>\n<p>You can learn a new skill by reading a book, but it's so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta's FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D.<\/p>\n<p>The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. These include activities like cooking, dancing, playing music, and repairing a bicycle. The data was collected in 13 cities around the world from 839 camera wearers, capturing 1,422 hours of video.<\/p>\n<p>The simultaneously filmed videos are then supplemented with additional data modalities captured by Meta's Project Aria glasses.<\/p>\n<p>Project Aria glasses are wearable computers in the form of eyeglasses. They capture the wearer's video and audio, along with eye-tracking and location information. 
The glasses also sense head poses and 3D point clouds of the environment.<\/p>\n<p>The result is a set of simultaneous videos of an activity in progress, with first-person narrations by the camera wearers describing their actions, plus head and eye tracking of the person performing the task.<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\" style=\"text-align: center;\">Introducing Ego-Exo4D, a foundational dataset and benchmark suite focused on skilled human activities to support research on video learning and multimodal perception. It's the largest public dataset of its kind.<\/p>\n<p>More details \u27a1\ufe0f <a href=\"https:\/\/t.co\/82OR4msehv\">https:\/\/t.co\/82OR4msehv<\/a> <a href=\"https:\/\/t.co\/NTI1kdj1RN\">pic.twitter.com\/NTI1kdj1RN<\/a><\/p>\n<p style=\"text-align: center;\">- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1731739266856935796?ref_src=twsrc%5Etfw\">December 4, 2023<\/a><\/p>\n<\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Meta then added third-person descriptions of the camera wearer's actions. Meta also engaged experts in various fields to add third-person expert commentary critiquing how the person in the video performed the task.<\/p>\n<p>By collecting egocentric and exocentric views, the Ego-Exo4D dataset can show researchers what activities look like from different perspectives. 
This could help them develop computer vision algorithms that can recognize what a person is doing from any perspective.<\/p>\n<h2>Ego-Exo4D opens up new learning opportunities<\/h2>\n<p>One of the main obstacles to achieving artificial intelligence, or to training robots more efficiently, is computers' lack of sensory perception. As humans, we have so many sensory inputs from our environment that we often take them for granted when learning new skills.<\/p>\n<p>Ego-Exo4D will be an extremely useful resource in closing this gap.<\/p>\n<p>Gedas Bertasius, an assistant professor in the Department of Computer Science at the University of North Carolina, said: \"Ego-Exo4D isn't just about collecting data; it's about changing how AI understands, perceives, and learns. With human-centric learning and perspective, AI can become more helpful in our daily lives, assisting us in ways we've only imagined.\"<\/p>\n<figure id=\"attachment_8008\" aria-describedby=\"caption-attachment-8008\" style=\"width: 1792px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8008 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png\" alt=\"\" width=\"1792\" height=\"1072\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot.png 1792w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-300x179.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1024x613.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-768x459.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1536x919.png 1536w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-370x221.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-800x479.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-20x12.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-740x443.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1600x957.png 1600w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-1320x790.png 1320w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/Ego-Exo4D-training-data-snapshot-80x48.png 80w\" sizes=\"auto, (max-width: 1792px) 100vw, 1792px\" \/><figcaption id=\"caption-attachment-8008\" class=\"wp-caption-text\">A snapshot of Ego-Exo4D training data from the bike repair example. Source: Meta<\/figcaption><\/figure>\n<p>Meta hopes that Ego-Exo4D will \"enable robots of the future to gain insight into complex, dexterous manipulations by watching skilled human experts in action\".<\/p>\n<p>This dataset, combined with Project Aria glasses, could soon enable a truly immersive learning experience for humans. Imagine performing a task while the glasses use augmented reality (AR) to overlay a tutorial video or guide you through the task.<\/p>\n<p>You could learn to play the piano and have an overlay show you where your hands need to move, with real-time audio advice as you play. 
Or you could open the hood of your car and be guided through troubleshooting an engine problem.<\/p>\n<p>It will be interesting to see whether Meta's <a href=\"https:\/\/ai.meta.com\/research\/ego-how-to\/\" target=\"_blank\" rel=\"noopener\">Ego How-To learning concept<\/a> and Project Aria glasses are adopted more readily than Google's failed Google Glass product was. There's no word yet on when they will be available to purchase.<\/p>\n<p>Meta will make the Ego-Exo4D dataset <a href=\"https:\/\/ego-exo4d-data.org\/\" target=\"_blank\" rel=\"noopener\">available for download<\/a> before the end of December.<\/p>","protected":false},"excerpt":{"rendered":"<p>Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta's Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it's so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta's FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,<\/p>","protected":false},"author":6,"featured_media":8009,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,105,131],"class_list":["post-8006","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-machine-learning","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/it\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:locale\" content=\"it_IT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/it\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-05T09:02:39+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-05T13:15:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"666\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases Ego-Exo4D, a multimodal perception 
dataset\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"},\"wordCount\":662,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"keywords\":[\"Computer vision\",\"machine learning\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"it-IT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\",\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"datePublished\":\"2023-12-05T09:02:39+00:00\",\"dateModified\":\"2023-12-05T13:15:00+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\"},\"inLanguage\":\"it-IT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/12\\\/augmented-reality-car-repair.jpg\",\"width\":1000,\"height\":666},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases Ego-Exo4D, a multimodal perception dataset\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"it-IT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"it-IT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/it\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/it\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_locale":"it_IT","og_type":"article","og_title":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","og_description":"Training AI models like GPT-4 has relied mostly on datasets consisting of text and images. Meta\u2019s Ego-Exo4D multimodal perception dataset presents data scientists with a rich new set of training data. You can learn a new skill by reading a book, but it\u2019s so much easier when someone shows you how to do something while explaining it to you. This is the goal Meta\u2019s FAIR (Fundamental Artificial Intelligence Research) team has for Ego-Exo4D. The dataset consists of first-person (Ego) and third-person (Exo) perspective videos of people performing different skilled human activities. 
These could be anything from cooking, dancing, playing music,","og_url":"https:\/\/dailyai.com\/it\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","og_site_name":"DailyAI","article_published_time":"2023-12-05T09:02:39+00:00","article_modified_time":"2023-12-05T13:15:00+00:00","og_image":[{"width":1000,"height":666,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases Ego-Exo4D, a multimodal perception dataset","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"},"wordCount":662,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","keywords":["Computer vision","machine 
learning","Meta"],"articleSection":["Industry"],"inLanguage":"it-IT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","url":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/","name":"Meta releases Ego-Exo4D, a multimodal perception dataset | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","datePublished":"2023-12-05T09:02:39+00:00","dateModified":"2023-12-05T13:15:00+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb"},"inLanguage":"it-IT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/"]}]},{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/12\/augmented-reality-car-repair.jpg","width":1000,"height":666},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/meta-releases-ego-exo4d-a-multimodal-perception-dataset\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases Ego-Exo4D, a multimodal perception 
dataset"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/it\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8006","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/comments?post=8006"}],"version-history":[{"count":4,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8006\/revisions"}],"predecessor-version":[{"id":8021,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/posts\/8006\/revisions\/8021"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media\/8009"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/media?parent=8006"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/categories?post=8006"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/it\/wp-json\/wp\/v2\/tags?post=8006"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}