{"id":10185,"date":"2024-02-20T07:06:53","date_gmt":"2024-02-20T07:06:53","guid":{"rendered":"https:\/\/dailyai.com\/?p=10185"},"modified":"2024-02-22T09:44:53","modified_gmt":"2024-02-22T09:44:53","slug":"meta-releases-v-jepa-a-predictive-vision-model","status":"publish","type":"post","link":"https:\/\/dailyai.com\/sv\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","title":{"rendered":"Meta releases V-JEPA, a predictive vision model"},"content":{"rendered":"<p><strong>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI).<\/strong><\/p>\n<p>For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations for a machine to learn a single concept, let alone multiple skills.<\/p>\n<p>V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model designed to learn these concepts in a more efficient way.<\/p>\n<p>LeCun said that \"V-JEPA is a step toward a more grounded understanding of the world so machines can achieve more generalized reasoning and planning.\"<\/p>\n<p>V-JEPA learns how objects in the physical world interact in much the same way <a href=\"https:\/\/dailyai.com\/sv\/2024\/02\/chinese-researchers-unveil-a-robot-toddler-named-tong-tong\/\">that toddlers do<\/a>. A key part of how we learn is by filling in the gaps to predict missing information. 
When a person walks behind a screen and out the other side, our brain fills in the blank with an understanding of what happened behind the screen.<\/p>\n<p>V-JEPA is a non-generative model that learns by predicting missing or masked parts of a video. Generative models can recreate a masked part of a video pixel by pixel, but V-JEPA doesn't do that.<\/p>\n<p>It compares abstract representations of unlabeled images rather than the pixels themselves. V-JEPA is presented with a video in which a large portion is masked, with just enough of the video left to provide some context. The model is then asked to give an abstract description of what is happening in the masked space.<\/p>\n<p>Rather than being trained for one specific skill, Meta says \"it used self-supervised training on a range of videos and learned a number of things about how the world works.\"<\/p>\n<blockquote class=\"twitter-tweet\" data-media-max-width=\"560\">\n<p dir=\"ltr\" lang=\"en\">Today we're releasing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step toward <a href=\"https:\/\/twitter.com\/ylecun?ref_src=twsrc%5Etfw\">@ylecun<\/a>'s vision of AI models that use a learned understanding of the world to plan, reason, and... 
<a href=\"https:\/\/t.co\/5i6uNeFwJp\">pic.twitter.com\/5i6uNeFwJp<\/a><\/p>\n<p>- AI at Meta (@AIatMeta) <a href=\"https:\/\/twitter.com\/AIatMeta\/status\/1758176023588577326?ref_src=twsrc%5Etfw\">February 15, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<h2>Frozen evaluations<\/h2>\n<p>Meta's <a href=\"https:\/\/ai.meta.com\/research\/publications\/revisiting-feature-prediction-for-learning-visual-representations-from-video\/\" target=\"_blank\" rel=\"noopener\">research paper<\/a> explains that one of the main things that makes V-JEPA so much more efficient than some other vision learning models is how good it is at \"frozen evaluations\".<\/p>\n<p>After undergoing self-supervised learning on extensive unlabeled data, the encoder and predictor don't require additional training when learning a new skill. The pre-trained model is frozen.<\/p>\n<p>Previously, if you wanted to fine-tune a model to learn a new skill, you had to update the parameters, or weights, of the entire model. For V-JEPA to learn a new task, it needs only a small amount of labeled data, with just a small set of task-specific parameters optimized on top of the frozen backbone.<\/p>\n<p>V-JEPA's ability to learn new tasks efficiently is promising for the development of embodied AI. 
It could be key to enabling machines to be contextually aware of their physical surroundings and to handle planning and sequential decision-making.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model designed to learn these concepts in a more efficient way. LeCun said<\/p>","protected":false},"author":6,"featured_media":10193,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[166,131],"class_list":["post-10185","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-computer-vision","tag-meta"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Meta releases V-JEPA, a predictive vision model | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/sv\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:locale\" content=\"sv_SE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Meta releases V-JEPA, a 
predictive vision model | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. LeCun said\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/sv\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-20T07:06:53+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-02-22T09:44:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"750\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Meta releases V-JEPA, a predictive vision model\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"},\"wordCount\":525,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"keywords\":[\"Computer vision\",\"Meta\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"sv-SE\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\",\"name\":\"Meta releases V-JEPA, a predictive vision model | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"datePublished\":\"2024-02-20T07:06:53+00:00\",\"dateModified\":\"2024-02-22T09:44:53+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\"},\"inLanguage\":\"sv-SE\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/02\\\/multifunction-robot.jpg\",\"width\":1000,\"height\":750},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/02\\\/meta-releases-v-jepa-a-predictive-vision-model\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Meta releases V-JEPA, a predictive vision model\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"sv-SE\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"sv-SE\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119"],"url":"https:\\\/\\\/dailyai.com\\\/sv\\\/author\\\/eugene\\\/"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Meta releases V-JEPA, a predictive vision model | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/sv\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_locale":"sv_SE","og_type":"article","og_title":"Meta releases V-JEPA, a predictive vision model | DailyAI","og_description":"Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun\u2019s vision of advanced machine intelligence (AMI). For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They use thousands of video examples with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills. V-JEPA, which stands for Joint Embedding Predictive Architectures, is a vision model that is designed to learn these concepts in a more efficient way. 
LeCun said","og_url":"https:\/\/dailyai.com\/sv\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","og_site_name":"DailyAI","article_published_time":"2024-02-20T07:06:53+00:00","article_modified_time":"2024-02-22T09:44:53+00:00","og_image":[{"width":1000,"height":750,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Meta releases V-JEPA, a predictive vision model","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"},"wordCount":525,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","keywords":["Computer vision","Meta"],"articleSection":["Industry"],"inLanguage":"sv-SE"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","url":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/","name":"Meta releases V-JEPA, a predictive vision model | 
DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","datePublished":"2024-02-20T07:06:53+00:00","dateModified":"2024-02-22T09:44:53+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb"},"inLanguage":"sv-SE","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/"]}]},{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/02\/multifunction-robot.jpg","width":1000,"height":750},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/02\/meta-releases-v-jepa-a-predictive-vision-model\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Meta releases V-JEPA, a predictive vision model"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI 
News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"sv-SE"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"sv-SE","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/sv\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10185","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/comments?post=10185"}],"version-history":[{"count":5,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10185\/revisions"}],"predecessor-version":[{"id":10262,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/posts\/10185\/revisions\/10262"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media\/10193"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/media?parent=10185"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/categories?post=10185"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/sv\/wp-json\/wp\/v2\/tags?post=10185"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}