<h1>Meta releases V-JEPA, a predictive vision model</h1>
<p>By Eugene van der Watt, DailyAI | February 20, 2024</p>
<p><strong>Meta has released V-JEPA, a predictive vision model that is the next step toward Meta Chief AI Scientist Yann LeCun's vision of advanced machine intelligence (AMI).</strong></p>
<p>For AI-powered machines to interact with objects in the physical world, they need to be trained, but conventional methods are very inefficient. They require thousands of video examples, with pre-trained image encoders, text, or human annotations, for a machine to learn a single concept, let alone multiple skills.</p>
<p>V-JEPA, which stands for Video Joint Embedding Predictive Architecture, is a vision model designed to learn these concepts more efficiently.</p>
<p>LeCun said that "V-JEPA is a step toward a more grounded understanding of the world so machines can achieve more generalized reasoning and planning."</p>
<p>V-JEPA learns how objects in the physical world interact <a href="https://dailyai.com/da/2024/02/chinese-researchers-unveil-a-robot-toddler-named-tong-tong/">in much the same way that toddlers do</a>. A key part of how we learn is by filling in the blanks to predict missing information.
When a person walks behind a screen and comes out the other side, our brain fills in the blank with an understanding of what happened behind the screen.</p>
<p>V-JEPA is a non-generative model that learns by predicting missing or masked parts of a video. Generative models can recreate a masked piece of video pixel by pixel, but V-JEPA does not.</p>
<p>It compares abstract representations of unlabeled images rather than the pixels themselves. V-JEPA is presented with a video in which a large portion is masked out, leaving just enough of the video to provide some context. The model is then asked to give an abstract description of what is happening in the masked region.</p>
<p>Rather than being trained on one specific skill, Meta says "it used self-supervised training on a range of videos and learned a number of things about how the world works."</p>
<blockquote class="twitter-tweet" data-media-max-width="560">
<p dir="ltr" lang="en">Today we're releasing V-JEPA, a method for teaching machines to understand and model the physical world by watching videos. This work is another important step towards <a href="https://twitter.com/ylecun?ref_src=twsrc%5Etfw">@ylecun</a>'s outlined vision of AI models that use a learned understanding of the world to plan, reason and... <a href="https://t.co/5i6uNeFwJp">pic.twitter.com/5i6uNeFwJp</a></p>
<p>- AI at Meta (@AIatMeta) <a href="https://twitter.com/AIatMeta/status/1758176023588577326?ref_src=twsrc%5Etfw">February 15,
2024</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<h2>Frozen evaluations</h2>
<p>Meta's <a href="https://ai.meta.com/research/publications/revisiting-feature-prediction-for-learning-visual-representations-from-video/" target="_blank" rel="noopener">research paper</a> explains that one of the main things that makes V-JEPA so much more efficient than other vision learning models is how well it performs in "frozen evaluations."</p>
<p>After undergoing self-supervised learning on extensive unlabeled data, the encoder and predictor require no further training when the model learns a new skill. The pre-trained model is frozen.</p>
<p>Previously, fine-tuning a model to learn a new skill meant updating the parameters or weights of the entire model. For V-JEPA to learn a new task, it needs only a small amount of labeled data, with a small set of task-specific parameters optimized on top of the frozen backbone.</p>
<p>V-JEPA's ability to learn new tasks efficiently is promising for the development of embodied AI. It could be the key to enabling machines to be contextually aware of their physical surroundings and to handle planning and sequential decision-making tasks.</p>
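The frozen-evaluation setup described above can be sketched in a few lines: the pretrained backbone is frozen, and only a small task-specific head is trained on labeled examples. This is a minimal illustration with a stand-in encoder and toy dimensions, not Meta's actual V-JEPA architecture.

```python
# Sketch of "frozen evaluation": a small task head is trained on top of a
# frozen pretrained backbone, so only the head's parameters are updated.
# The backbone here is a hypothetical stand-in for a video encoder.
import torch
import torch.nn as nn

# Toy pretrained encoder: maps a flattened clip to a feature vector.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 32 * 32, 256))

# Freeze the backbone: no gradients, no parameter updates.
for p in backbone.parameters():
    p.requires_grad = False

# Small task-specific head (e.g., a 10-way action classifier).
head = nn.Linear(256, 10)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a toy batch of labeled clips.
clips = torch.randn(4, 3, 16, 32, 32)   # (batch, channels, frames, H, W)
labels = torch.randint(0, 10, (4,))

with torch.no_grad():                    # backbone runs without gradients
    feats = backbone(clips)
logits = head(feats)
loss = loss_fn(logits, labels)
loss.backward()                          # only the head receives gradients
optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(trainable, frozen)  # the head is tiny compared to the backbone
```

The point of the sketch is the parameter count: adapting to a new task touches only the head's weights, which is why only a small amount of labeled data is needed.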
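The training objective described earlier in the article, predicting representations of masked video regions rather than their pixels, can be sketched as follows. The `encoder` and `predictor` here are hypothetical stand-ins with toy sizes; in the actual method the target representations come from a separate target encoder rather than the same network.

```python
# Sketch of prediction in embedding space: the model is trained to match
# the *representation* of masked patches, not to reconstruct their pixels.
# All modules and sizes are toy stand-ins, not Meta's actual architecture.
import torch
import torch.nn as nn

dim = 64
encoder = nn.Linear(192, dim)      # embeds one flattened patch
predictor = nn.Linear(dim, dim)    # predicts masked-patch embeddings from context

patches = torch.randn(10, 192)               # 10 patches from one clip
mask = torch.zeros(10, dtype=torch.bool)
mask[3:7] = True                             # patches 3-6 are hidden

# Context: average embedding of the visible patches only.
context = encoder(patches[~mask]).mean(dim=0)

# Predict an embedding for each masked patch from the context.
pred = predictor(context).expand(int(mask.sum()), dim)

# Targets: embeddings of the masked patches (no gradient through targets).
with torch.no_grad():
    target = encoder(patches[mask])

# The loss compares abstract representations, never pixels.
loss = nn.functional.l1_loss(pred, target)
loss.backward()
print(f"loss: {loss.item():.4f}")
```

Because the loss lives in embedding space, the model is never asked to reproduce irrelevant pixel detail, which is one intuition for why this style of training is more sample-efficient than generative reconstruction.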