<h1>Apple reveals MM1, its first family of multimodal LLMs</h1>
<p><em>DailyAI | 18 March 2024</em></p>
<p><strong>Apple has yet to officially release an AI model, but a new research paper gives an insight into the company's progress in developing models with advanced multimodal capabilities.</strong></p>
<p><a href="https://arxiv.org/pdf/2403.09611.pdf" target="_blank" rel="noopener">The paper</a>, titled "MM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training", introduces Apple's family of multimodal LLMs (MLLMs), called MM1.</p>
<p>MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios.</p>
<p>What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and to reason about the complex scenes it is presented with.</p>
<p>The MM1 models contain up to 30B parameters, three times as many as GPT-4V, the component that gives OpenAI's GPT-4 its vision capabilities.</p>
<p>Here are some examples of MM1's VQA capabilities.</p>
<figure id="attachment_10788" class="wp-caption aligncenter"><img src="https://dailyai.com/wp-content/uploads/2024/03/Multi-image-processing.png" alt="Examples of MM1 answering questions across multiple images" width="1348" height="1084" /><figcaption id="caption-attachment-10788" class="wp-caption-text">Testing MM1's ability to reason over images and text. Source: arXiv</figcaption></figure>
<p>MM1 underwent large-scale multimodal pre-training on "a dataset of 500M interleaved image-text documents, containing 1B images and 500B text tokens".</p>
<p>The scale and diversity of its pre-training allow MM1 to make impressive in-context predictions and to follow custom formatting from a small number of few-shot examples. Here are examples of how MM1 learns the desired output and format from just 3 examples.</p>
<figure id="attachment_10789" class="wp-caption aligncenter"><img src="https://dailyai.com/wp-content/uploads/2024/03/MM1-image-reasoning.png" alt="Examples of MM1 learning output formats from few-shot examples" width="1578" height="894" /><figcaption id="caption-attachment-10789" class="wp-caption-text">MM1 can count objects, perform OCR on specific areas of an image, apply common-sense reasoning to objects, and perform basic mathematical functions. Source: arXiv</figcaption></figure>
<p>Creating AI models that can "see" and reason requires a vision-language connector, which translates images and language into a unified representation that the model can use for further processing.</p>
<p>The researchers found that the design of the vision-language connector mattered relatively little for MM1's performance. Interestingly, it was the image resolution and the number of image tokens that had the biggest impact.</p>
<p>It's interesting to see how open Apple has been in sharing its research with the wider AI community. The researchers say that "in this paper, we document the MLLM building process and attempt to formulate design lessons, that we hope are of use to the community."</p>
<p>The published results will likely influence the direction other MLLM developers take on architecture and pre-training data choices.</p>
<p>Exactly how the MM1 models will be implemented in Apple's products remains to be seen. The published examples of MM1's capabilities hint that Siri will get a lot smarter when she eventually learns to see.</p>
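The few-shot behaviour described above rests on interleaving images with text in a single prompt, so the model can infer the desired answer format from a handful of examples. As a rough illustration (not code from the paper; the placeholder syntax and function names are invented), a 3-shot multimodal prompt might be assembled like this:

```python
# Illustrative sketch of few-shot multimodal prompting: interleave
# (image, answer) example pairs, then append the query image so the
# model completes the final "Answer:" in the demonstrated format.
# The <image:...> placeholder convention here is hypothetical.

def build_few_shot_prompt(examples: list[tuple[str, str]], query_image: str) -> str:
    """examples: (image_ref, target_answer) pairs; returns an interleaved prompt."""
    parts = []
    for image_ref, answer in examples:
        parts.append(f"<image:{image_ref}> Answer: {answer}")
    # The query image gets the same scaffold, with the answer left blank.
    parts.append(f"<image:{query_image}> Answer:")
    return "\n".join(parts)

shots = [("img1.png", "2 dogs"), ("img2.png", "1 cat"), ("img3.png", "4 birds")]
prompt = build_few_shot_prompt(shots, "query.png")
```

Because every example pairs an image with an answer in the same shape, three shots are enough to pin down both the task and the output format.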
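The finding that image resolution and image-token count dominate connector design can be made concrete with a minimal sketch. This is not Apple's implementation; it only assumes the standard ViT setup in which an image is cut into fixed-size patches, and a connector reduces those patch features to a budget of image tokens (all names and the pooling scheme are illustrative):

```python
# Minimal sketch of a vision-language connector's two key knobs:
# image resolution (how many patches the vision encoder produces) and
# the number of image tokens handed to the LLM. Per the MM1 paper,
# these matter more than the connector's exact architecture.

def num_patches(resolution: int, patch_size: int = 14) -> int:
    """A ViT splits a square image into (resolution // patch_size)^2 patches."""
    side = resolution // patch_size
    return side * side

def pool_to_tokens(patch_feats: list[list[float]], n_tokens: int) -> list[list[float]]:
    """Average-pool a patch-feature sequence down to n_tokens image tokens."""
    group = max(1, len(patch_feats) // n_tokens)
    tokens = []
    for i in range(0, group * n_tokens, group):
        chunk = patch_feats[i:i + group]
        dim = len(chunk[0])
        tokens.append([sum(p[d] for p in chunk) / len(chunk) for d in range(dim)])
    return tokens

# Higher resolution -> more patches -> more visual detail to pool from;
# a larger token budget preserves more of that detail for the LLM.
feats_336 = [[1.0, 0.0]] * num_patches(336)   # 336px image, 24x24 = 576 patches
image_tokens = pool_to_tokens(feats_336, n_tokens=144)
```

Raising the resolution or the token budget both increase how much visual information survives into the LLM, which is consistent with the ablation result the article cites.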
<p><em>Written by Eugene van der Watt</em></p>