{"id":10786,"date":"2024-03-18T09:35:31","date_gmt":"2024-03-18T09:35:31","guid":{"rendered":"https:\/\/dailyai.com\/?p=10786"},"modified":"2024-03-28T09:35:17","modified_gmt":"2024-03-28T09:35:17","slug":"apple-reveals-mm1-its-first-family-of-multimodal-llms","status":"publish","type":"post","link":"https:\/\/dailyai.com\/de\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","title":{"rendered":"Apple reveals MM1, its first family of multimodal LLMs"},"content":{"rendered":"<p><strong>Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities.<\/strong><\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2403.09611.pdf\" target=\"_blank\" rel=\"noopener\">The paper<\/a>, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1.<\/p>\n<p>MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. 
The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios.<\/p>\n<p>What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and to reason about the complex scenes it is presented with.<\/p>\n<p>The MM1 models contain up to 30B parameters, three times as many as GPT-4V, the component that gives OpenAI\u2019s GPT-4 its vision capabilities.<\/p>\n<p>Here are some examples of MM1\u2019s VQA capabilities.<\/p>\n<figure id=\"attachment_10788\" aria-describedby=\"caption-attachment-10788\" style=\"width: 1348px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10788 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing.png\" alt=\"\" width=\"1348\" height=\"1084\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing.png 1348w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-300x241.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-1024x823.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-768x618.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-370x298.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-800x643.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-20x16.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-740x595.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-60x48.png 60w\" sizes=\"auto, (max-width: 1348px) 100vw, 1348px\" \/><figcaption id=\"caption-attachment-10788\" 
class=\"wp-caption-text\">Testing MM1\u2019s ability to reason across images and text. Source: arXiv<\/figcaption><\/figure>\n<p>MM1 underwent extensive multimodal pre-training on \"a dataset of 500M interleaved image-text documents, containing 1B images and 500B text tokens\".<\/p>\n<p>Thanks to the scale and diversity of its pre-training, MM1 is capable of impressive in-context predictions and can follow custom formatting from a handful of few-shot examples. Here are examples of how MM1 learns the desired output and format from just 3 examples.<\/p>\n<figure id=\"attachment_10789\" aria-describedby=\"caption-attachment-10789\" style=\"width: 1578px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10789 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning.png\" alt=\"\" width=\"1578\" height=\"894\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning.png 1578w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-300x170.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-1024x580.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-768x435.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-1536x870.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-370x210.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-800x453.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-20x11.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-740x419.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-85x48.png 85w\" sizes=\"auto, (max-width: 1578px) 100vw, 
1578px\" \/><figcaption id=\"caption-attachment-10789\" class=\"wp-caption-text\">MM1 can count objects, perform OCR on specific areas of an image, apply common sense to objects, and perform basic math functions. Source: arXiv<\/figcaption><\/figure>\n<p>Building AI models that can \"see\" and reason requires a vision-language connector that translates images and language into a unified representation the model can use for further processing.<\/p>\n<p>The researchers found that the design of the vision-language connector was less of a factor in MM1\u2019s performance. Interestingly, it was the image resolution and the number of image tokens that had the biggest impact.<\/p>\n<p>It\u2019s interesting to see how openly Apple is sharing its research with the broader AI community. The researchers explain: \"In this paper, we document the MLLM building process and attempt to formulate design lessons that we hope are useful for the community.\"<\/p>\n<p>The published findings will likely shape the direction other MLLM developers take in terms of architecture and the selection of pre-training data.<\/p>\n<p>Exactly how the MM1 models will be implemented in Apple\u2019s products remains to be seen. The published examples of MM1\u2019s capabilities suggest that Siri will become a lot smarter once it eventually learns to see.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities. 
The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and<\/p>","protected":false},"author":6,"featured_media":10790,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166],"class_list":["post-10786","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple reveals MM1, its first family of multimodal LLMs | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/de\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple reveals MM1, its first family of multimodal LLMs | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal 
capabilities. The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/de\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-18T09:35:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T09:35:17+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple reveals MM1, its first family of multimodal LLMs\",\"datePublished\":\"2024-03-18T09:35:31+00:00\",\"dateModified\":\"2024-03-28T09:35:17+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"},\"wordCount\":432,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"keywords\":[\"Apple\",\"Computer vision\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"de\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\",\"name\":\"Apple reveals MM1, its first family of multimodal LLMs | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"datePublished\":\"2024-03-18T09:35:31+00:00\",\"dateModified\":\"2024-03-28T09:35:17+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple reveals MM1, its first family of multimodal LLMs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/de\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/de\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","og_locale":"de_DE","og_type":"article","og_title":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","og_description":"Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities. The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. 
What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and","og_url":"https:\/\/dailyai.com\/de\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","og_site_name":"DailyAI","article_published_time":"2024-03-18T09:35:31+00:00","article_modified_time":"2024-03-28T09:35:17+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3\u00a0minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Apple reveals MM1, its first family of multimodal LLMs","datePublished":"2024-03-18T09:35:31+00:00","dateModified":"2024-03-28T09:35:17+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"},"wordCount":432,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","keywords":["Apple","Computer 
vision"],"articleSection":["Industry"],"inLanguage":"de"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","url":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","name":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","datePublished":"2024-03-18T09:35:31+00:00","dateModified":"2024-03-28T09:35:17+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Apple reveals MM1, its first family of multimodal LLMs"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI 
News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/de\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/10786","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/comments?post=10786"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/10786\/revisions"}],"predecessor-version":[{"id":10792,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/posts\/10786\/revisions\/10792"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media\/10790"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/media?parent=10786"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/categories?post=10786"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/de\/wp-json\/wp\/v2\/tags?post=10786"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}