{"id":10786,"date":"2024-03-18T09:35:31","date_gmt":"2024-03-18T09:35:31","guid":{"rendered":"https:\/\/dailyai.com\/?p=10786"},"modified":"2024-03-28T09:35:17","modified_gmt":"2024-03-28T09:35:17","slug":"apple-reveals-mm1-its-first-family-of-multimodal-llms","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","title":{"rendered":"Apple reveals MM1, its first family of multimodal LLMs"},"content":{"rendered":"<p><strong>Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities.<\/strong><\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2403.09611.pdf\" target=\"_blank\" rel=\"noopener\">The paper<\/a>, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1.<\/p>\n<p>MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios.<\/p>\n<p>What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and to reason about the complex scenes it is presented with.<\/p>\n<p>The MM1 models contain up to 30B parameters, three times more than GPT-4V, the component that gives OpenAI\u2019s GPT-4 its vision capabilities.<\/p>\n<p>Here are some examples of MM1\u2019s VQA capabilities.<\/p>\n<figure id=\"attachment_10788\" aria-describedby=\"caption-attachment-10788\" style=\"width: 1348px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10788 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing.png\" alt=\"\" width=\"1348\" height=\"1084\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing.png 1348w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-300x241.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-1024x823.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-768x618.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-370x298.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-800x643.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-20x16.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-740x595.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Multi-image-processing-60x48.png 60w\" sizes=\"auto, (max-width: 1348px) 100vw, 1348px\" \/><figcaption id=\"caption-attachment-10788\" class=\"wp-caption-text\">Testing MM1\u2019s ability to reason across images and text. Source: arXiv<\/figcaption><\/figure>\n<p>MM1 underwent large-scale multimodal pre-training on a \u201cdataset of 500M interleaved image-text documents, containing 1B images and 500B text tokens\u201d.<\/p>\n<p>The scale and diversity of its pre-training enable MM1 to make impressive in-context predictions and follow custom formatting from a small number of few-shot examples. Here are examples of how MM1 learns the desired output and format from just 3 examples.<\/p>\n<figure id=\"attachment_10789\" aria-describedby=\"caption-attachment-10789\" style=\"width: 1578px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-10789 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning.png\" alt=\"\" width=\"1578\" height=\"894\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning.png 1578w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-300x170.png 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-1024x580.png 1024w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-768x435.png 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-1536x870.png 1536w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-370x210.png 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-800x453.png 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-20x11.png 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-740x419.png 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/MM1-image-reasoning-85x48.png 85w\" sizes=\"auto, (max-width: 1578px) 100vw, 1578px\" \/><figcaption id=\"caption-attachment-10789\" class=\"wp-caption-text\">MM1 can count objects, perform OCR on specific areas of an image, apply common-sense reasoning to objects, and perform basic math functions. Source: arXiv<\/figcaption><\/figure>\n<p>To create AI models that can \u201csee\u201d and reason, you need a vision-language connector that translates images and language into a unified representation the model can use for further processing.<\/p>\n<p>The researchers found that the design of the vision-language connector was a less important factor in MM1\u2019s performance. Interestingly, it was image resolution and the number of image tokens that had the biggest impact.<\/p>\n<p>It\u2019s interesting to see how open Apple has been to sharing its research with the wider AI community. The researchers state that \u201cin this paper, we document the MLLM building process and attempt to formulate design lessons that we hope are useful to the community\u201d.<\/p>\n<p>The published results will likely inform the direction other MLLM builders take regarding architecture and pre-training data choices.<\/p>\n<p>It remains to be seen exactly how the MM1 models will be deployed in Apple\u2019s products. The published examples of MM1\u2019s capabilities suggest that Siri will become a lot smarter once it learns to see.<\/p>","protected":false},"excerpt":{"rendered":"<p>Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities. The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and<\/p>","protected":false},"author":6,"featured_media":10790,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[126,166],"class_list":["post-10786","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-apple","tag-computer-vision"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Apple reveals MM1, its first family of multimodal LLMs | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Apple reveals MM1, its first family of multimodal LLMs | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities. 
The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2024-03-18T09:35:31+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T09:35:17+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Apple reveals MM1, its first family of multimodal LLMs\",\"datePublished\":\"2024-03-18T09:35:31+00:00\",\"dateModified\":\"2024-03-28T09:35:17+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"},\"wordCount\":432,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"keywords\":[\"Apple\",\"Computer vision\"],\"articleSection\":[\"Industry\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\",\"name\":\"Apple reveals MM1, its first family of multimodal LLMs | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"datePublished\":\"2024-03-18T09:35:31+00:00\",\"dateModified\":\"2024-03-28T09:35:17+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2024\\\/03\\\/Apple-MM1.webp\",\"width\":1792,\"height\":1024},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2024\\\/03\\\/apple-reveals-mm1-its-first-family-of-multimodal-llms\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Apple reveals MM1, its first family of multimodal LLMs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","og_locale":"pt_PT","og_type":"article","og_title":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","og_description":"Apple is yet to officially release an AI model, but a new research paper gives an insight into the company\u2019s progress in developing models with state-of-the-art multimodal capabilities. The paper, titled \u201cMM1: Methods, Analysis &amp; Insights from Multimodal LLM Pre-training\u201d, introduces Apple\u2019s family of MLLMs called MM1. MM1 displays impressive abilities in image captioning, visual question answering (VQA), and natural language inference. The researchers explain that careful choices of image-caption pairs enabled them to achieve superior results, especially in few-shot learning scenarios. 
What sets MM1 apart from other MLLMs is its superior ability to follow instructions across multiple images and","og_url":"https:\/\/dailyai.com\/pt\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","og_site_name":"DailyAI","article_published_time":"2024-03-18T09:35:31+00:00","article_modified_time":"2024-03-28T09:35:17+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","type":"image\/webp"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Apple reveals MM1, its first family of multimodal LLMs","datePublished":"2024-03-18T09:35:31+00:00","dateModified":"2024-03-28T09:35:17+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"},"wordCount":432,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","keywords":["Apple","Computer 
vision"],"articleSection":["Industry"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","url":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/","name":"Apple reveals MM1, its first family of multimodal LLMs | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","datePublished":"2024-03-18T09:35:31+00:00","dateModified":"2024-03-28T09:35:17+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2024\/03\/Apple-MM1.webp","width":1792,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2024\/03\/apple-reveals-mm1-its-first-family-of-multimodal-llms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Apple reveals MM1, its first family of multimodal LLMs"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10786","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=10786"}],"version-history":[{"count":3,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10786\/revisions"}],"predecessor-version":[{"id":10792,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/10786\/revisions\/10792"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/10790"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=10786"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=10786"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=10786"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}