{"id":8052,"date":"2023-12-06T17:03:36","date_gmt":"2023-12-06T17:03:36","guid":{"rendered":"https:\/\/dailyai.com\/?p=8052"},"modified":"2024-03-28T00:40:52","modified_gmt":"2024-03-28T00:40:52","slug":"google-launches-its-new-gemini-multi-modal-family-of-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","title":{"rendered":"Google unleashes its groundbreaking Gemini family of multi-modal models"},"content":{"rendered":"<p><strong>Google has launched its Gemini family of multimodal AI models, a dramatic move in an industry still reeling from events at OpenAI.<\/strong><\/p>\n<p>Gemini is a family of multimodal models capable of processing and understanding a combination of text, images, audio, and video.<\/p>\n<p>Sundar Pichai, CEO of Google, and Demis Hassabis, CEO of Google DeepMind, have expressed high expectations for Gemini. Google plans to integrate it across its vast range of products and services, including Search, Maps, and Chrome.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a>'s largest and most capable AI model.<\/p>\n<p>Built to be natively multimodal, it can understand and operate across text, code, audio, image, and video - and it achieves state-of-the-art performance on many tasks. 
\ud83e\uddf5 <a href=\"https:\/\/t.co\/mwHZTDTBuG\">https:\/\/t.co\/mwHZTDTBuG<\/a> <a href=\"https:\/\/t.co\/zfLlCGuzmV\">pic.twitter.com\/zfLlCGuzmV<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732416095355814277?ref_src=twsrc%5Etfw\">December 6, 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Gemini features comprehensive multimodality, processing and interacting with text, images, video, and audio. While we're used to text and image processing, audio and video open new avenues, offering exciting new ways to engage with rich media.<\/p>\n<p>Hassabis notes that \"these models understand the world around them better.\"<\/p>\n<p>Pichai underlined the model's connection to Google products and services, stating: \"One of the powerful things about this moment is that you can work on one underlying technology and improve it, and it immediately flows across our products.\"<\/p>\n<p>Gemini comes in three different versions:<\/p>\n<ul>\n<li><strong>Gemini Nano:<\/strong> A lighter version tailored to Android devices, enabling offline and native functionality.<\/li>\n<li><strong>Gemini Pro:<\/strong> A more advanced version intended to power various Google AI services, including Bard.<\/li>\n<li><strong>Gemini Ultra:<\/strong> The most powerful iteration, designed primarily for data centers and enterprise applications, due for release next year.<\/li>\n<\/ul>\n<p>In terms of performance, Google claims Gemini beats GPT-4 on 30 of 32 benchmarks, excelling particularly at understanding and interacting with video and audio. 
This performance is attributed to Gemini being designed as a multi-sensory model from the outset.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Bard is getting its biggest upgrade yet with a specifically tuned version of Gemini Pro.<\/p>\n<p>Starting today, it will be far more capable at things like:<br \/>\n\ud83d\udd18 Understanding<br \/>\n\ud83d\udd18 Summarizing<br \/>\n\ud83d\udd18 Reasoning<br \/>\n\ud83d\udd18 Coding<br \/>\n\ud83d\udd18 Planning<\/p>\n<p>And more. \u2193 <a href=\"https:\/\/t.co\/TJR12OioxU\">https:\/\/t.co\/TJR12OioxU<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732430045275140415?ref_src=twsrc%5Etfw\">December 6, 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\nGoogle was also keen to highlight Gemini's efficiency.<\/p>\n<p>Trained on Google's own tensor processing units (TPUs), it is faster and more cost-effective than previous models. Alongside Gemini, Google is launching the TPU v5p for data centers, improving the efficiency of running large-scale models.<\/p>\n<h2>Is Gemini a ChatGPT killer?<\/h2>\n<p>Google is clearly excited about Gemini. 
Earlier this year, a <a href=\"https:\/\/dailyai.com\/pt\/2023\/09\/googles-gemini-is-expected-to-outperform-gpt-4\/\">'leak' via SemiAnalysis<\/a> suggested that Gemini could blow away the competition, taking Google from a peripheral member of the generative AI industry to the main character ahead of OpenAI.<\/p>\n<p>Beyond its multimodality, Gemini is reportedly the first model to outperform human experts on the MMLU (massive multitask language understanding) benchmark, which tests world knowledge and problem-solving abilities across 57 subjects such as math, physics, history, law, medicine, and ethics.<\/p>\n<p><iframe loading=\"lazy\" title=\"Math and physics with AI | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/K4pX1VAxaAI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Pichai says Gemini's launch heralds a \"new era\" in AI, highlighting how Gemini will benefit from Google's vast product catalog.<\/p>\n<p>Search integration is particularly interesting, since <a href=\"https:\/\/dailyai.com\/pt\/2023\/09\/google-turns-25-will-ai-herald-another-25-years-of-success\\/\">Google dominates this space<\/a> and has the advantages of the world's most comprehensive search index at its fingertips.<\/p>\n<p>Gemini's launch places Google firmly in the current AI race, and people will go all out to test it against GPT-4.<\/p>\n<h2>Gemini benchmark tests and analysis<\/h2>\n<p>In a <a 
href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#performance\">blog post<\/a> last week, Google published benchmark results showing how Gemini Ultra outperforms GPT-4 on most tests. It also boasts advanced coding capabilities, with notable performance on coding benchmarks such as HumanEval and Natural2Code.<\/p>\n<p><iframe loading=\"lazy\" title=\"Using AI to solve complex problems | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/LvGmVmHv69s?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Here is the benchmark data. Note that these figures use the as-yet-unreleased Gemini Ultra version, so Gemini can't be crowned a ChatGPT killer until next year. 
And you can bet OpenAI will try to neutralize Gemini as quickly as possible.<\/p>\n<h3>Text\/NLP benchmark performance<\/h3>\n<p><strong>General knowledge:<\/strong><\/p>\n<ul>\n<li>MMLU (Massive Multitask Language Understanding):\n<ul>\n<li>Gemini Ultra: 90.0% (chain-of-thought @ 32-shot)<\/li>\n<li>GPT-4: 86.4% (5-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Reasoning:<\/strong><\/p>\n<ul>\n<li>Big-Bench Hard (diverse set of challenging tasks requiring multi-step reasoning):\n<ul>\n<li>Gemini Ultra: 83.6% (3-shot)<\/li>\n<li>GPT-4: 83.1% (3-shot, API)<\/li>\n<\/ul>\n<\/li>\n<li>DROP (reading comprehension, F1 score):\n<ul>\n<li>Gemini Ultra: 82.4 (variable shots)<\/li>\n<li>GPT-4: 80.9 (3-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>HellaSwag (commonsense reasoning for everyday tasks):\n<ul>\n<li>Gemini Ultra: 87.8% (10-shot)<\/li>\n<li>GPT-4: 95.3% (10-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Math:<\/strong><\/p>\n<ul>\n<li>GSM8K (basic arithmetic manipulations, including grade-school math problems):\n<ul>\n<li>Gemini Ultra: 94.4% (majority vote @ 32-shot)<\/li>\n<li>GPT-4: 92.0% (5-shot chain-of-thought, reported)<\/li>\n<\/ul>\n<\/li>\n<li>MATH (challenging math problems, including algebra, geometry, pre-calculus, and others):\n<ul>\n<li>Gemini Ultra: 53.2% (4-shot)<\/li>\n<li>GPT-4: 52.9% (4-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Code:<\/strong><\/p>\n<ul>\n<li>HumanEval (Python code generation):\n<ul>\n<li>Gemini Ultra: 74.4% (0-shot, internal test)<\/li>\n<li>GPT-4: 67.0% (0-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>Natural2Code (Python code generation, new held-out dataset similar to HumanEval, not leaked on the web):\n<ul>\n<li>Gemini Ultra: 74.9% (0-shot)<\/li>\n<li>GPT-4: 73.9% (0-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Multimodal benchmark performance<\/h3>\n<p>The multimodal capabilities of Google's Gemini AI model are also benchmarked against OpenAI's GPT-4V.<\/p>\n<p><strong>Image understanding and processing:<\/strong><\/p>\n<ul>\n<li><strong>MMMU (multi-discipline college-level reasoning problems):<\/strong>\n<ul>\n<li>Gemini Ultra: 59.4% (0-shot pass@1, pixel only)<\/li>\n<li>GPT-4V: 56.8% (0-shot pass@1)<\/li>\n<\/ul>\n<\/li>\n<li><strong>VQAv2 (natural image understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 77.8% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 77.2% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>TextVQA (OCR on natural images):<\/strong>\n<ul>\n<li>Gemini Ultra: 82.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 78.0% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>DocVQA (document understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 90.9% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 88.4% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Infographic VQA (infographic understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 80.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 75.1% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>MathVista (mathematical reasoning in visual contexts):<\/strong>\n<ul>\n<li>Gemini Ultra: 53.0% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 49.9% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Video processing:<\/strong><\/p>\n<ul>\n<li><strong>VATEX (English video captioning, CIDEr score):<\/strong>\n<ul>\n<li>Gemini Ultra: 62.7 (4-shot)<\/li>\n<li>DeepMind Flamingo: 56.0 (4-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Perception Test MCQA (video question answering):<\/strong>\n<ul>\n<li>Gemini Ultra: 54.7% (0-shot)<\/li>\n<li>SeViLA: 46.3% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Audio processing:<\/strong><\/p>\n<ul>\n<li><strong>CoVoST 2 (automatic speech translation, 21 languages, BLEU score):<\/strong>\n<ul>\n<li>Gemini Pro: 40.1<\/li>\n<li>Whisper v2: 29.1<\/li>\n<\/ul>\n<\/li>\n<li><strong>FLEURS (automatic speech recognition, 62 languages, word error rate):<\/strong>\n<ul>\n<li>Gemini Pro: 7.6% (lower is better)<\/li>\n<li>Whisper v3: 17.6%<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Google's ethical commitment<\/h2>\n<p class=\"whitespace-pre-wrap\">In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#scalable-efficient\">blog post<\/a>, Google emphasized its commitment to responsible and ethical AI practices.<\/p>\n<p class=\"whitespace-pre-wrap\">According to Google, Gemini underwent more rigorous testing than any previous Google AI, with evaluations covering factors such as bias, toxicity, cybersecurity threats, and the potential for misuse. Adversarial techniques helped surface problems early on. External experts then stress-tested and \"red-teamed\" the models to identify further blind spots.<\/p>\n<p class=\"whitespace-pre-wrap\">Google says responsibility and safety will remain priorities amid rapid AI progress. 
The company has helped launch industry groups to establish best practices, including MLCommons and the Secure AI Framework (SAIF).<\/p>\n<p class=\"whitespace-pre-wrap\">Google is committed to continuing its collaboration with researchers, governments, and civil society organizations worldwide.<\/p>\n<h2>Gemini Ultra release<\/h2>\n<p class=\"whitespace-pre-wrap\">For now, Google is limiting access to its most powerful model iteration, Gemini Ultra, which will launch early next year.<\/p>\n<p class=\"whitespace-pre-wrap\">Before then, select developers and experts will trial Ultra and provide feedback. The launch will coincide with a new cutting-edge AI model platform, or \"experience\" as Google calls it, named Bard Advanced.<\/p>\n<h2>Gemini for developers<\/h2>\n<p>From December 13, developers and enterprise customers will have access to Gemini Pro via the Gemini API, available in Google AI Studio or Google Cloud Vertex AI.<\/p>\n<p><strong>Google AI Studio:<\/strong> Google AI Studio is an easy-to-use, web-based tool designed to help developers prototype and launch applications using an API key. This free resource is ideal for those in the early stages of app development.<\/p>\n<p><strong>Vertex AI:<\/strong> A more comprehensive AI platform, Vertex AI offers fully managed services. It integrates seamlessly with Google Cloud while also offering enterprise security, privacy, and compliance with data governance regulations.<\/p>\n<p>Beyond these platforms, Android developers will be able to access Gemini Nano for on-device tasks. 
It will be available for integration via AICore, a new system capability slated to debut in Android 14, starting with Pixel 8 Pro devices.<\/p>\n<h2>For now, Google is the biggest<\/h2>\n<p>OpenAI and Google differ in one important respect: Google develops a host of other tools and products in-house, including ones used by billions of people every day.<\/p>\n<p>We are, of course, talking about Android, Chrome, Gmail, Google Workspace, and Google Search.<\/p>\n<p>OpenAI, through its alliance with Microsoft, has similar opportunities via Copilot, but that has yet to take off.<\/p>\n<p>And, to be fair, Google is probably the dominant company across all of these product categories.<\/p>\n<p>Google has kept itself in the AI race, but you can be sure this will only fuel OpenAI's drive toward GPT-5 and AGI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google has launched its Gemini family of multimodal AI models, a dramatic move in an industry still reeling from events at OpenAI. Gemini is a family of multimodal models capable of processing and understanding a mix of text, images, audio, and video. Sundar Pichai, CEO of Google, and Demis Hassabis, CEO of Google DeepMind, have expressed high expectations for Gemini. Google plans to integrate it across its vast range of products and services, including Search, Maps, and Chrome. We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: @Google's largest and most capable AI model. 
Built to be natively multimodal, it can understand and operate across text, code, and audio,<\/p>","protected":false},"author":2,"featured_media":2402,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[125,147,383,102],"class_list":["post-8052","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-bard","tag-deepmind","tag-gemini","tag-google"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI<\/title>\n<meta name=\"description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T17:03:36+00:00\" \/>\n<meta property=\"article:modified_time\" 
content=\"2024-03-28T00:40:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Google unleashes its groundbreaking Gemini family of multi-modal 
models\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"wordCount\":1356,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"keywords\":[\"Bard\",\"DeepMind\",\"Gemini\",\"Google\"],\"articleSection\":{\"1\":\"Industry\"},\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"description\":\"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at 
OpenAI.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"width\":1000,\"height\":667,\"caption\":\"Google Med-PaLM 2\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, the company unleashed it upon an AI industry still reeling from events at OpenAI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_locale":"pt_PT","og_type":"article","og_title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","og_description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","og_url":"https:\/\/dailyai.com\/pt\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T17:03:36+00:00","article_modified_time":"2024-03-28T00:40:52+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"6 
minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Google unleashes its groundbreaking Gemini family of multi-modal models","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"wordCount":1356,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","keywords":["Bard","DeepMind","Gemini","Google"],"articleSection":{"1":"Industry"},"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","url":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","name":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","description":"Just a few days after 
reports suggested Google's secretive Gemini project was delayed, the company unleashed it upon an AI industry still reeling from events at OpenAI.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","width":1000,"height":667,"caption":"Google Med-PaLM 2"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google unleashes its groundbreaking Gemini family of multi-modal models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Your Daily Dose of AI 
News","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/pt\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8052","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=8052"}],"version-history":[{"count":16,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8052\/revisions"}],"predecessor-version":[{"id":8084,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/8052\/revisions\/8084"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/2402"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=8052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=8052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=8052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}