{"id":8052,"date":"2023-12-06T17:03:36","date_gmt":"2023-12-06T17:03:36","guid":{"rendered":"https:\/\/dailyai.com\/?p=8052"},"modified":"2024-03-28T00:40:52","modified_gmt":"2024-03-28T00:40:52","slug":"google-launches-its-new-gemini-multi-modal-family-of-models","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","title":{"rendered":"Google lance sa famille r\u00e9volutionnaire de mod\u00e8les multimodaux Gemini"},"content":{"rendered":"<p><strong>Google a lanc\u00e9 sa famille Gemini de mod\u00e8les d'IA multimodaux, une initiative spectaculaire dans un secteur encore sous le choc des \u00e9v\u00e9nements de l'OpenAI.<\/strong><\/p>\n<p>Gemini est une famille de mod\u00e8les multimodaux capables de traiter et de comprendre un m\u00e9lange de textes, d'images, de sons et de vid\u00e9os.<\/p>\n<p>Sundar Pichai, PDG de Google, et Demis Hassabis, PDG de Google DeepMind, attendent beaucoup de Gemini. Google pr\u00e9voit de l'int\u00e9grer dans l'ensemble de ses produits et services, notamment la recherche, Maps et Chrome.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Nous avons le plaisir d'annoncer le lancement de \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6 : <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a>Le mod\u00e8le d'IA le plus grand et le plus performant de l'Union europ\u00e9enne.<\/p>\n<p>Con\u00e7ue pour \u00eatre nativement multimodale, elle peut comprendre et fonctionner avec du texte, du code, de l'audio, de l'image et de la vid\u00e9o - et atteint des performances de pointe dans de nombreuses t\u00e2ches. 
\ud83e\uddf5 <a href=\"https:\/\/t.co\/mwHZTDTBuG\">https:\/\/t.co\/mwHZTDTBuG<\/a> <a href=\"https:\/\/t.co\/zfLlCGuzmV\">pic.twitter.com\/zfLlCGuzmV<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732416095355814277?ref_src=twsrc%5Etfw\">6 d\u00e9cembre 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Gemini se targue d'une multimodalit\u00e9 compl\u00e8te, traitant et interagissant avec du texte, des images, de la vid\u00e9o et de l'audio. Alors que nous nous sommes habitu\u00e9s au traitement du texte et de l'image, l'audio et la vid\u00e9o ouvrent de nouvelles perspectives, offrant de nouvelles fa\u00e7ons passionnantes de g\u00e9rer les m\u00e9dias riches.<\/p>\n<p>Hassabis note que \"ces mod\u00e8les comprennent mieux le monde qui les entoure\".<\/p>\n<p>M. Pichai a soulign\u00e9 l'interconnexion du mod\u00e8le avec les produits et services de Google, en d\u00e9clarant : \"L'une des grandes forces de ce moment est qu'il est possible de travailler sur une technologie sous-jacente et de l'am\u00e9liorer, et que cela se r\u00e9percute imm\u00e9diatement sur nos produits\".<\/p>\n<p>Les G\u00e9meaux prendront trois formes diff\u00e9rentes, \u00e0 savoir<\/p>\n<ul>\n<li><strong>Gemini Nano :<\/strong> Une version all\u00e9g\u00e9e adapt\u00e9e aux appareils Android, permettant des fonctionnalit\u00e9s hors ligne et natives.<\/li>\n<li><strong>Gemini Pro :<\/strong> Une version plus avanc\u00e9e, destin\u00e9e \u00e0 alimenter de nombreux services d'IA de Google, dont Bard.<\/li>\n<li><strong>Gemini Ultra :<\/strong> L'it\u00e9ration la plus puissante, con\u00e7ue principalement pour les centres de donn\u00e9es et les applications d'entreprise, devrait sortir l'ann\u00e9e prochaine.<\/li>\n<\/ul>\n<p>En termes de performances, Google affirme que Gemini surpasse GPT-4 dans 30 des 32 points de r\u00e9f\u00e9rence, 
excellant particuli\u00e8rement dans la compr\u00e9hension et l'interaction avec la vid\u00e9o et l'audio. Cette performance est attribu\u00e9e \u00e0 la conception de Gemini en tant que mod\u00e8le multisensoriel d\u00e8s le d\u00e9part.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Bard b\u00e9n\u00e9ficie de sa plus grande mise \u00e0 jour avec une version sp\u00e9cialement adapt\u00e9e de Gemini Pro.<\/p>\n<p>\u00c0 partir d'aujourd'hui, il sera beaucoup plus performant dans des domaines tels que :<br \/>\n\ud83d\udd18 Compr\u00e9hension<br \/>\n\ud83d\udd18 R\u00e9sumer<br \/>\n\ud83d\udd18 Raisonnement<br \/>\n\ud83d\udd18 Codage<br \/>\n\ud83d\udd18 Planification<\/p>\n<p>Et plus encore. \u2193 <a href=\"https:\/\/t.co\/TJR12OioxU\">https:\/\/t.co\/TJR12OioxU<\/a><\/p>\n<p>- Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1732430045275140415?ref_src=twsrc%5Etfw\">6 d\u00e9cembre 2023<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><br \/>\nEn outre, Google a tenu \u00e0 souligner l'efficacit\u00e9 de Gemini.<\/p>\n<p>Form\u00e9 sur les propres unit\u00e9s de traitement tensoriel (TPU) de Google, il est plus rapide et plus rentable que les mod\u00e8les pr\u00e9c\u00e9dents. Parall\u00e8lement \u00e0 Gemini, Google lance TPU v5p pour les centres de donn\u00e9es, afin d'am\u00e9liorer l'efficacit\u00e9 de l'ex\u00e9cution des mod\u00e8les \u00e0 grande \u00e9chelle.<\/p>\n<h2>Gemini est-il le tueur du ChatGPT ?<\/h2>\n<p>Google est manifestement optimiste \u00e0 l'\u00e9gard de Gemini. 
Earlier this year, a <a href=\"https:\/\/dailyai.com\/fr\/2023\/09\/googles-gemini-is-expected-to-outperform-gpt-4\/\">\"leak\" by Semi Analysis<\/a> suggested that Gemini could blow away the competition and take Google from a peripheral member of the generative AI industry to its leading player, displacing OpenAI.<\/p>\n<p>Beyond its multi-modality, Gemini is reportedly the first model to outperform human experts on the MMLU (massive multitask language understanding) benchmark, which tests world knowledge and problem-solving abilities across 57 subjects, such as math, physics, history, law, medicine, and ethics.<\/p>\n<p><iframe loading=\"lazy\" title=\"Math and physics with AI | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/K4pX1VAxaAI?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>According to Pichai, the launch of Gemini marks the start of a \"new era\" in AI, noting that Gemini will benefit from Google's vast catalog of products.<\/p>\n<p>Search integration is particularly interesting, as <a href=\"https:\/\/dailyai.com\/fr\/2023\/09\/google-turns-25-will-ai-herald-another-25-years-of-success\/\">Google dominates that space<\/a> and benefits from the most comprehensive search index in the world.<\/p>\n<p>Gemini's release puts Google firmly in the ongoing AI race, and people will go all out to test it against GPT-4.<\/p>\n<h2>Gemini benchmarks and analysis<\/h2>\n<p>In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#performance\">blog post<\/a>, Google published test results showing Gemini Ultra outperforming GPT-4 on the majority of benchmarks. It also boasts advanced coding capabilities, with remarkable performance on coding benchmarks such as HumanEval and Natural2Code.<\/p>\n<p><iframe loading=\"lazy\" title=\"Using AI to solve complex problems | Gemini\" width=\"1080\" height=\"608\" src=\"https:\/\/www.youtube.com\/embed\/LvGmVmHv69s?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Here are the benchmark figures. Bear in mind that these metrics use Gemini Ultra, the version that has not yet been released. Gemini cannot be called a ChatGPT killer until next year. 
And you can bet that OpenAI will be working to counter Gemini as soon as possible.<\/p>\n<h3>Text\/NLP benchmark performance<\/h3>\n<p><strong>General knowledge:<\/strong><\/p>\n<ul>\n<li>MMLU (Massive Multitask Language Understanding):\n<ul>\n<li>Gemini Ultra: 90.0% (32-shot chain-of-thought)<\/li>\n<li>GPT-4: 86.4% (5-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Reasoning:<\/strong><\/p>\n<ul>\n<li>Big-Bench Hard (a diverse set of challenging tasks requiring multi-step reasoning):\n<ul>\n<li>Gemini Ultra: 83.6% (3-shot)<\/li>\n<li>GPT-4: 83.1% (3-shot, API)<\/li>\n<\/ul>\n<\/li>\n<li>DROP (reading comprehension, F1 score):\n<ul>\n<li>Gemini Ultra: 82.4 (variable shots)<\/li>\n<li>GPT-4: 80.9 (3-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>HellaSwag (commonsense reasoning for everyday tasks):\n<ul>\n<li>Gemini Ultra: 87.8% (10-shot)<\/li>\n<li>GPT-4: 95.3% (10-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Math:<\/strong><\/p>\n<ul>\n<li>GSM8K (basic arithmetic manipulations, including grade-school math problems):\n<ul>\n<li>Gemini Ultra: 94.4% (32-shot majority voting)<\/li>\n<li>GPT-4: 92.0% (5-shot chain-of-thought, reported)<\/li>\n<\/ul>\n<\/li>\n<li>MATH (challenging math problems, including algebra, geometry, pre-calculus, and others):\n<ul>\n<li>Gemini Ultra: 53.2% (4-shot)<\/li>\n<li>GPT-4: 52.9% (4-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Code:<\/strong><\/p>\n<ul>\n<li>HumanEval (Python code generation):\n<ul>\n<li>Gemini Ultra: 74.4% (0-shot, internal test)<\/li>\n<li>GPT-4: 67.0% (0-shot, reported)<\/li>\n<\/ul>\n<\/li>\n<li>Natural2Code (Python code generation, a new held-out dataset similar to HumanEval, not leaked on the web):\n<ul>\n<li>Gemini Ultra: 74.9% (0-shot)<\/li>\n<li>GPT-4: 73.9% (0-shot, API)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3>Multi-modal benchmark performance<\/h3>\n<p>The multi-modal capabilities of Google's Gemini AI model are also benchmarked against OpenAI's GPT-4V model.<\/p>\n<p><strong>Image understanding and processing:<\/strong><\/p>\n<ul>\n<li><strong>MMMU (multi-discipline college-level reasoning problems):<\/strong>\n<ul>\n<li>Gemini Ultra: 59.4% (0-shot pass@1, pixel only)<\/li>\n<li>GPT-4V: 56.8% (0-shot pass@1)<\/li>\n<\/ul>\n<\/li>\n<li><strong>VQAv2 (natural image understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 77.8% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 77.2% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>TextVQA (OCR on natural images):<\/strong>\n<ul>\n<li>Gemini Ultra: 82.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 78.0% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>DocVQA (document understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 90.9% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 88.4% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>InfographicVQA (infographic understanding):<\/strong>\n<ul>\n<li>Gemini Ultra: 80.3% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 75.1% (0-shot, pixel only)<\/li>\n<\/ul>\n<\/li>\n<li><strong>MathVista (mathematical reasoning in visual contexts):<\/strong>\n<ul>\n<li>Gemini Ultra: 53.0% (0-shot, pixel only)<\/li>\n<li>GPT-4V: 49.9% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Video processing:<\/strong><\/p>\n<ul>\n<li><strong>VATEX (English video captioning, CIDEr score):<\/strong>\n<ul>\n<li>Gemini Ultra: 62.7 (4-shot)<\/li>\n<li>DeepMind Flamingo: 56.0 (4-shot)<\/li>\n<\/ul>\n<\/li>\n<li><strong>Perception Test MCQA (video question answering):<\/strong>\n<ul>\n<li>Gemini Ultra: 54.7% (0-shot)<\/li>\n<li>SeViLA: 46.3% (0-shot)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><strong>Audio processing:<\/strong><\/p>\n<ul>\n<li><strong>CoVoST 2 (automatic speech translation, 21 languages, BLEU score):<\/strong>\n<ul>\n<li>Gemini Pro: 40.1<\/li>\n<li>Whisper v2: 29.1<\/li>\n<\/ul>\n<\/li>\n<li><strong>FLEURS (automatic speech recognition, 62 languages, word error rate):<\/strong>\n<ul>\n<li>Gemini Pro: 7.6% (lower is better)<\/li>\n<li>Whisper v3: 17.6%<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h2>Google's ethical commitment<\/h2>\n<p class=\"whitespace-pre-wrap\">In a <a href=\"https:\/\/blog.google\/technology\/ai\/google-gemini-ai\/#scalable-efficient\">blog post<\/a>, Google emphasized its commitment to responsible and ethical AI practices.<\/p>\n<p class=\"whitespace-pre-wrap\">According to Google, Gemini has undergone more rigorous testing than any of Google's other AI models, evaluating factors such as bias, toxicity, cybersecurity threats, and the potential for misuse. Adversarial techniques helped detect problems early. External experts then stress-tested and \"red-teamed\" the models to identify further weak points.<\/p>\n<p class=\"whitespace-pre-wrap\">Google says responsibility and safety will remain priorities amid rapid AI progress. 
The company has helped establish industry groups to set best practices, including MLCommons and the Secure AI Framework (SAIF).<\/p>\n<p class=\"whitespace-pre-wrap\">Google has committed to continuing its collaboration with researchers, governments, and civil society organizations around the world.<\/p>\n<h2>Gemini Ultra release<\/h2>\n<p class=\"whitespace-pre-wrap\">For now, Google is limiting access to its most powerful model, Gemini Ultra, which will become available early next year.<\/p>\n<p class=\"whitespace-pre-wrap\">Before then, hand-picked developers and experts will experiment with Ultra and provide feedback. Its launch will coincide with that of a new cutting-edge AI model platform, or, as Google calls it, an \"experience\", dubbed Bard Advanced.<\/p>\n<h2>Gemini for developers<\/h2>\n<p>From December 13, developers and enterprise customers will have access to Gemini Pro via the Gemini API, available in Google AI Studio or Google Cloud Vertex AI.<\/p>\n<p><strong>Google AI Studio:<\/strong> A user-friendly web-based tool, Google AI Studio is designed to help developers prototype and launch applications with an API key. This free resource is ideal for those in the early stages of app development.<\/p>\n<p><strong>Vertex AI:<\/strong> A more comprehensive AI platform, Vertex AI offers fully managed services. 
It integrates seamlessly with Google Cloud while ensuring enterprise security, privacy, and compliance with data governance regulations.<\/p>\n<p>In addition to these platforms, Android developers will be able to access Gemini Nano for on-device tasks. It can be integrated via AICore, a new system capability set to debut in Android 14, starting with Pixel 8 Pro devices.<\/p>\n<h2>Google holds the cards, for now<\/h2>\n<p>OpenAI and Google differ in one important respect: Google builds many other tools and products in-house, including some used by billions of people every day.<\/p>\n<p>These include, of course, Android, Chrome, Gmail, Google Workspace, and Google Search.<\/p>\n<p>OpenAI, through its alliance with Microsoft, has similar opportunities with Copilot, but that has yet to truly take off.<\/p>\n<p>And if we are honest, Google probably has the edge in those product categories.<\/p>\n<p>Google has kept pace in the AI race, but you can be sure this will only fuel OpenAI's push toward GPT-5 and AGI.<\/p>","protected":false},"excerpt":{"rendered":"<p>Google has launched its Gemini family of multi-modal AI models, a dramatic move in an industry still reeling from events at OpenAI. Gemini is a family of multi-modal models capable of processing and understanding a mix of text, images, audio, and video. Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis have high expectations for Gemini. 
Google plans to integrate it across its products and services, including Search, Maps, and Chrome. We're excited to announce \ud835\uddda\ud835\uddf2\ud835\uddfa\ud835\uddf6\ud835\uddfb\ud835\uddf6: Google's largest and most capable AI model. Built to be natively multimodal, it can understand and operate across text, code, audio,<\/p>","protected":false},"author":2,"featured_media":2402,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[84],"tags":[125,147,383,102],"class_list":["post-8052","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-industry","tag-bard","tag-deepmind","tag-gemini","tag-google"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI<\/title>\n<meta name=\"description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Just a few days after reports suggested Google&#039;s secretive Gemini project was delayed, they&#039;ve unleashed it upon an AI industry still reeling from events at OpenAI.\" \/>\n<meta 
property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-06T17:03:36+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-03-28T00:40:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Sam Jeans\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sam Jeans\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"author\":{\"name\":\"Sam Jeans\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\"},\"headline\":\"Google unleashes its groundbreaking Gemini family of multi-modal 
models\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"},\"wordCount\":1356,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"keywords\":[\"Bard\",\"DeepMind\",\"Gemini\",\"Google\"],\"articleSection\":{\"1\":\"Industry\"},\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\",\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"datePublished\":\"2023-12-06T17:03:36+00:00\",\"dateModified\":\"2024-03-28T00:40:52+00:00\",\"description\":\"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at 
OpenAI.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/shutterstock_552493561.jpg\",\"width\":1000,\"height\":667,\"caption\":\"Google Med-PaLM 2\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/12\\\/google-launches-its-new-gemini-multi-modal-family-of-models\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Google unleashes its groundbreaking Gemini family of multi-modal models\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/711e81f945549438e8bbc579efdeb3c9\",\"name\":\"Sam Jeans\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g\",\"caption\":\"Sam Jeans\"},\"description\":\"Sam is a science and technology writer who has worked in various AI startups. 
When he\u2019s not writing, he can be found reading medical journals or digging through boxes of vinyl records.\",\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/sam-jeans-6746b9142\\\/\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/samjeans\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_locale":"fr_FR","og_type":"article","og_title":"Google unleashes its groundbreaking Gemini family of multi-modal models | DailyAI","og_description":"Just a few days after reports suggested Google's secretive Gemini project was delayed, they've unleashed it upon an AI industry still reeling from events at OpenAI.","og_url":"https:\/\/dailyai.com\/fr\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","og_site_name":"DailyAI","article_published_time":"2023-12-06T17:03:36+00:00","article_modified_time":"2024-03-28T00:40:52+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","type":"image\/jpeg"}],"author":"Sam Jeans","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Sam Jeans","Estimated reading time":"6 
minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"author":{"name":"Sam Jeans","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9"},"headline":"Google unleashes its groundbreaking Gemini family of multi-modal models","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"},"wordCount":1356,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","keywords":["Bard","DeepMind","Gemini","Google"],"articleSection":{"1":"Industry"},"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","url":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/","name":"Google lance sa famille r\u00e9volutionnaire de mod\u00e8les multimodaux Gemini | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","datePublished":"2023-12-06T17:03:36+00:00","dateModified":"2024-03-28T00:40:52+00:00","description":"Quelques jours seulement 
apr\u00e8s que des rapports aient sugg\u00e9r\u00e9 que le projet secret Gemini de Google avait \u00e9t\u00e9 retard\u00e9, il a \u00e9t\u00e9 d\u00e9voil\u00e9 \u00e0 une industrie de l'IA encore sous le choc des \u00e9v\u00e9nements de l'OpenAI.","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/shutterstock_552493561.jpg","width":1000,"height":667,"caption":"Google Med-PaLM 2"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/12\/google-launches-its-new-gemini-multi-modal-family-of-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Google unleashes its groundbreaking Gemini family of multi-modal models"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur 
l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/711e81f945549438e8bbc579efdeb3c9","name":"Sam Jeans","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a24a4a8f8e2a1a275b7491dc9c9f032c401eabf23c3206da4628dc84b6dac5c8?s=96&d=robohash&r=g","caption":"Sam Jeans"},"description":"Sam est un r\u00e9dacteur scientifique et technologique qui a travaill\u00e9 dans diverses start-ups sp\u00e9cialis\u00e9es dans l'IA. 
Lorsqu'il n'\u00e9crit pas, on peut le trouver en train de lire des revues m\u00e9dicales ou de fouiller dans des bo\u00eetes de disques vinyles.","sameAs":["https:\/\/www.linkedin.com\/in\/sam-jeans-6746b9142\/"],"url":"https:\/\/dailyai.com\/fr\/author\/samjeans\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8052","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=8052"}],"version-history":[{"count":16,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8052\/revisions"}],"predecessor-version":[{"id":8084,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/8052\/revisions\/8084"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/2402"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=8052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=8052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=8052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}