{"id":5405,"date":"2023-09-13T13:45:33","date_gmt":"2023-09-13T13:45:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=5405"},"modified":"2023-09-13T13:50:49","modified_gmt":"2023-09-13T13:50:49","slug":"nvidia-software-supercharges-h100-inference-performance","status":"publish","type":"post","link":"https:\/\/dailyai.com\/fr\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","title":{"rendered":"Le logiciel Nvidia augmente les performances de l'inf\u00e9rence H100"},"content":{"rendered":"<p><strong>Nvidia a annonc\u00e9 un nouveau logiciel open source qui, selon elle, augmentera les performances d'inf\u00e9rence sur ses GPU H100.<\/strong><\/p>\n<p>Une grande partie de la demande actuelle pour les GPU de Nvidia concerne la puissance de calcul n\u00e9cessaire \u00e0 l'apprentissage de nouveaux mod\u00e8les. Mais une fois form\u00e9s, ces mod\u00e8les doivent \u00eatre utilis\u00e9s. L'inf\u00e9rence en IA fait r\u00e9f\u00e9rence \u00e0 la capacit\u00e9 d'un LLM comme ChatGPT \u00e0 tirer des conclusions ou \u00e0 faire des pr\u00e9dictions \u00e0 partir de donn\u00e9es sur lesquelles il a \u00e9t\u00e9 entra\u00een\u00e9 et \u00e0 g\u00e9n\u00e9rer des r\u00e9sultats.<\/p>\n<p>Lorsque vous essayez d'utiliser ChatGPT et qu'un message s'affiche pour vous indiquer que ses serveurs sont \u00e0 bout de souffle, c'est parce que le mat\u00e9riel informatique a du mal \u00e0 r\u00e9pondre \u00e0 la demande d'inf\u00e9rence.<\/p>\n<p>Nvidia affirme que son nouveau logiciel, TensorRT-LLM, peut faire fonctionner le mat\u00e9riel existant beaucoup plus rapidement et de mani\u00e8re plus \u00e9conome en \u00e9nergie.<\/p>\n<p>Le logiciel comprend des versions optimis\u00e9es des mod\u00e8les les plus populaires, notamment Meta Llama 2, OpenAI GPT-2 et GPT-3, Falcon, Mosaic MPT et BLOOM.<\/p>\n<p>Il utilise des techniques astucieuses telles qu'un regroupement plus efficace des t\u00e2ches d'inf\u00e9rence et des techniques de quantification pour 
am\u00e9liorer les performances.<\/p>\n<p>Les LLM utilisent g\u00e9n\u00e9ralement des valeurs \u00e0 virgule flottante de 16 bits pour repr\u00e9senter les poids et les activations. La quantification prend ces valeurs et les r\u00e9duit \u00e0 des valeurs \u00e0 virgule flottante de 8 bits pendant l'inf\u00e9rence. La plupart des mod\u00e8les parviennent \u00e0 conserver leur exactitude avec cette pr\u00e9cision r\u00e9duite.<\/p>\n<p>Les entreprises qui disposent d'une infrastructure informatique bas\u00e9e sur les GPU H100 de Nvidia peuvent s'attendre \u00e0 une am\u00e9lioration consid\u00e9rable des performances d'inf\u00e9rence sans avoir \u00e0 d\u00e9penser un centime en utilisant TensorRT-LLM.<\/p>\n<p>Nvidia a utilis\u00e9 un exemple d'ex\u00e9cution d'un petit mod\u00e8le open source, GPT-J 6, pour r\u00e9sumer des articles dans l'ensemble de donn\u00e9es CNN\/Daily Mail. Son ancienne puce A100 est utilis\u00e9e comme vitesse de r\u00e9f\u00e9rence, puis compar\u00e9e \u00e0 la H100 sans puis avec TensorRT-LLM.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5412 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg\" alt=\"Augmentation des performances d&#039;inf\u00e9rence de Nvidia avec TensorRT-LLM\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-20x16.jpg 20w, 
https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: center;\">Source : <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>Et voici une comparaison avec le Llama 2 de Meta<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5413 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg\" alt=\"L&#039;inf\u00e9rence de Nvidia boost\u00e9e par Llama 2\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-20x16.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: center;\">Source : <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>Nvidia a d\u00e9clar\u00e9 que ses tests ont montr\u00e9 que, selon le mod\u00e8le, un H100 ex\u00e9cutant TensorRT-LLM utilise entre 3,2 et 5,6 fois moins d'\u00e9nergie qu'un A100 
pendant l'inf\u00e9rence.<\/p>\n<p>Si vous ex\u00e9cutez des mod\u00e8les d'IA sur du mat\u00e9riel H100, cela signifie que non seulement vos performances en mati\u00e8re d'inf\u00e9rence vont presque doubler, mais aussi que votre facture d'\u00e9nergie sera beaucoup moins \u00e9lev\u00e9e une fois que vous aurez install\u00e9 ce logiciel.<\/p>\n<p>TensorRT-LLM sera \u00e9galement disponible pour la plate-forme Nvidia <a href=\"https:\/\/dailyai.com\/fr\/2023\/08\/nvidias-updated-superchip-promises-huge-ai-advancements\/\">Les superpuces de Grace Hopper<\/a> mais la soci\u00e9t\u00e9 n'a pas publi\u00e9 de chiffres sur les performances du GH200 \u00e9quip\u00e9 de son nouveau logiciel.<\/p>\n<p>Le nouveau logiciel n'\u00e9tait pas encore pr\u00eat lorsque Nvidia a soumis son Superchip GH200 aux tests d'\u00e9valuation des performances de l'IA MLPerf, un standard de l'industrie. Les r\u00e9sultats ont montr\u00e9 que la GH200 \u00e9tait jusqu'\u00e0 17% plus performante qu'une puce H100 SXM.<\/p>\n<p>Si Nvidia parvient \u00e0 augmenter ne serait-ce que modestement les performances d'inf\u00e9rence en utilisant TensorRT-LLM avec le GH200, l'entreprise se placera loin devant ses plus proches rivaux. \u00catre repr\u00e9sentant commercial pour Nvidia doit \u00eatre le travail le plus facile au monde \u00e0 l'heure actuelle.<\/p>","protected":false},"excerpt":{"rendered":"<p>Nvidia a annonc\u00e9 un nouveau logiciel open source qui, selon elle, permettra d'augmenter les performances d'inf\u00e9rence sur ses GPU H100. Une grande partie de la demande actuelle pour les GPU de Nvidia concerne la puissance de calcul n\u00e9cessaire \u00e0 l'apprentissage de nouveaux mod\u00e8les. Mais une fois form\u00e9s, ces mod\u00e8les doivent \u00eatre utilis\u00e9s. 
L'inf\u00e9rence en IA fait r\u00e9f\u00e9rence \u00e0 la capacit\u00e9 d'un LLM comme ChatGPT \u00e0 tirer des conclusions ou \u00e0 faire des pr\u00e9dictions \u00e0 partir des donn\u00e9es sur lesquelles il a \u00e9t\u00e9 entra\u00een\u00e9 et \u00e0 g\u00e9n\u00e9rer des r\u00e9sultats. Lorsque vous essayez d'utiliser ChatGPT et qu'un message s'affiche pour vous indiquer que ses serveurs sont \u00e0 bout de souffle, c'est parce que le mat\u00e9riel informatique a du mal \u00e0 faire face \u00e0 la demande.<\/p>","protected":false},"author":6,"featured_media":973,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[83],"tags":[99,106],"class_list":["post-5405","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-product","tag-ai-race","tag-nvidia"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Nvidia software supercharges H100 inference performance | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/fr\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Nvidia software supercharges H100 inference performance | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. 
Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/fr\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-13T13:45:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-13T13:50:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"author\":{\"name\":\"Eugene van der 
Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Nvidia software supercharges H100 inference performance\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"wordCount\":467,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"keywords\":[\"AI race\",\"Nvidia\"],\"articleSection\":[\"Product\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"name\":\"Nvidia software supercharges H100 inference performance | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"width\":1000,\"height\":667,\"caption\":\"nvidia stock\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Nvidia software supercharges H100 inference performance\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/fr\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Nvidia software supercharges H100 inference performance | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/fr\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_locale":"fr_FR","og_type":"article","og_title":"Nvidia software supercharges H100 inference performance | DailyAI","og_description":"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. 
When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with","og_url":"https:\/\/dailyai.com\/fr\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_site_name":"DailyAI","article_published_time":"2023-09-13T13:45:33+00:00","article_modified_time":"2023-09-13T13:50:49+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Written by":"Eugene van der Watt","Estimated reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Nvidia software supercharges H100 inference performance","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"wordCount":467,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","keywords":["AI 
race","Nvidia"],"articleSection":["Product"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","url":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","name":"Le logiciel de Nvidia augmente les performances de l'inf\u00e9rence H100 | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","width":1000,"height":667,"caption":"nvidia stock"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Nvidia software supercharges H100 inference 
performance"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"Votre dose quotidienne de nouvelles sur l'IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eug\u00e8ne van der Watt","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene a une formation d'ing\u00e9nieur en \u00e9lectronique et adore tout ce qui touche \u00e0 la technologie. 
Lorsqu'il fait une pause dans sa consommation d'informations sur l'IA, vous le trouverez \u00e0 la table de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/fr\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/5405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/comments?post=5405"}],"version-history":[{"count":8,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/5405\/revisions"}],"predecessor-version":[{"id":5417,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/posts\/5405\/revisions\/5417"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media\/973"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/media?parent=5405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/categories?post=5405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/fr\/wp-json\/wp\/v2\/tags?post=5405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
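The article's quantization paragraph describes reducing 16-bit floating point weights to 8-bit floating point values during inference. As a rough illustration only, the idea can be simulated in plain Python. This is a hand-rolled sketch under stated assumptions (per-tensor scaling into the FP8 E4M3 dynamic range, 3 mantissa bits); it is not TensorRT-LLM's actual implementation.

```python
import math
import random

FP8_E4M3_MAX = 448.0  # largest finite value in the FP8 E4M3 format
MANTISSA_BITS = 3     # E4M3 keeps 3 explicit mantissa bits


def quantize(weights):
    """Simulate per-tensor FP8 (E4M3) quantization of float weights.

    Scales the tensor so its largest magnitude fills the FP8 range,
    then rounds each value to 3 bits of mantissa. Returns the
    quantized values and the scale needed to recover the originals.
    """
    scale = FP8_E4M3_MAX / max(abs(w) for w in weights)
    quantized = []
    for w in weights:
        x = w * scale
        if x == 0.0:
            quantized.append(0.0)
            continue
        exp = math.floor(math.log2(abs(x)))
        # Spacing between representable values at this exponent.
        step = 2.0 ** (exp - MANTISSA_BITS)
        quantized.append(round(x / step) * step)
    return quantized, scale


def dequantize(quantized, scale):
    return [q / scale for q in quantized]


random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
w_q, scale = quantize(weights)
restored = dequantize(w_q, scale)
# Per-value relative error is bounded by 2**-(MANTISSA_BITS + 1), i.e. 6.25%.
max_rel_err = max(abs(r - w) / abs(w) for r, w in zip(restored, weights))
```

The measured maximum relative error stays around 6% (half a unit in the last of the 3 mantissa bits), which is consistent with the article's point that most models retain their accuracy at this reduced precision.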