{"id":5405,"date":"2023-09-13T13:45:33","date_gmt":"2023-09-13T13:45:33","guid":{"rendered":"https:\/\/dailyai.com\/?p=5405"},"modified":"2023-09-13T13:50:49","modified_gmt":"2023-09-13T13:50:49","slug":"nvidia-software-supercharges-h100-inference-performance","status":"publish","type":"post","link":"https:\/\/dailyai.com\/pt\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","title":{"rendered":"O software da Nvidia aumenta o desempenho da infer\u00eancia H100"},"content":{"rendered":"<p><strong>A Nvidia anunciou um novo software de c\u00f3digo aberto que, segundo a empresa, ir\u00e1 melhorar o desempenho da infer\u00eancia nas suas GPUs H100.<\/strong><\/p>\n<p>Grande parte da procura atual de GPUs da Nvidia \u00e9 para criar capacidade de computa\u00e7\u00e3o para treinar novos modelos. Mas, uma vez treinados, esses modelos precisam de ser utilizados. A infer\u00eancia em IA refere-se \u00e0 capacidade de um LLM como o ChatGPT para tirar conclus\u00f5es ou fazer previs\u00f5es a partir dos dados em que foi treinado e gerar resultados.<\/p>\n<p>Quando se tenta utilizar o ChatGPT e aparece uma mensagem a dizer que os servidores est\u00e3o a ficar sobrecarregados, \u00e9 porque o hardware de computa\u00e7\u00e3o est\u00e1 a ter dificuldades em acompanhar a procura de infer\u00eancia.<\/p>\n<p>A Nvidia afirma que o seu novo software, TensorRT-LLM, pode fazer com que o seu hardware atual funcione muito mais rapidamente e seja tamb\u00e9m mais eficiente em termos energ\u00e9ticos.<\/p>\n<p>O software inclui vers\u00f5es optimizadas dos modelos mais populares, incluindo Meta Llama 2, OpenAI GPT-2 e GPT-3, Falcon, Mosaic MPT e BLOOM.<\/p>\n<p>Utiliza algumas t\u00e9cnicas inteligentes, como o agrupamento mais eficiente de tarefas de infer\u00eancia e t\u00e9cnicas de quantiza\u00e7\u00e3o, para conseguir o aumento do desempenho.<\/p>\n<p>As LLM utilizam geralmente valores de v\u00edrgula flutuante de 16 bits para representar os pesos e as 
activa\u00e7\u00f5es. A quantiza\u00e7\u00e3o pega nesses valores e reduz-os para valores de ponto flutuante de 8 bits durante a infer\u00eancia. A maioria dos modelos consegue manter a sua exatid\u00e3o com esta precis\u00e3o reduzida.<\/p>\n<p>As empresas que possuem infra-estruturas de computa\u00e7\u00e3o baseadas nas GPUs H100 da Nvidia podem esperar uma enorme melhoria no desempenho da infer\u00eancia sem terem de gastar um c\u00eantimo ao utilizarem o TensorRT-LLM.<\/p>\n<p>A Nvidia usou um exemplo de execu\u00e7\u00e3o de um pequeno modelo de c\u00f3digo aberto, GPT-J 6, para resumir artigos no conjunto de dados CNN\/Daily Mail. O seu chip A100 mais antigo \u00e9 utilizado como velocidade de base e depois comparado com o H100 sem e depois com o TensorRT-LLM.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5412 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg\" alt=\"Aumento do desempenho de infer\u00eancia da Nvidia com o TensorRT-LLM\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-20x16.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/8X-Inference-Performance-Nvidia-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: 
center;\">Fonte: <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>E aqui est\u00e1 uma compara\u00e7\u00e3o com o Meta's Llama 2<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-5413 size-full\" src=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg\" alt=\"Aumento da infer\u00eancia da Nvidia com Llama 2\" width=\"832\" height=\"666\" srcset=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2.jpg 832w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-300x240.jpg 300w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-768x615.jpg 768w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-370x296.jpg 370w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-800x640.jpg 800w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-20x16.jpg 20w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-740x592.jpg 740w, https:\/\/dailyai.com\/wp-content\/uploads\/2023\/09\/4.6X-Performance-Llama2-60x48.jpg 60w\" sizes=\"auto, (max-width: 832px) 100vw, 832px\" \/><\/p>\n<p style=\"text-align: center;\">Fonte: <a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\/\">Nvidia<\/a><\/p>\n<p>A Nvidia afirmou que os seus testes mostraram que, dependendo do modelo, um H100 com TensorRT-LLM utiliza entre 3,2 e 5,6 vezes menos energia do que um A100 durante a infer\u00eancia.<\/p>\n<p>Se estiver a executar modelos de IA em hardware H100, isto significa que n\u00e3o s\u00f3 o seu desempenho de infer\u00eancia vai quase duplicar, como tamb\u00e9m a sua fatura energ\u00e9tica vai ser muito menor depois de instalar este software.<\/p>\n<p>O TensorRT-LLM 
tamb\u00e9m ser\u00e1 disponibilizado para o <a href=\"https:\/\/dailyai.com\/pt\/2023\/08\/nvidias-updated-superchip-promises-huge-ai-advancements\/\">Grace Hopper Superchips<\/a> mas a empresa ainda n\u00e3o divulgou os valores de desempenho do GH200 com o seu novo software.<\/p>\n<p>O novo software ainda n\u00e3o estava pronto quando a Nvidia submeteu o seu superchip GH200 aos testes de benchmarking de desempenho MLPerf AI padr\u00e3o da ind\u00fastria. Os resultados mostraram que o GH200 teve um desempenho at\u00e9 17% melhor do que um H100 SXM de chip \u00fanico.<\/p>\n<p>Se a Nvidia conseguir mesmo um modesto aumento de desempenho de infer\u00eancia utilizando o TensorRT-LLM com o GH200, isso colocar\u00e1 a empresa muito \u00e0 frente dos seus rivais mais pr\u00f3ximos. Ser um representante de vendas da Nvidia deve ser o trabalho mais f\u00e1cil do mundo neste momento.<\/p>","protected":false},"excerpt":{"rendered":"<p>A Nvidia anunciou um novo software de fonte aberta que, segundo a empresa, ir\u00e1 aumentar o desempenho de infer\u00eancia nas suas GPUs H100. Grande parte da procura atual de GPUs da Nvidia consiste em criar capacidade de computa\u00e7\u00e3o para treinar novos modelos. Mas, uma vez treinados, esses modelos precisam de ser utilizados. A infer\u00eancia em IA refere-se \u00e0 capacidade de um LLM como o ChatGPT para tirar conclus\u00f5es ou fazer previs\u00f5es a partir dos dados em que foi treinado e gerar resultados. 
When you try to use ChatGPT and a message pops up saying the servers are overloaded, that's because the computing hardware is struggling to keep up with the workload.<\/p>","protected":false},"author":6,"featured_media":973,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[83],"tags":[99,106],"class_list":["post-5405","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-product","tag-ai-race","tag-nvidia"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Nvidia software supercharges H100 inference performance | DailyAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/dailyai.com\/pt\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:locale\" content=\"pt_PT\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Nvidia software supercharges H100 inference performance | DailyAI\" \/>\n<meta property=\"og:description\" content=\"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. 
When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with\" \/>\n<meta property=\"og:url\" content=\"https:\/\/dailyai.com\/pt\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/\" \/>\n<meta property=\"og:site_name\" content=\"DailyAI\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-13T13:45:33+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-13T13:50:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"667\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Eugene van der Watt\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:site\" content=\"@DailyAIOfficial\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"Eugene van der Watt\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tempo estimado de leitura\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"author\":{\"name\":\"Eugene van der Watt\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\"},\"headline\":\"Nvidia software supercharges H100 inference 
performance\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"},\"wordCount\":467,\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"keywords\":[\"AI race\",\"Nvidia\"],\"articleSection\":[\"Product\"],\"inLanguage\":\"pt-PT\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\",\"name\":\"Nvidia software supercharges H100 inference performance | 
DailyAI\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"datePublished\":\"2023-09-13T13:45:33+00:00\",\"dateModified\":\"2023-09-13T13:50:49+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\"},\"inLanguage\":\"pt-PT\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#primaryimage\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/05\\\/shutterstock_1742705531.jpg\",\"width\":1000,\"height\":667,\"caption\":\"nvidia stock\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/2023\\\/09\\\/nvidia-software-supercharges-h100-inference-performance\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/dailyai.com\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Nvidia software supercharges H100 inference performance\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#website\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"name\":\"DailyAI\",\"description\":\"Your Daily Dose of AI 
News\",\"publisher\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/dailyai.com\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"pt-PT\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#organization\",\"name\":\"DailyAI\",\"url\":\"https:\\\/\\\/dailyai.com\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/06\\\/Daily-Ai_TL_colour.png\",\"width\":4501,\"height\":934,\"caption\":\"DailyAI\"},\"image\":{\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/x.com\\\/DailyAIOfficial\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/dailyaiofficial\\\/\",\"https:\\\/\\\/www.youtube.com\\\/@DailyAIOfficial\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/#\\\/schema\\\/person\\\/7ce525c6d0c79838b7cc7cde96993cfa\",\"name\":\"Eugene van der Watt\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pt-PT\",\"@id\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"url\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"contentUrl\":\"https:\\\/\\\/dailyai.com\\\/wp-content\\\/uploads\\\/2023\\\/07\\\/Eugine_Profile_Picture-96x96.png\",\"caption\":\"Eugene van der Watt\"},\"description\":\"Eugene comes from an electronic engineering background and loves all things tech. 
When he takes a break from consuming AI news you'll find him at the snooker table.\",\"sameAs\":[\"www.linkedin.com\\\/in\\\/eugene-van-der-watt-16828119\"],\"url\":\"https:\\\/\\\/dailyai.com\\\/pt\\\/author\\\/eugene\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Software da Nvidia aumenta o desempenho da infer\u00eancia H100 | DailyAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/dailyai.com\/pt\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_locale":"pt_PT","og_type":"article","og_title":"Nvidia software supercharges H100 inference performance | DailyAI","og_description":"Nvidia announced new open source software that it says will supercharge inference performance on its H100 GPUs. A lot of the current demand for Nvidia\u2019s GPUs is to build computing power for training new models. But once trained, those models need to be used. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from data it\u2019s been trained on and generate output. 
When you try to use ChatGPT and a message pops up to say its servers are taking strain, it\u2019s because the computing hardware is struggling to keep up with","og_url":"https:\/\/dailyai.com\/pt\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","og_site_name":"DailyAI","article_published_time":"2023-09-13T13:45:33+00:00","article_modified_time":"2023-09-13T13:50:49+00:00","og_image":[{"width":1000,"height":667,"url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","type":"image\/jpeg"}],"author":"Eugene van der Watt","twitter_card":"summary_large_image","twitter_creator":"@DailyAIOfficial","twitter_site":"@DailyAIOfficial","twitter_misc":{"Escrito por":"Eugene van der Watt","Tempo estimado de leitura":"3 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#article","isPartOf":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"author":{"name":"Eugene van der Watt","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa"},"headline":"Nvidia software supercharges H100 inference performance","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","mainEntityOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"},"wordCount":467,"publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","keywords":["AI 
race","Nvidia"],"articleSection":["Product"],"inLanguage":"pt-PT"},{"@type":"WebPage","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","url":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/","name":"Software da Nvidia aumenta o desempenho da infer\u00eancia H100 | DailyAI","isPartOf":{"@id":"https:\/\/dailyai.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"image":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage"},"thumbnailUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","datePublished":"2023-09-13T13:45:33+00:00","dateModified":"2023-09-13T13:50:49+00:00","breadcrumb":{"@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/"]}]},{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#primaryimage","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/05\/shutterstock_1742705531.jpg","width":1000,"height":667,"caption":"nvidia stock"},{"@type":"BreadcrumbList","@id":"https:\/\/dailyai.com\/2023\/09\/nvidia-software-supercharges-h100-inference-performance\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/dailyai.com\/"},{"@type":"ListItem","position":2,"name":"Nvidia software supercharges H100 inference 
performance"}]},{"@type":"WebSite","@id":"https:\/\/dailyai.com\/#website","url":"https:\/\/dailyai.com\/","name":"DailyAI","description":"A sua dose di\u00e1ria de not\u00edcias sobre IA","publisher":{"@id":"https:\/\/dailyai.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/dailyai.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"pt-PT"},{"@type":"Organization","@id":"https:\/\/dailyai.com\/#organization","name":"DailyAI","url":"https:\/\/dailyai.com\/","logo":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/06\/Daily-Ai_TL_colour.png","width":4501,"height":934,"caption":"DailyAI"},"image":{"@id":"https:\/\/dailyai.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/DailyAIOfficial","https:\/\/www.linkedin.com\/company\/dailyaiofficial\/","https:\/\/www.youtube.com\/@DailyAIOfficial"]},{"@type":"Person","@id":"https:\/\/dailyai.com\/#\/schema\/person\/7ce525c6d0c79838b7cc7cde96993cfa","name":"Eugene van der Watt","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","url":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","contentUrl":"https:\/\/dailyai.com\/wp-content\/uploads\/2023\/07\/Eugine_Profile_Picture-96x96.png","caption":"Eugene van der Watt"},"description":"Eugene vem de uma forma\u00e7\u00e3o em engenharia eletr\u00f3nica e adora tudo o que \u00e9 tecnologia. 
Quando faz uma pausa no consumo de not\u00edcias sobre IA, pode encontr\u00e1-lo \u00e0 mesa de snooker.","sameAs":["www.linkedin.com\/in\/eugene-van-der-watt-16828119"],"url":"https:\/\/dailyai.com\/pt\/author\/eugene\/"}]}},"_links":{"self":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/5405","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/comments?post=5405"}],"version-history":[{"count":8,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/5405\/revisions"}],"predecessor-version":[{"id":5417,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/posts\/5405\/revisions\/5417"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media\/973"}],"wp:attachment":[{"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/media?parent=5405"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/categories?post=5405"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dailyai.com\/pt\/wp-json\/wp\/v2\/tags?post=5405"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}