Nvidia software supercharges H100 inference performance

By Eugene van der Watt | September 13, 2023 | DailyAI

Nvidia has announced new open-source software that it says will supercharge inference performance on its H100 GPUs.

Much of the current demand for Nvidia's GPUs goes toward building computing power to train new models. But once trained, those models need to be put to work. Inference in AI refers to the ability of an LLM like ChatGPT to draw conclusions or make predictions from the data it was trained on and to generate output.

When you try to use ChatGPT and a message pops up saying its servers are under strain, it's because the computing hardware is struggling to keep up with inference demand.

Nvidia says its new software, TensorRT-LLM, can make its current hardware run much faster and more energy-efficiently.

The software includes optimized versions of popular models such as Meta's Llama 2, OpenAI's GPT-2 and GPT-3, Falcon, Mosaic MPT, and BLOOM.

It uses some clever techniques, such as more efficient batching of inference tasks and quantization, to boost performance.

LLMs typically use 16-bit floating-point values to represent weights and activations. Quantization reduces those values to 8-bit floating point during inference. Most models manage to retain their accuracy at this reduced precision.

Companies with compute infrastructure built on Nvidia's H100 GPUs can expect a huge inference performance boost from TensorRT-LLM without spending a cent.

Nvidia used the example of running a small open-source model, GPT-J 6B, to summarize articles from the CNN/Daily Mail dataset. Its older A100 chip serves as the baseline speed, compared first against the H100 without TensorRT-LLM and then with it.

[Image: Nvidia inference performance boost with TensorRT-LLM]
Source: Nvidia (https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/)

And here is a comparison when running Meta's Llama 2.

[Image: Nvidia inference boost with Llama 2]
Source: Nvidia (https://developer.nvidia.com/blog/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus/)

Nvidia says its tests showed that, depending on the model, an H100 running TensorRT-LLM uses between 3.2 and 5.6 times less energy than an A100 during inference.

If you're running AI models on H100 hardware, this means your inference throughput will roughly double and your energy bill will be much lower once you install this software.

TensorRT-LLM will also be available for the Grace Hopper Superchip platform (https://dailyai.com/es/2023/08/nvidias-updated-superchip-promises-huge-ai-advancements/), but the company has not released performance figures for the GH200 running the new software.

The new software wasn't ready yet when Nvidia submitted its GH200 Superchip to the industry-standard MLPerf AI benchmarks. Those results showed the GH200 performing up to 17% better than a single-chip H100 SXM.

If Nvidia achieves even a modest inference boost using TensorRT-LLM with the GH200, it will put the company well ahead of its closest rivals. Being an Nvidia sales rep must be the easiest job in the world right now.
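The quantization idea mentioned above can be illustrated with a toy sketch. This is not TensorRT-LLM's actual scheme (which uses FP8 floating point and per-layer calibration); the symmetric 8-bit integer mapping below, with illustrative function names, just shows how high-precision weights are collapsed into a smaller format plus one scale factor and then approximately recovered:

```python
def quantize_int8(values):
    """Toy symmetric 8-bit quantization (illustrative only; TensorRT-LLM's
    real scheme quantizes to FP8, not int8).

    Maps floats onto integers in [-127, 127] plus a single float scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float values from the 8-bit representation."""
    return [q * scale for q in quantized]


weights = [0.8, -1.27, 0.003, 0.51]       # pretend these are fp16 weights
q, scale = quantize_int8(weights)         # 8-bit ints + one scale factor
restored = dequantize(q, scale)           # close to the originals
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Each value is stored in half the bits (8 instead of 16), and the worst-case error is half a quantization step, which is why most models keep their accuracy: the per-weight error is tiny relative to the weights themselves.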