NVIDIA’s AI odyssey: from humble origins to a $2 trillion company

April 6, 2024

  • NVIDIA has risen from a gaming GPU manufacturer to a top-5 global company
  • Generative AI and the market for GPUs pushed its market cap to $2 trillion
  • NVIDIA plays a core role in the tech industry and this is set to continue

NVIDIA, a name synonymous with cutting-edge technology and innovation, was founded just over three decades ago, in 1993. 

From humble beginnings as a graphics chip designer focused on the gaming industry, NVIDIA has evolved into a global leader in AI and high-performance computing. 

NVIDIA was valued at ‘just’ around $100 billion in 2019. It’s now worth some $2 trillion, making it the third-largest company in the world by market cap, behind Microsoft and Apple and ahead of Saudi Aramco, Amazon, Google, and Meta Platforms. 

NVIDIA was founded by Jensen Huang, Chris Malachowsky, and Curtis Priem, who shared a vision of revolutionizing computer graphics.

In the early 1990s, the trio recognized the untapped potential of specialized graphics processors and set out to create a company that would transform the burgeoning gaming industry.

One of the company’s formative early moments came from an unlikely partnership with Sega.

In 1995, NVIDIA released its first product, the NV1, a PC multimedia accelerator that rendered 3D graphics using quadratic surfaces rather than the triangles that would later become the industry standard.

Sega, which was promoting its Saturn console at the time, partnered with NVIDIA, bundling Saturn ports such as Virtua Fighter with NV1-based cards, and later contracted the company to design the graphics chip for its next console.

That follow-on chip never made it into a shipping console, and the NV1 itself was not a commercial success in the PC market, but Sega’s backing helped keep the young company afloat. NVIDIA’s subsequent product, the RIVA 128 (NV3), released in 1997, was its first successful PC GPU and laid the groundwork for its future dominance in the graphics card market.

Another early breakthrough came in 1999 with the GeForce 256, marketed as the world’s first GPU. 

This laid the foundation for NVIDIA’s dominance in the gaming industry, and the GeForce line of GPUs quickly became a household name among gaming enthusiasts.

The NVIDIA GeForce brand of gaming GPUs.

As NVIDIA continued to push the boundaries of graphics technology throughout the early 2000s, releasing increasingly powerful GPUs that delivered immersive gaming experiences, the company’s R&D established it as a leader in parallel processing. 

That expertise would later prove instrumental in NVIDIA’s AI and high-performance computing success.

Beyond gaming: the rise of GPGPU and CUDA

While the gaming industry catalyzed NVIDIA’s early success, the company’s leadership recognized GPUs’ potential beyond graphics rendering alone. 

In 2006, NVIDIA introduced Compute Unified Device Architecture (CUDA), a programming model that allowed developers to harness the parallel processing power of GPUs for general-purpose computing (GPGPU).

CUDA simplified the process of programming GPUs, enabling developers to write code using familiar languages like C and C++. This opened up new opportunities in scientific research, oil and gas exploration, financial simulations, and medical imaging, and with them a myriad of new partnerships for NVIDIA. 

It also foreshadowed how NVIDIA would become fundamental to critical high-tech infrastructure, expanding its clientele beyond corporate buyers to governments and public institutions.
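To make the idea concrete, here is a minimal, illustrative sketch of the kind of program CUDA made possible: ordinary C++ code in which a small function, called a kernel, is executed in parallel by thousands of GPU threads. This is the canonical vector-addition example commonly used to introduce CUDA, not NVIDIA production code; the file name, array sizes, and launch parameters are arbitrary choices for illustration.

    // vector_add.cu: an illustrative CUDA C++ sketch.
    // Build with NVIDIA's nvcc compiler, e.g. nvcc vector_add.cu -o vector_add
    #include <cstdio>
    #include <cuda_runtime.h>

    // __global__ marks a kernel: a function launched on the GPU.
    // Each GPU thread computes one element of the output array.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Unified (managed) memory is visible to both the CPU and the GPU.
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Launch enough 256-thread blocks to cover all n elements.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
        cudaDeviceSynchronize();                // wait for the GPU to finish

        printf("c[0] = %.1f\n", c[0]);          // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

On a CPU, that addition would run as a sequential loop; on a GPU, it is spread across thousands of threads at once. That same parallelism is what later made NVIDIA’s chips so well suited to the matrix arithmetic at the heart of deep learning.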

Semiconductors: a notoriously tricky market to conquer

The semiconductor industry is notoriously complex and highly competitive, with only a handful of companies making their mark.

A key reason for the limited number of large semiconductor manufacturers is the extreme cost and complexity of the manufacturing process.

Semiconductor fabrication requires state-of-the-art facilities, known as foundries, which can cost billions of dollars to build and maintain.

These foundries must operate in extremely clean environments to prevent even the tiniest particles from interfering with the manufacturing process.

Additionally, the equipment used for semiconductor manufacture, such as lithography machines, is highly specialized and expensive, with some machines costing upwards of $100 million.

Together, these factors create massive entry barriers for new industry players, which has helped keep NVIDIA at the top of the pecking order despite competition from AMD, Intel, and Qualcomm.

The AI revolution

As the demand for AI and machine learning grew in the 2010s, NVIDIA was perfectly positioned to capitalize on this emerging trend. 

With parallel processing R&D under its belt, the company’s GPUs became the preferred hardware for training deep neural networks and powering AI workloads.

Recognizing AI’s immense potential, NVIDIA made strategic investments in the field, collaborating with leading research institutions and technology companies to advance AI technologies.

The company’s early support for OpenAI showed its ability to tap into cutting-edge industries and take risks to expand its customer base.

NVIDIA also developed specialized compute modules, such as the DGX series, specifically designed to accelerate the training of large language models (LLMs) and other AI architectures. These powerful systems quickly became the go-to hardware for AI researchers and developers worldwide.

And that’s a pivotally important point. When it comes to high-end AI hardware, there is NVIDIA, and then there are the others.

It’s an unusual setup, even by Big Tech standards. Google, Amazon, Meta, Apple, and Microsoft all end up competing with one another when you boil down their core business units; at the high end of AI hardware, NVIDIA has no real peer.

There are so few players in this market partly because the barriers to entry are formidable and partly because NVIDIA has raised them further through strategic investment.

NVIDIA’s cohesive ecosystem also gives developers certainty, because the company has proven so dependable. It is largely free from the controversies that dog Big Tech: the leadership tussles, the regulatory action, and the reliance on less tangible digital products like social media.

NVIDIA understands this, pairing hardware with software to tighten its dominance over the AI ecosystem and building a suite of tools and libraries that speed its customers’ path to market. 

NVIDIA’s role in generative AI

The rise of generative AI further solidified NVIDIA’s position as an AI powerhouse. This is the stage upon which NVIDIA truly established itself as one of the most influential companies in the world. 

Generative AI involves training models on vast datasets so they can create new content, such as text, images, and music, based on learned patterns and styles.

Recognizing its immense potential, NVIDIA introduced AI Foundations, a cloud-based platform that democratized access to state-of-the-art generative AI models. 

AI Foundations allows businesses and developers to harness the power of generative AI without the need for extensive in-house resources or expertise.

NVIDIA’s AI Foundations initially included pre-trained models, such as NeMo for natural language processing and Picasso for image and video generation.

Again, this shows NVIDIA’s commitment to building an ecosystem rather than standalone products, and it is where the company differentiates itself from other chipmakers, particularly its competitors in the semiconductor market.

NVIDIA is a one-stop shop for cutting-edge AI development, offering hardware, software, and strong collaborations with cloud providers such as Google, Microsoft, and Amazon.

NVIDIA’s GPUs

In the midst of the generative AI boom, NVIDIA has vastly expanded its AI chip portfolio, introducing several groundbreaking processors designed to push the limits of AI and computing technologies across various sectors. 

Let’s take a closer look at these chips and their contributions:

  1. A100 and H100: The H100 quickly became NVIDIA’s flagship for AI applications, delivering up to 6x the throughput of its predecessor, the A100, on certain AI workloads.
  2. HGX H200 GPU: Based on the Hopper architecture, the H200 introduces HBM3e memory, providing nearly double the capacity and 2.4 times more bandwidth than its predecessor, the A100. It’s designed to double the inference speed on Llama 2, a 70 billion-parameter LLM, compared to the H100. The H200 is compatible with various data center configurations and is scheduled for release in early-to-mid 2024.
  3. GH200 Grace Hopper Superchip: The GH200 combines the H200 GPU with an Arm-based NVIDIA Grace CPU. It’s aimed at supercomputing deployments tackling complex AI and HPC workloads. The GH200 is expected to be utilized in over 40 AI supercomputers worldwide, including significant projects like the JUPITER system in Germany, which is projected to be the world’s most powerful AI system upon its 2024 installation.
  4. Blackwell GPU: Unveiled at GTC 2024, the Blackwell GPU is NVIDIA’s next-generation processor, succeeding the H100 and H200 GPUs. Touted by NVIDIA as the world’s most powerful chip, Blackwell is designed specifically for the demands of generative AI. NVIDIA claims up to a 30x performance increase over the H100 for LLM inference workloads, with up to 25x better energy efficiency.

Blackwell looks set to be massive, with NVIDIA’s press release showcasing interest from a roster of Big Tech’s biggest names, including Microsoft’s Satya Nadella, Google’s Sundar Pichai, Google DeepMind’s Demis Hassabis, OpenAI’s Sam Altman, and numerous others.

NVIDIA’s Blackwell platform. Source: NVIDIA.

NVIDIA outsmarts the US government

NVIDIA’s success extends to its agile corporate strategy, governance, and response to market pressures. That includes sidestepping the US government’s efforts to curb high-end hardware exports to China, one of its biggest markets.

In August 2022, the US Commerce Department imposed licensing requirements on the export of certain high-end GPUs, including NVIDIA’s A100 and H100 chips, to China and Russia. This caused NVIDIA’s stock to temporarily dip by almost 8%.

The restrictions were designed to prevent these chips from being used in military applications, such as supercomputers and AI systems.

In October 2022, the US further tightened its export controls, introducing a sweeping set of rules that aimed to cut China off from certain semiconductor chips made anywhere in the world with US equipment. These rules also restricted the export of US-made tools and components essential for chip manufacturing.

With each iteration of these rules, NVIDIA has found ways to navigate them, altering its chips specifically so that they fall outside the scope of the export bans.

For example, in November 2023, NVIDIA announced three new products – the HGX H20, L20 PCIe, and L2 PCIe – derived from its existing data center GPU architectures but designed to comply with the export restrictions.

These chips are less powerful than the previously restricted A100 and H800 models but still offer useful performance for AI tasks.

As noted by SemiAnalysis, “Nvidia is perfectly straddling the line on peak performance and performance density with these new chips to get them through the new US regulations.”

According to the South China Morning Post, an estimated 20 to 25% of NVIDIA’s data center revenue is generated from Chinese buyers, despite the ever-stricter export bans.

Robotics with Project GR00T and Jetson Thor

NVIDIA supports cutting-edge and emerging technologies through its enterprise robotics development platforms. 

At the GTC 2024 conference, the company announced Project GR00T and Jetson Thor. GR00T aims to revolutionize humanoid robotics by providing a general-purpose foundation model that enables robots to learn from human demonstrations and rapidly acquire coordination, dexterity, and other skills. 

Jetson Thor, introduced alongside Project GR00T, is a new computing platform designed for these humanoid robots. It’s equipped with a next-generation GPU based on NVIDIA’s Blackwell architecture.

NVIDIA is also actively developing its Isaac Robotics Platform to support the development of sophisticated robots with natural asynchronous movement and dexterity. 

NVIDIA’s financial performance and market dominance

NVIDIA’s success in gaming, AI, and high-performance computing translated into remarkable financial performance. In 2023, the company’s revenue increased 61% from the previous year.

With this growth, the company’s market cap flew past the $1 trillion mark in mid-2023 and kept climbing until it hit the $2 trillion mark, where it sits today.

The data center segment, which includes AI and high-performance computing, accounted for $11.2 billion, or 42% of the total revenue, highlighting the growing importance of these areas for NVIDIA’s business.

Impressively, NVIDIA’s gaming segment continued to thrive, contributing $9.3 billion, or 35% of the total revenue, demonstrating its ability to maintain its leadership in the gaming industry while simultaneously expanding into new markets.

NVIDIA’s financial success reached new heights in the second quarter of fiscal year 2024, with revenue rocketing to $13.5 billion, an impressive 88% increase from the previous quarter. The data center segment was the primary driver, with record sales surpassing $10 billion. 

Will NVIDIA’s rise continue?

The tech industry, on the whole, is experiencing a tremendous couple of years, with Alphabet, Meta, and Microsoft reporting impressive results in 2023.

Alphabet, Amazon, NVIDIA, Apple, Meta, and Microsoft dominate the S&P 500 index, accounting for 9% of its sales, 16% of its net profits, and some 25% of its market cap.

NVIDIA’s stock price in 2023.

NVIDIA’s revenue last year was about $60 billion, a 126% increase from the prior year. Its high valuation and stock price are based on that revenue and its predicted continued growth. 

For comparison, Amazon has a lower market value than NVIDIA, yet made almost $575 billion in sales last year.

This disparity shows the steep path NVIDIA must navigate to book large enough profits to justify its $2 trillion valuation, especially as competition in the AI chip market intensifies.

But despite that, analysts have increased their price targets for NVIDIA, with UBS analyst Timothy Arcuri recently raising his to $1,100 from $800, citing the potential for NVIDIA to capture demand from global enterprises and governments with Blackwell.

However, some believe that NVIDIA’s stock chart shows signs of weakening. The valuation is certainly steep for a company that has yet to ship the vast majority of its A100 and H100 orders. 

Looking ahead, the future of big tech and NVIDIA’s growth remains uncertain. While the growth potential is immense, companies must also contend with the possibility of a cooling AI love affair, technological limitations, and regulatory hurdles. 

Traffic to ChatGPT, for example, has dropped off since May 2023, and some investors are slowing their investments in AI-related companies. There is some concern that generative AI has come on too fast, quickly reaching a peak it may struggle to surpass in the near future.

Moreover, brute-force computing is resource-heavy, both for NVIDIA and its customers. Summed across global AI workloads, these chips demand constant power on a scale that rivals the capacity of small nations.

And it’s not just power, but water too, which is pumped through data centers to the tune of billions of gallons a day. Natural resources required to build high-end AI hardware, such as rare earth metals, are also not limitless. 

NVIDIA is very much conscious of the industry’s energy challenges, which is why its new chips are considerably more energy efficient.

At GTC 2024, Huang said, “Accelerated computing has reached the tipping point. General-purpose computing has run out of steam. We need another way of doing computing so that we can continue to scale, so that we can continue to drive down the cost of computing, so that we can continue to consume more and more computing while being sustainable.”

At least Huang is realistic about these issues.

You can be sure that NVIDIA will channel more funds into unlocking energy-efficient AI growth that frees the industry from the shackles of brute-force accelerated computing.

If that’s achieved, NVIDIA’s rise may have no obvious bounds.


Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
