Cloud data platform Snowflake and Nvidia have teamed up to launch a service that lets Snowflake users build bespoke AI applications with their own data.
The partnership was announced at the Snowflake Summit 2023, where Nvidia CEO Jensen Huang was presented with a snowboard as a gift by Snowflake CEO Frank Slootman.
The collaboration involves integrating Nvidia’s NeMo platform for large language models (LLMs) into the Snowflake Data Cloud. This will enable businesses to use the data stored in Snowflake accounts to develop LLMs.
Here’s how it works:
- Many businesses use Snowflake to store internal data already.
- Businesses use Nvidia’s NeMo framework to build AIs with their data.
- They train and deploy those applications with Nvidia’s GPUs.
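The three steps above can be sketched as a simple pipeline. The sketch below is purely illustrative: every function and name in it (`fetch_training_data`, `fine_tune_llm`, the table and model names) is a hypothetical placeholder, not the actual Snowflake connector or NeMo API.

```python
# Hypothetical sketch of the workflow: query data already stored in
# Snowflake, fine-tune an LLM on it with NeMo, then deploy on Nvidia GPUs.
# All names here are invented placeholders, not real Snowflake/NeMo calls.

def fetch_training_data(table: str) -> list[str]:
    # Step 1 stand-in: a SQL query against a table in the Snowflake Data
    # Cloud would return training records here.
    return [f"{table}: record {i}" for i in range(100)]

def fine_tune_llm(records: list[str], base_model: str) -> dict:
    # Steps 2-3 stand-in: a NeMo fine-tuning job running on Nvidia GPUs,
    # followed by deployment of the customized model.
    return {
        "base_model": base_model,
        "examples_seen": len(records),
        "status": "deployed",
    }

data = fetch_training_data("customer_support_tickets")
model = fine_tune_llm(data, base_model="enterprise-llm")
print(model["examples_seen"], model["status"])
```

The point of the architecture is that step 1 happens inside the Snowflake environment, so the training data never leaves the platform where it already lives.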
Microsoft Azure launched a similar product earlier this year, a sign that big tech companies are racing to offer enterprises more options for building AI models on their own data.
Nvidia’s NeMo, a cloud-native enterprise platform, enables users to build, customize, and deploy generative AI models with billions of parameters. The plan is for Snowflake to host and run NeMo within the Snowflake Data Cloud, paving the way for customers to develop and roll out custom LLMs for generative AI applications.
Manuvir Das, Nvidia’s head of enterprise computing, explained how this gives Snowflake customers access to Nvidia’s cutting-edge AI infrastructure.
“They can work with their proprietary data to build… leading-edge generative AI applications without moving them out of the secure Data Cloud environment. This will reduce costs and latency while maintaining data security,” he said.
Huang highlighted the role of enterprise data in crafting generative AI applications: “Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their valuable data into custom generative AI models to power groundbreaking new applications – right from the cloud platform that they use to run their businesses.”
This setup lets enterprises work on data already stored in Snowflake – there is no need to move it anywhere else. Not only is that much faster than building AI development pipelines from scratch, but it’s also more secure.
Businesses are looking to leverage their data for AI
Businesses want to take full advantage of AI, but they often lack the technology to do it – even if they possess data science and engineering expertise.
Some businesses are turning to open-source models that they build and deploy with their own resources, but Nvidia’s solution scales to big-data workloads involving terabytes or even petabytes of data.
Das posits that businesses deploying custom generative AI models trained on their own proprietary data will be a step ahead of those using vendor-specific models. It’s the best of both worlds, where proprietary data comes together with Nvidia’s cutting-edge AI tools within the Snowflake ecosystem.
“More than 8,000 Snowflake customers store exabytes of data in Snowflake Data Cloud. As enterprises look to add generative AI capabilities to their applications and services, this data is fuel for creating custom generative AI models,” Das explained.
Nvidia provides accelerated computing and AI software as part of this partnership. The two companies are currently co-engineering Nvidia’s AI engine with Snowflake’s Data Cloud.
In Das’ words, “Generative AI is a multi-trillion-dollar opportunity and has the potential to transform every industry as enterprises begin to build and deploy custom models using their valuable data.”
Nvidia’s tremendous surge in value this year has fueled a wave of new software and hardware projects. For example, its new H100 GPUs set new benchmarks in AI training performance, enabling a new generation of bigger, better models.