Salesforce challenges trends in AI with the tiny yet mighty xLAM-1B and 7B models

July 7, 2024

  • Salesforce unveiled two compact AI models designed for function-calling
  • They come in 1-billion and 7-billion-parameter versions, outperforming much larger models
  • The 7-billion-parameter model beats GPT-4, which has an estimated trillion-plus parameters, on function-calling tasks

Salesforce, an enterprise software company, has unveiled two compact AI models that challenge the “bigger is better” paradigm in AI. 

Despite their compact size, the 1-billion and 7-billion-parameter xLAM models outperform many larger models on function-calling tasks.

These tasks involve an AI system interpreting and translating a natural language request into specific function calls or API requests. 

For example, if you ask an AI system to “find flights to New York for next weekend under $500,” the model needs to understand this request, identify the relevant functions (e.g., search_flights, filter_by_price), and execute them with the correct parameters.
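To make that concrete, here is a minimal sketch of what a function-calling model's structured output might look like for that request. The tool schema, function names, and argument names below are hypothetical illustrations, not taken from the xLAM paper:

```python
import json

# Hypothetical tool schema supplied to the model alongside the user's request.
tools = [
    {"name": "search_flights",
     "parameters": {"destination": "string", "date_range": "string"}},
    {"name": "filter_by_price",
     "parameters": {"max_price": "number"}},
]

user_request = "Find flights to New York for next weekend under $500"

# A function-calling model does not answer in prose; it emits structured
# calls like these, which the host application then executes against real APIs.
model_output = [
    {"name": "search_flights",
     "arguments": {"destination": "New York", "date_range": "next weekend"}},
    {"name": "filter_by_price",
     "arguments": {"max_price": 500}},
]

print(json.dumps(model_output, indent=2))
```

The model's job ends at producing those calls; executing them and returning results to the user is handled by the surrounding application.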

“We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models,” the researchers write in their paper. 

“Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku.”

The Berkeley Function-Calling Benchmark, referenced in the study, is an evaluation framework designed to assess the function-calling capabilities of AI models. 
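In broad strokes, benchmarks of this kind score a model by comparing the function call it generates against a reference answer. A simplified, hypothetical version of that check (the actual leaderboard uses more rigorous AST-based and execution-based matching) might look like this:

```python
def calls_match(predicted: dict, expected: dict) -> bool:
    """Return True if the predicted call names the right function
    and supplies the expected arguments (order-insensitive)."""
    return (
        predicted.get("name") == expected["name"]
        and predicted.get("arguments") == expected["arguments"]
    )

expected = {"name": "search_flights",
            "arguments": {"destination": "New York", "date_range": "next weekend"}}
predicted = {"name": "search_flights",
             "arguments": {"date_range": "next weekend", "destination": "New York"}}

print(calls_match(predicted, expected))  # True: dict comparison ignores key order
```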

Key stats from the study include:

  1. The xLAM-7B model (7 billion parameters) ranked 6th on the Berkeley Function-Calling Leaderboard, outperforming GPT-4 and Gemini-1.5-Pro.
  2. The smaller xLAM-1B model outperformed larger models like Claude-3 Haiku and GPT-3.5-Turbo, demonstrating exceptional efficiency.

What makes this achievement particularly impressive is the model’s size compared to its competitors:

  • xLAM-1B: 1 billion parameters
  • xLAM-7B: 7 billion parameters
  • GPT-3: 175 billion parameters
  • GPT-4: Estimated 1.7 trillion parameters
  • Claude-3 Opus: Undisclosed, but likely hundreds of billions
  • Gemini Ultra: Undisclosed, estimated similar to GPT-4

This shows that efficient design and high-quality training data can be more important than sheer size. 

To train the model specifically for function-calling, the Salesforce team developed APIGen, a pipeline for creating diverse, high-quality datasets for function-calling tasks. 

APIGen works by sampling from a vast library of 3,673 executable APIs across 21 categories, creating realistic scenarios for the AI to learn from.
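As a loose illustration of that idea, here is a hypothetical sketch of sampling an API and keeping only generated query/answer pairs that pass verification. The toy API library, helper functions, and checks are invented for illustration; they are not Salesforce's actual pipeline, which the paper describes as applying staged format, execution, and semantic checks:

```python
import random

# Toy stand-ins for APIGen's library of executable APIs (the real pipeline
# samples from 3,673 APIs across 21 categories; these two are hypothetical).
API_LIBRARY = {
    "weather": [{"name": "get_forecast", "parameters": ["city", "days"]}],
    "finance": [{"name": "get_stock_price", "parameters": ["ticker"]}],
}

def draft_example(api):
    # Stand-in for an LLM that writes a natural-language query plus the
    # function call that answers it.
    if api["name"] == "get_forecast":
        return ("What's the weather in Paris for the next 3 days?",
                {"name": "get_forecast", "arguments": {"city": "Paris", "days": 3}})
    return ("What is Apple trading at right now?",
            {"name": "get_stock_price", "arguments": {"ticker": "AAPL"}})

def passes_checks(api, query, call):
    # Stand-in for APIGen's staged verification. Here we only confirm the
    # call targets the sampled API and supplies every declared parameter.
    return (call["name"] == api["name"]
            and set(call["arguments"]) == set(api["parameters"]))

def generate_training_example():
    # Sample an API, draft a query/call pair, and keep it only if it verifies.
    category = random.choice(list(API_LIBRARY))
    api = random.choice(API_LIBRARY[category])
    query, call = draft_example(api)
    return {"query": query, "answer": call} if passes_checks(api, query, call) else None

print(generate_training_example())
```

Filtering out unverifiable examples in this way is what lets a comparatively small model learn from consistently high-quality training data.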

Potential applications of xLAM-1B’s capabilities include:

  • Enhanced customer relationship management (CRM) systems, which Salesforce develops
  • More capable digital assistants
  • Improved interfaces for smart home devices
  • Efficient AI processing for autonomous vehicles
  • Real-time language translation on edge devices

These xLAM models challenge researchers to rethink their AI architecture and training approaches by demonstrating that smaller, more efficient models can compete with larger ones.

As Salesforce CEO Marc Benioff explained, the “Tiny Giant” (his nickname for xLAM-1B) highlights the potential for “on-device agentic AI,” making it a natural fit for smartphones and IoT devices.

The future of AI will not just involve ever-larger models but smarter, more efficient ones that can bring advanced features to a broader range of devices and applications.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
