OpenAI partners with Scale to fine-tune GPT-3.5 Turbo

August 28, 2023


OpenAI has partnered with Scale to provide customization of its GPT-3.5 Turbo model, which was recently made available for fine-tuning.

Scale helps companies create custom AI products with a strong focus on machine learning, data set creation, and labeling. 

The performance that a fine-tuned version of GPT-3.5 Turbo offers makes it an appealing prospect for companies looking to build their own AI platforms. But the fine-tuning process can be challenging. 

This is why OpenAI says it’s partnering with Scale “given their experience helping enterprises securely and effectively leverage data for AI.”

OpenAI COO Brad Lightcap said, “Scale extends our ability to bring the power of fine-tuning to more companies, building on their enterprise AI experience to help businesses better apply OpenAI models for their unique needs.”

Scale illustrated the opportunity that fine-tuning GPT-3.5 presents by pointing to the results it achieved with Brex, one of its financial services clients.

In its blog post, Scale said, “By using the GPT-3.5 fine-tuning API on Brex data annotated with Scale’s Data Engine, we saw that the fine-tuned GPT-3.5 model outperformed the stock GPT-3.5 turbo model 66% of the time.”

The CEO of Brex said that using a fine-tuned GPT-3.5 “has been a game changer for us, enabling us to deliver high-quality AI experiences, comparable to GPT-4, with much lower cost and lower latency.”
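For readers who haven’t tried it, the workflow Scale’s quote refers to is fairly compact on OpenAI’s side. Here’s a minimal sketch using the OpenAI Python SDK; the file name and training data are placeholders rather than Brex’s or Scale’s actual pipeline, and exact SDK details may differ from what was available at launch.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples.
# "support_examples.jsonl" is a placeholder name for your own labeled data.
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job against the base GPT-3.5 Turbo model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll the job until it reports "succeeded"
```

The hard part, which is where Scale’s Data Engine comes in, is producing enough clean, well-labeled examples for that JSONL file; the API calls themselves are the easy bit.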

The performance of OpenAI’s GPT models has never been in question, but the cost to access and run them has. Fine-tuning bakes instructions and style into the model itself, allowing for shorter prompts and more efficient responses.

Because a fine-tuned model needs fewer instructions in its prompt, fewer tokens are sent with each API call, and that offers an immediate cost saving over the base models.
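As a rough sketch of where that saving comes from (the fine-tuned model ID and the prompts below are made up for illustration), the base model needs its instructions restated on every request, while a fine-tuned model has that behaviour built in:

```python
from openai import OpenAI

client = OpenAI()

question = "Why was my corporate card declined?"

# Base model: the desired behaviour has to be spelled out in a system prompt
# on every single request, and all of those tokens are billed each time.
base = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a support assistant for a corporate card product. "
                "Answer in two short sentences, reference the relevant policy, "
                "and keep a friendly, professional tone."
            ),
        },
        {"role": "user", "content": question},
    ],
)

# Fine-tuned model: the tone and format were learned during training, so the
# prompt can shrink to just the user's question. The "ft:" model ID is a
# placeholder for the one returned by your fine-tuning job.
tuned = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:my-org::abc123",
    messages=[{"role": "user", "content": question}],
)
```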

While Meta continues to dish out its models for free, the cost-to-performance ratio may still make OpenAI’s paid offering a better buy.

The privacy and security of data are still a sticking point for a lot of companies, though. In its announcement of the partnership, OpenAI again said, “As always, data sent in and out of the fine-tuning API is owned by the customer and is not used by OpenAI, or any other organization, to train other models.”

If companies don’t buy that refrain, OpenAI is going to have to rethink its proprietary approach and find a way to offer secure, private versions of its GPT models that can run locally.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
