Technophile News
News

Databricks Has a Trick That Lets AI Models Improve Themselves

By News Room · 25 March 2025 · 3 Mins Read

Databricks, a company that helps big businesses build custom artificial intelligence models, has developed a machine learning trick that can boost the performance of an AI model without the need for clean labelled data.

Jonathan Frankle, chief AI scientist at Databricks, spent the past year talking to customers about the key challenges they face in getting AI to work reliably.

The problem, Frankle says, is dirty data.

“Everybody has some data, and has an idea of what they want to do,” Frankle says. But the lack of clean data makes it challenging to fine-tune a model to perform a specific task. “Nobody shows up with nice, clean fine-tuning data that you can stick into a prompt or an [application programming interface]” for a model.

Databricks’ model could allow companies to eventually deploy their own agents to perform tasks, without data quality standing in the way.

The technique offers a rare look at some of the key tricks that engineers are now using to improve the abilities of advanced AI models, especially when good data is hard to come by. The method leverages ideas that have helped produce advanced reasoning models by combining reinforcement learning, a way for AI models to improve through practice, with “synthetic,” or AI-generated training data.

The latest models from OpenAI, Google, and DeepSeek all rely heavily on reinforcement learning as well as synthetic training data. WIRED revealed that Nvidia plans to acquire Gretel, a company that specializes in synthetic data. “We’re all navigating this space,” Frankle says.

The Databricks method exploits the fact that, given enough tries, even a weak model can score well on a given task or benchmark. Researchers call this method of boosting a model’s performance “best-of-N”. Databricks trained a model to predict which best-of-N result human testers would prefer, based on examples. The Databricks reward model, or DBRM, can then be used to improve the performance of other models without the need for further labelled data.
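The best-of-N idea can be sketched in a few lines. The `generate` and `reward` functions below are toy stand-ins (assumptions for illustration, not Databricks code): a real setup would sample a language model N times and score each sample with a learned preference model such as DBRM.

```python
# Minimal sketch of best-of-N selection with a reward model.
# generate() and reward() are toy placeholders, not real model calls.

def generate(prompt: str, seed: int) -> str:
    """Pretend LLM sample: each seed yields a different candidate answer."""
    return f"{prompt} :: candidate {seed}"

def reward(answer: str) -> float:
    """Toy preference score; a real reward model would be trained to
    predict which answer human testers prefer."""
    idx = int(answer.rsplit(" ", 1)[-1])
    return -abs(idx - 5)  # arbitrarily prefers candidate 5

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidates and keep the one the reward model scores highest."""
    candidates = [generate(prompt, i) for i in range(n)]
    return max(candidates, key=reward)
```

The point of the trick is the inequality it exploits: a model's best answer out of N tries is usually much better than its average single try, so a scorer that can reliably pick the winner recovers that gap without any new labelled data.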

DBRM is then used to select the best outputs from a given model. These selections become synthetic training data for further fine-tuning, so that the model produces a better output on the first try. Databricks calls its new approach Test-time Adaptive Optimization, or TAO. “This method we’re talking about uses some relatively lightweight reinforcement learning to basically bake the benefits of best-of-N into the model itself,” Frankle says.
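That loop can be sketched as follows, with `sample`, `score`, and `fine_tune` as hypothetical placeholders (none of these are Databricks APIs): best-of-N winners become synthetic training pairs, and a lightweight fine-tuning pass bakes them back into the model.

```python
# Toy sketch of a TAO-style loop: sample N drafts per prompt, let a
# reward model pick the winner, then fine-tune on the winners.
# sample/score/fine_tune are illustrative placeholders only.

def sample(prompt: str, i: int) -> str:
    """Pretend model draft number i for this prompt."""
    return f"{prompt} -> draft {i}"

def score(answer: str) -> float:
    """Stand-in reward model; arbitrarily prefers draft 2."""
    return -abs(int(answer.rsplit(" ", 1)[-1]) - 2)

def synthetic_pairs(prompts: list[str], n: int = 4) -> list[tuple[str, str]]:
    """Best-of-N winners become (prompt, preferred output) training pairs."""
    return [(p, max((sample(p, i) for i in range(n)), key=score))
            for p in prompts]

def fine_tune(pairs: list[tuple[str, str]]) -> str:
    """Placeholder for the lightweight reinforcement-learning step."""
    return f"model tuned on {len(pairs)} preferred outputs"
```

Note the division of labour: the expensive best-of-N sampling happens once, offline, to build the dataset; the deployed model then pays no extra inference cost because the preference signal has been distilled into its weights.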

He adds that Databricks’ research shows the TAO method improves as it is scaled up to larger, more capable models. Reinforcement learning and synthetic data are already widely used, but combining them to improve language models is a relatively new and technically challenging technique.

Databricks is unusually open about how it develops AI because it wants to show customers that it has the skills needed to create powerful custom models for them. The company previously revealed to WIRED how it developed DBRX, a cutting-edge open source large language model (LLM), from scratch.
