Researchers create reasoning model for under $50, performs similarly to OpenAI’s o1


Why it matters: Everyone’s coming up with new and innovative ways to work around the massive costs of training new AI models. After DeepSeek’s impressive debut shook Silicon Valley, a group of researchers has developed an open-source rival that reportedly matches the reasoning abilities of OpenAI’s o1.

Stanford and University of Washington researchers devised a technique to create a new AI model dubbed “s1.” They have already open-sourced it on GitHub, along with the code and data used to build it. A paper published last Friday explained how the team achieved these results through clever technical tricks.

Rather than training a reasoning model from scratch, an expensive endeavor costing millions, they took an existing off-the-shelf language model and “fine-tuned” it using distillation. They extracted the reasoning capabilities from one of Google’s AI models, specifically Gemini 2.0 Flash Thinking Experimental, and then trained the base model on a small dataset to mimic its step-by-step problem-solving process.
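The article doesn’t describe the researchers’ actual collection pipeline, but a minimal sketch of this kind of distillation data gathering could look like the following, assuming Google’s google-generativeai Python SDK. The API key, model ID, prompt wording, and file name are all illustrative placeholders, not details from the paper.

```python
# Illustrative sketch only -- not the researchers' actual pipeline.
# Assumes the google-generativeai SDK; the model ID, prompt, and file name
# are placeholders for whatever the s1 team actually used.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
teacher = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed model ID

def collect_trace(question: str) -> dict:
    """Ask the teacher model for a step-by-step solution to one question."""
    response = teacher.generate_content(
        f"Solve the following problem, showing your reasoning step by step:\n{question}"
    )
    return {"question": question, "reasoning_and_answer": response.text}

questions = ["If 3x + 7 = 22, what is x?"]  # in practice: ~1,000 curated questions
dataset = [collect_trace(q) for q in questions]

with open("distillation_data.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```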

Others have used this approach before. In fact, distillation is what OpenAI was accusing DeepSeek of doing. However, the Stanford/UW team found an ultra-low-cost way to implement it through “supervised fine-tuning.”

This process involves explicitly teaching the model how to reason using curated examples. Their full dataset consisted of only 1,000 carefully selected questions and solutions pulled from Google’s model.
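A rough sketch of what supervised fine-tuning on such distilled reasoning traces might look like, using Hugging Face’s datasets and trl libraries, is shown below. The base model, hyperparameters, and data formatting are assumptions for illustration rather than the s1 team’s actual setup, and trl’s argument names vary somewhat between versions.

```python
# Illustrative sketch of supervised fine-tuning on distilled reasoning traces.
# Not the s1 team's actual training code: base model, hyperparameters, and
# data formatting are placeholders, and trl argument names vary by version.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# ~1,000 question/solution pairs distilled from the teacher model,
# concatenated into a single "text" field per example.
dataset = load_dataset("json", data_files="distillation_data.jsonl", split="train")
dataset = dataset.map(
    lambda ex: {"text": ex["question"] + "\n" + ex["reasoning_and_answer"]}
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # placeholder; reportedly s1 starts from an off-the-shelf Qwen model
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="s1-style-sft",
        num_train_epochs=3,              # placeholder schedule, not the paper's exact settings
        per_device_train_batch_size=1,
        learning_rate=1e-5,
    ),
)
trainer.train()
```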

TechCrunch notes that the training process took 30 minutes, using 16 Nvidia H100 GPUs. Of course, these GPUs cost a small fortune, around $25,000 per unit, but renting the compute works out to under $50 in cloud credits.
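As a rough back-of-the-envelope check of that figure (the hourly rental rate below is an assumption, not a number from the report):

```python
# Back-of-the-envelope cost estimate; the rental rate is an assumed figure,
# not one reported by the researchers.
gpus = 16                 # Nvidia H100s used for training
hours = 0.5               # ~30 minutes of training
rate_per_gpu_hour = 6.00  # assumed cloud rental rate in USD; actual rates vary

total = gpus * hours * rate_per_gpu_hour
print(f"Estimated rental cost: ${total:.2f}")  # -> Estimated rental cost: $48.00
```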

The researchers also discovered a neat trick to boost s1’s capabilities even further. They instructed the model to “wait” before providing its final answer, giving it more time to check its reasoning and arrive at slightly more accurate solutions.
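A simplified sketch of this “wait”-style test-time trick is shown below. The fine-tuned model path, number of extra rounds, and token budget are placeholders, and the paper describes a more precise decoding-time mechanism, so treat this re-prompting loop as a loose illustration rather than the authors’ method.

```python
# Loose illustration of the "wait" trick: instead of accepting the model's first
# answer, nudge it to keep checking its reasoning before answering. The model
# path, round count, and token budget below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "s1-style-sft"  # hypothetical path to the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

def generate_with_wait(prompt: str, extra_rounds: int = 2, max_new_tokens: int = 512) -> str:
    """Generate, then repeatedly append "Wait," so the model re-checks its work."""
    text = prompt
    for _ in range(extra_rounds):
        inputs = tokenizer(text, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        text += "\nWait,"  # push the model to extend its reasoning instead of stopping
    # Final pass: let the model finish and produce its answer.
    inputs = tokenizer(text, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```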

The model is not without its caveats. Since the team used Google’s model as its teacher, there is the question of whether s1’s skills, while impressive for their minuscule cost, can scale up to match the best AI has to offer just yet. There is also the potential for Google to protest, though it may be waiting to see how OpenAI’s dispute with DeepSeek plays out.


