AI researchers at Stanford and the University of Washington were able to train an AI "reasoning" model for under $50 in cloud compute credits, according
The other way around. They started with Alibaba’s Qwen, then fine-tuned it to match the reasoning traces behind 1,000 hand-picked queries answered by Google’s Gemini 2.0.
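For a rough idea of what that distillation setup looks like, here is a minimal sketch of assembling the fine-tuning data: pair each hand-picked question with the teacher model's reasoning trace and final answer, then serialize it as one supervised training string. The chat markers and `<think>` tags here are hypothetical placeholders, not the actual format the researchers used, and the toy trace stands in for the ~1,000 Gemini-distilled examples.

```python
def format_example(question, reasoning, answer):
    """Serialize one teacher trace as a supervised fine-tuning example.

    The <|user|>/<|assistant|> markers and <think> tags are assumed
    placeholders; real templates vary by base model.
    """
    return (
        f"<|user|>{question}\n"
        f"<|assistant|><think>{reasoning}</think>\n{answer}"
    )

# Toy trace standing in for the hand-picked, teacher-generated examples.
traces = [
    {
        "question": "What is 17 * 3?",
        "reasoning": "17 * 3 = 10 * 3 + 7 * 3 = 30 + 21 = 51.",
        "answer": "51",
    },
]

dataset = [format_example(**t) for t in traces]
print(len(dataset))  # number of training examples
```

A dataset built this way would then be fed to an ordinary supervised fine-tuning loop over the smaller base model; the "reasoning" behavior comes entirely from imitating the teacher's traces.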
That $50 price tag is kind of silly. It’s like taking an old car and copying the mpg, seats, and paint job from a new car: it’s still an old car underneath, it just looks and behaves like a new one in some respects.
I think it’s interesting that old models can be “upgraded” for such a low price. It points to something many have suspected for a while: LLMs are actually “too large”; they don’t need all that size to exhibit some of the more interesting behaviors.