DeepSeek says that closed-source models are accelerating at a faster rate than open-source models. Thus the gap, instead of narrowing, is actually widening.
The Intelligence Index score on DeepSeek V3.2 Speciale is limited by its tool-call support, which is why it is currently lower than the regular thinking variant's. The devs expect the final score with tool calling to be ~68, but we shall see.

DeepSeek has a model which is nearly as good as Kimi K2 yet is much cheaper.
This is a long-term problem for MoonshotAI, because while Kimi has made strides, it is still less popular and has less mindshare than DeepSeek. If it is not more performant while also being more expensive, then what is the argument for using Kimi?
The only real problem is that most Chinese labs' models are too small. Not everyone is a codemonkey; world knowledge requires much bigger models. DeepSeek also acknowledged this in their model card. We should expect V4 to be substantially bigger than V3.
All of this reiterates my long-held view that the top two labs in China are DeepSeek and Moonshot. Alibaba/Qwen is a distant third; they are the kings of edge models, but that's about it.
No, DeepSeek V3.2 is actually quite a bit better than MoonshotAI's. The regular thinking version uses far fewer tokens than Kimi and is already at the same level of intelligence. The Speciale version is SOTA, on the same level as Gemini 3. They just need to do some more work to get the tool calling sorted out. There will be another major release before Chinese New Year, I'm sure of that.
You go to IPO at this point only if you cannot raise enough private funding. These guys are all just doing a media tour at this point to raise the value of their assets.
Anthropic reportedly preparing for one of the largest IPOs ever in race with OpenAI: FT
- Anthropic is weighing a massive IPO while also exploring fresh private funding at a valuation above the $300 billion mark, per the FT.
- The AI firm has reportedly engaged Wilson Sonsini and major banks as the startup races OpenAI for a public listing.
- The potential listings would test investor appetite for high-burn AI firms amid bubble fears and surging valuations.
Can't believe these guys are serious about this.