Alibaba's release cadence is very impressive, but it's worth keeping in mind that the likes of DeepSeek, Zhipu, and Moonshot are building bigger models to compete with OpenAI, Google, Anthropic, and xAI directly at the cutting edge of the LLM space. Alibaba is taking a different strategy: a range of small-to-medium models that are practical to run on the edge, boast near-frontier performance without breaking the bank, and can (though currently are not) be used in an ensemble offering to cut API costs.
In fact, this seems to be a strategy the Western AI industry is also pursuing. OpenAI is starting to make smaller models it can redirect simple queries to, in order to cut aggregate inference costs, and Google's models have reportedly been smaller from the start. The industry could be shifting toward cost cutting after realizing how much money it has been burning and that AGI is not just around the corner. That said, US labs are still burning immense amounts of money:
[attachment 161407]
My guess is that if AGI is not around the corner (and there's a decent chance it is not), the question will soon become who can offer the most bang for the buck without setting investors' cash on fire. All that expensive infrastructure will not be easy to maintain if there's a race to the bottom on costs.
In such an environment, China should have an advantage.