Kling 1.5 has been released now, and it looks great.
They didn't train a new 405B model. They fine-tuned the existing one on a dataset.
It's good for gaining experience in effectively utilizing large domestic GPU clusters and for testing cluster stability.
They need to start building 100K H100-class GPU clusters if they want to stay competitive in 2025.
Any news about the Ascend 910C? What about its production volume and performance?
Yes, they fine-tuned it. I was repeating what they said there, and I don't see why my meaning wasn't obvious.
And just why do they need a 100K H100 GPU cluster to stay competitive? Alibaba just trained its latest Qwen 2.5 on 18 trillion tokens.
It was released just a few months after Qwen-2.0.
How many non-duplicate tokens are out there globally that you can actually use? How much larger do their clusters really need to get?
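For scale, here's a rough back-of-envelope in Python using the standard ~6ND training-FLOPs estimate. Only the 18-trillion-token figure comes from this thread; the 72B parameter count (roughly Qwen-2.5-72B scale), the H100 peak throughput, and the 40% utilization are all my assumptions, so treat this as a sketch, not a real cost estimate.

```python
# Back-of-envelope: GPU-days to train a dense transformer on 18T tokens.
# Assumptions (mine, not from the thread): 72B params, H100 BF16 dense
# peak ~989 TFLOP/s, 40% model FLOPs utilization (MFU).

params = 72e9          # assumed model size, roughly Qwen-2.5-72B scale
tokens = 18e12         # training tokens, figure cited above
peak_flops = 989e12    # assumed H100 dense BF16 peak, FLOP/s
mfu = 0.40             # assumed achievable utilization

train_flops = 6 * params * tokens             # standard ~6ND estimate
gpu_seconds = train_flops / (peak_flops * mfu)
gpu_days = gpu_seconds / 86_400

for n_gpus in (10_000, 100_000):
    print(f"{n_gpus:>7,} GPUs -> ~{gpu_days / n_gpus:.1f} days")
```

Under those assumptions, an 18T-token run at 72B scale takes roughly three weeks on ~10K H100s; a 100K cluster cuts that to a couple of days. So a cluster that size buys speed and headroom for much bigger models, but it isn't a hard requirement at this scale.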