Huawei's play here is this AI cloud that has all the models: everyone needs more computation power, and AI chips are in hot demand, too expensive, and in short supply.
No problem, use my datacenter, with pre-configured models and everything; it will be more efficient.
The article is a bit light on technical details. From this:
"The multi-cloud collaborative architecture allows industry large models to be trained on the public cloud, fine-tuned on the hybrid cloud based on local enterprise data"
one can assume that the basic idea is to use Huawei's cloud for pre-training on general, non-sensitive data. This is the most compute-intensive part and can easily take months for big models.
Then the pre-trained model is fine-tuned on private data, possibly even on the corporate premises and on the corporation's own hardware. This part is more than an order of magnitude less compute-intensive than pre-training: it usually lasts a few days, maybe a couple of weeks if done on the customer's limited hardware.
So customers can have their cake and eat it too: pre-training with huge but general data on Huawei's cloud, and lean fine-tuning with private data on private hardware (a rough sketch of that second phase follows).
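To make the split concrete, here is a minimal PyTorch sketch of what the on-premises fine-tuning phase could look like. Everything in it (model shape, file names, hyperparameters, the fake private dataset) is my own illustration, not anything from the article; the only point is that the heavy checkpoint arrives from the cloud pre-training job, while the sensitive data and the tuned weights never leave the building.

    import os
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Phase 1 (public cloud): months of pre-training on general data would
    # produce a checkpoint; here it is just an assumed file name.
    PRETRAINED = "pretrained_general.pt"

    # Stand-in for the real architecture; the actual model would be far larger.
    model = torch.nn.Sequential(
        torch.nn.Linear(768, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 2),
    )
    if os.path.exists(PRETRAINED):
        # Only the weights cross the boundary from the public cloud.
        model.load_state_dict(torch.load(PRETRAINED))

    # Phase 2 (private hardware): placeholder tensors standing in for the
    # proprietary data that never leaves the corporate premises.
    private_x = torch.randn(1024, 768)
    private_y = torch.randint(0, 2, (1024,))
    loader = DataLoader(TensorDataset(private_x, private_y),
                        batch_size=32, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # adapt, don't retrain
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # a few epochs: days, not months
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), "finetuned_private.pt")  # stays on-premises

The small learning rate and the handful of epochs are what make this phase so much cheaper than pre-training: you are nudging weights that already encode the general knowledge, not learning from scratch.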
For the specific model to use, I'd guess the customer can choose an off-the-shelf one, maybe even a pre-trained one, or its own original architecture: model architecture does not seem to be an issue as long as the code is compatible with Huawei's MindSpore, or even PyTorch now that it can run on Huawei's Ascend hardware too.
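On that last point, my assumption is that existing PyTorch code can be pointed at Huawei's Ascend NPUs through the torch_npu adapter with little more than a device change. The snippet below sketches that, with a fallback for machines that don't have the plugin; the "npu" device name and the CUDA-style availability check are how I understand the adapter to work, not something the article confirms.

    import torch

    # Assumption: torch_npu is Huawei's Ascend plugin for PyTorch and exposes
    # a CUDA-like API ("npu" devices, torch.npu.is_available). If it is not
    # installed, fall back to whatever accelerator the machine has.
    try:
        import torch_npu  # noqa: F401  (registers the "npu" backend)
        device = torch.device("npu:0" if torch.npu.is_available() else "cpu")
    except ImportError:
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # The model code itself does not change, only where it runs.
    model = torch.nn.Linear(768, 2).to(device)
    batch = torch.randn(8, 768).to(device)
    print(model(batch).shape, "running on", device)

If that really is all it takes, the choice between MindSpore and PyTorch becomes a porting detail rather than an architectural constraint, which would fit the "bring your own model" reading of the article.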