Delusions of politicians who never had to train an ML model in their lives. Can someone put themselves in the shoes of the US and explain to me why banning "advanced" AI chips, which are basically glorified GPUs, is going to prevent China from developing AI and competing with US AI models?
1. Machine learning with neural networks is one of the most embarrassingly parallel tasks in all of computing, which means it can be parallelized almost arbitrarily. Instead of using 1 fast chip, you can just run 10 or 100 slower chips to do the same job. It might cost more in power and extra infrastructure, but is making ML training somewhat more expensive really that big a deal?
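The parallelism point can be shown in a toy example (a minimal sketch in plain Python, no ML framework assumed): in data-parallel training, each "slow chip" computes the gradient on its own shard of the batch, and averaging those shard gradients gives exactly the full-batch gradient, so N slow chips take the same training step as one fast chip.

```python
# Toy data-parallel training step for a 1-parameter linear model y = w * x.
# Averaging per-shard gradients reproduces the full-batch gradient exactly,
# which is why training splits cleanly across many slower devices.

def grad_mse(w, batch):
    """Gradient of mean squared error over a batch of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.5

full = grad_mse(w, data)  # one "fast chip" sees the whole batch

# Two "slow chips", each holding half the batch, gradients averaged.
shards = [data[:2], data[2:]]
averaged = sum(grad_mse(w, s) for s in shards) / len(shards)

print(full, averaged)  # identical, so the weight update is identical
```

The same idea, with communication overhead on top, is what frameworks do across hundreds of GPUs.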
2. Even a slower chip from Nvidia should be quite useful for China to train AI models.
3. What is preventing China from using so-called gaming GPUs, which are not banned, to run the same code? Gaming GPUs are extremely fast as well and can run the same code using CUDA. Maybe less efficient, but a very viable workaround.
4. Machine learning training is basically a one-time cost. Once you train your model, you can run it for future prediction tasks on a much smaller chip or network of chips. A model that needed 1000 GPUs to train can run inference on a phone GPU. So the number of "advanced" chips China needs for training could be far smaller than the number of chips consumers need to run those models in, say, self-driving cars or autonomous drones.
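The training-vs-inference gap above can be put in rough numbers. A commonly cited approximation (not an exact figure) is that training a transformer costs about 6 × parameters × training tokens in FLOPs, while generating one token at inference costs about 2 × parameters. The model size and token count below are illustrative assumptions:

```python
# Back-of-envelope comparison using the common ~6*N*D training / ~2*N per-token
# inference FLOP approximations. All numbers are illustrative, not measured.

params = 7e9    # a 7B-parameter model (assumed for illustration)
tokens = 1e12   # 1 trillion training tokens (assumed for illustration)

train_flops = 6 * params * tokens    # one-time cost: needs a large GPU cluster
infer_flops_per_token = 2 * params   # recurring cost: phone-class hardware scale

print(f"training: {train_flops:.1e} FLOPs, inference per token: {infer_flops_per_token:.1e}")
```

The two numbers differ by roughly twelve orders of magnitude, which is the point: the scarce "advanced" chips are only strictly needed for the one-time training run.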
5. What is preventing citizens of US allies who are not banned, like India or Vietnam, from buying up thousands of chips, putting them in suitcases or small boxes, and shipping them to China? Chips are made by the millions and circulated all over the world. A smuggling network will develop in a heartbeat.
6. Why not transfer your data to Vietnam, train your model there, and transfer the trained model back to China? No Chinese company even needs to appear on paper. Just set up a Vietnamese company in name only, with a Vietnamese owner quietly on your payroll, and run the company from behind the scenes.
7. Why not shrink your models with optimization techniques like quantization, pruning, or distillation, which significantly reduce model size without degrading performance too much? Then you can run the "light" model on a phone GPU.
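As a concrete example of one such technique, here is a minimal sketch of post-training quantization in plain Python (real toolchains do this per-layer with calibration data; this only shows the core idea): store float weights as 8-bit integers plus a scale factor, cutting memory roughly 4x versus float32 at the cost of a small rounding error.

```python
# Naive symmetric int8 quantization: weights are mapped to [-127, 127]
# with a single scale factor, then dequantized for use. The values below
# are made-up example weights.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127   # map the weight range onto int8
    q = [round(w / scale) for w in weights]      # 8-bit integer codes
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.03, 0.55, -0.91]
q, scale = quantize(weights)
restored = dequantize(q, scale)

max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)  # integer codes, plus a per-weight error below one scale step
```

Eight bits per weight instead of thirty-two is why a heavily optimized model can fit in phone-class memory.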