Amidst the hype about AI in general and DeepSeek in particular, NVDA is mentioned non-stop and lots of people are talking about CUDA. Some random tidbits about NVDA here.
IIRC, the primary purpose of CUDA is to serve as a GPU adaptation layer for general-purpose computing. That is what I learned back when I was trading INTC/AMD/NVDA shares a long time ago. The notion of GPGPU, general-purpose computing on graphics processing units, came about because of this CUDA functionality for general-purpose computing, such as SIMD-style programming. In software engineering terms, CUDA is an API framework for general-purpose programming of NVDA GPUs, no more, no less.
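To make that concrete, here is a minimal sketch of what that general-purpose programming looks like. It is my own toy example, with illustrative names and numbers, not code from any real project: ordinary C, plus a kernel that runs once per data element, which is where the SIMD flavor comes in.

    // Toy CUDA example: one GPU thread per array element (the SIMD-ish part).
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vector_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // which element this thread owns
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified memory keeps the sketch short; real code often manages host/device copies explicitly.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0f * i; }

        // Launch 4 blocks of 256 threads each: plain C syntax plus the <<<...>>> kernel launch.
        vector_add<<<4, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[10] = %f\n", c[10]);   // expect 30.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }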
Before the AI bubble, NVDA was already the number one GPU vendor, thanks to two things it did right:
(1) CUDA gives graphics and video developers a familiar style of API programming, something AMD/ATI and the other, smaller GPU vendors never delivered.
(2) It was said that NVDA develops the best GPU device drivers in the industry. I have never written a GPU device driver, but I have written gazillions of device drivers for HDLC, Ethernet, UART, I2C, etc. Correct me if I am wrong: GPU device drivers are considered the most technically challenging of all device driver types, and to a large extent a GPU's performance depends on the quality of its driver.
Fast forward to this bubbling AI era: people are hyping NVDA as the savior of AI and so on. But the fact of the matter is that NVDA got lucky, partly thanks to Jensen Huang's vision of providing a programming API that would be friendlier to game developers.
DeepSeek used PTX instead of CUDA to fine-tune parts of its AI model training. In layman's terms, PTX is the assembly-programming level while CUDA is the C-programming level. There is nothing mythical about CUDA to anyone who has written assembly code, device drivers, or BSPs. As usual, people drop down to a lower-level language or instruction set when they need to optimize.
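To make the C-vs-assembly analogy concrete, here is another toy sketch of mine (it has nothing to do with DeepSeek's actual kernels): the same integer add written as a plain CUDA statement and as inline PTX via asm().

    // Toy illustration of the CUDA-vs-PTX relationship; not DeepSeek's code.
    __global__ void add_one(const int *in, int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            int result;
            // Plain CUDA ("C level") would be:  result = in[i] + 1;
            // Inline PTX ("assembly level"), doing the same thing:
            asm("add.s32 %0, %1, %2;" : "=r"(result) : "r"(in[i]), "r"(1));
            out[i] = result;
        }
    }

Real PTX-level work typically targets things like register usage and memory access patterns rather than a single add, but the division of labor is the same idea: CUDA for the bulk of the code, PTX where you need to squeeze.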
Reality sometimes is a real bitch. Like Smalltalk vs. Java: under capitalism, commercial success is technological advancement.
That is the THING in my mind when it comes to the significance of the DeepSeek releases. Somewhat, to some degree, it punctures the myth around NVDA vis-a-vis CUDA as far as AI is concerned.