Very interesting interview
Dylan says that SMIC has 60K WPM capacity in Shanghai
SMIC can make 50-80 dies of Ascend 910B from 1 wafer
Is Ascend 910C on 5 nm node?
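A rough dies-per-wafer sanity check on the 50-80 figure (a minimal sketch; the ~665 mm² die area and the yield range below are my assumptions, not numbers Dylan gave):

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer
DIE_AREA_MM2 = 665.0       # assumed Ascend 910B die area (rough public estimate)

def gross_dies_per_wafer(wafer_d_mm: float, die_area_mm2: float) -> float:
    """Standard gross-die estimate: wafer area / die area, minus an edge-loss term."""
    radius = wafer_d_mm / 2.0
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_d_mm / math.sqrt(2.0 * die_area_mm2))

gross = gross_dies_per_wafer(WAFER_DIAMETER_MM, DIE_AREA_MM2)  # ~80 gross dies
for y in (0.6, 0.8, 1.0):  # assumed yield range
    print(f"yield {y:.0%}: ~{gross * y:.0f} good dies per wafer")
```

With those assumptions the gross count lands around 80, so yields in the 60-100% range give roughly the 50-80 good dies per wafer quoted above.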
I love that channel
GPT-4 totally sucks at hardware description languages. I know from my job. Very probably because there is far less HDL code on the internet compared to Python and JavaScript.

Seriously, whenever I hear this crap I wonder what environment they're working in. I have been using GPT-4, Copilot, etc. tailored to our (large-codebase) systems, and it just produces slop all the time. Useful slop, yes, but slop nonetheless.
It means I don't have to break out regex as often and it reduces boilerplate, but that's something compilers have been doing since forever. It's also fundamentally reliant on problems being common. But common coding was never difficult or time-consuming to begin with. It's always been the obscure bugs, the constraints hidden in one commit out of 10,000 in a public library, dealing with obtuse management requests, etc. that consume the most time and energy, and that's not something LLMs help with.

Going from 3 to 3.5 to 4 has improved the quality of the slop, but it hasn't tackled any of the other issues that take up developer time. The limitations are baked into the way LLMs work.
r/LocalLlama on Reddit thinks OpenAI underperforms in real usage compared to its benchmark scores. Users find better performance from Llama 3.1 and Claude Sonnet.
DeepSeek is on par with it, but Llama and Qwen are a generation behind. Reddit also now thinks that OpenAI is fudging benchmarks.

Claude is currently the best software development AI, bar none.
Its Jiutian multimodal foundation model is trained on a 10,000+-card domestic GPU cluster using 17 types of AI chips from 11 suppliers. That seems like a lot of suppliers, but based on previous reporting they have worked with pretty much all the domestic players. The original description (translated): the Jiutian (九天善智) multimodal foundation model is a full-stack, fully domestic all-modality foundation model trained on a 10,000-card domestic compute cluster and a domestic algorithm framework; it has been adapted to 17 domestic AI chips from 11 vendors and supports smooth conversion and continued training of models across heterogeneous chips.