News on China's scientific and technological development.

manqiangrexue

Brigadier

Why do Chinese tech giants with a lot of revenues not spend more on R&D? This should be such a low hanging fruit.
R&D spending is not at all a low-hanging fruit; most companies don't even bother with it or can't afford it. R&D spending is probably the highest-hanging fruit there is, as it is an intelligent long-term investment to push the frontiers of science and human knowledge. A well-designed and well-executed plan is much more important than dumping money for the sake of looking like a big spender. The better question is why the US (and EU) are being caught up to and overtaken by Chinese tech despite spending more money.
 

FairAndUnbiased

Brigadier
Registered Member

sunnymaxi

Major
Registered Member
R&D spending is not at all a low-hanging fruit; most companies don't even bother with it or can't afford it. R&D spending is probably the highest-hanging fruit there is, as it is an intelligent long-term investment to push the frontiers of science and human knowledge. A well-designed and well-executed plan is much more important than dumping money for the sake of looking like a big spender. The better question is why the US (and EU) are being caught up to and overtaken by Chinese tech despite spending more money.
Another big difference:

We cannot compare Chinese R&D spending in dollar terms, because costs on the Mainland are much lower than in the USA.

Huawei spends 165 billion RMB on R&D, which is only about 24 billion dollars, yet that money goes far further than it would at most US firms.

So we should count in RMB for Mainland firms.
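The point above can be put in rough numbers. This is a back-of-envelope sketch only: the PPP conversion factor below (~4.2 RMB per dollar of equivalent purchasing power, roughly the World Bank's recent figure for China) is an illustrative assumption, not an official statistic, and the market rate is simply the one implied by the 165B RMB / 24B USD figures in the post.

```python
# Back-of-envelope: Huawei's reported R&D budget at the market exchange
# rate vs. a purchasing-power adjustment. Numbers are illustrative.
rnd_rmb = 165e9                  # reported R&D spend, in RMB
market_rate = 165 / 24           # RMB per USD implied by the post (~6.9)
ppp_factor = 4.2                 # assumed RMB per PPP-dollar (World Bank ballpark)

nominal_usd = rnd_rmb / market_rate   # what the budget looks like in dollars
ppp_usd = rnd_rmb / ppp_factor        # what it can actually buy, US-equivalent

print(f"nominal: ${nominal_usd/1e9:.1f}B, PPP-adjusted: ${ppp_usd/1e9:.1f}B")
```

Under these assumptions the same budget buys roughly 39B dollars' worth of R&D inputs rather than 24B, which is the gap the post is pointing at.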
 

Maikeru

Major
Registered Member
Western firms often get tax breaks for R&D spending that they don't get for other spending; perhaps that explains a lot of it? There's an incentive to classify spending as R&D.
 

luosifen

Senior Member
Registered Member

2023-04-04 15:29:14 | Xinhua | Editor: Li Yan


China's independently developed high temperature superconducting electric maglev transportation system has completed its first suspension run, according to its developer CRRC Changchun Railway Vehicles Co., Ltd. in northeast China's Jilin Province.
Composed of vehicle, track, traction power supply, and operational communication sub-systems, the high temperature superconducting electric maglev transportation system is suitable for high-speed, ultra-high-speed, and low-vacuum-tube application scenarios.
It can operate at a speed of 600 km/h or above.
In the future, the superconducting electric maglev transportation system is expected to be an important candidate for rapid transportation between large cities and developed economic regions.
Since the early 1990s, CRRC Changchun Railway Vehicles has been committed to the research, development and manufacturing of maglev trains.
In recent years, it built a 200-meter high temperature superconducting maglev test line and independently developed vehicle-mounted high temperature superconducting magnets that can operate entirely without a power supply, as well as electric maglev sample vehicles and a high-strength non-magnetic track.
Here's video of the test btw:

 

xypher

Senior Member
Registered Member
AI is just glorified programming. At the very base it is no different from the logic circuit controlling your washing machine, which does "if A > B then do C". The difference is that an AI library does "if A is 80% of B then do C, store the decision and 80% as the threshold; if C leads to failure, then reduce the threshold to 75%." The program just keeps looping through new input values of A, meanwhile adjusting the threshold variable stored in memory. This is called machine learning, or building knowledge and experience. So the logic is still "if-then-else", as in a conventional program. The only "new" thing is the automatic adjusting of variables according to past input. It looks like it is learning, but it is no different from existing self-adjusting algorithms such as the auto-exposure computation in decades-old Canon/Nikon cameras. AI just makes that learning loop much longer, over hundreds or thousands of cycles instead of just a few, and vastly increases the number of variables to learn, while so-called conventional programming has very few variables to learn and few cycles to learn over.

Seriously, AI is just a marketing trick, or bragging rights for AI code-farmers.
Sure, at the very basic level even human decision-making could be explained as "if-else", but with much more complex conditions that change based on outside conditions, experience, knowledge, etc., where the latter two are acquired through the "learning" process. The key question is how we form the conditions behind those "if-else" clauses. This is the "new" thing that we need to discover. We can more or less understand the logic behind some simplistic things like self-preservation, but not the general case. If we did, then indeed we could write hardcoded algorithms that would beat all those fancy neural networks, because hardcoded algorithms are predictable and error-free when correctly implemented. That is where machine and deep learning come in: we instead try to simulate the processes going on inside our heads with whatever level of understanding we can muster, in the hope that they will eventually form the much-sought condition-making logic. The problem is that we don't even have a high degree of understanding of the laws that govern those processes, so we build different neuron/"brain" models, hoping that these mathematical abstractions are close to the actual structure inside our heads.

Another thing is the learning process that you mentioned: supervised, unsupervised, reinforcement, and other learning processes are all attempts at simulating human learning. Supervised and reinforcement learning are essentially action and reaction, which is quite similar to how we learn at school: correct actions/answers are rewarded with good marks, incorrect ones with bad marks. In the case of NNs, the good and bad marks are "hidden" in the value of the loss function used to train the model. If you train a neural network on a garbage dataset, it will produce garbage. This is a key difference between NNs and algorithms like "auto contrast" that work without any training. "Conventional" algorithms do not learn anything; every reaction to the input has to be hardcoded, and even iterative refinement follows a strict set of pre-defined rules that you write yourself. For a neural network, you only define the high-level structure of the black box and then feed it data, hoping that it will eventually produce the correct algorithmic logic. So NNs do indeed "learn", while classic algorithms come already "learned" by us, because we wrote the very thing that NNs try to reproduce.
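The action-and-reaction loop described above, with the loss function playing the role of good or bad marks, can be shown with a toy example. Everything here is invented for illustration: the "model" is a single weight w fitted to data drawn from y = 2x, and the "mark" is a squared-error loss whose gradient nudges the weight.

```python
# Toy supervised-learning loop: the loss gradient is the feedback signal.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer)
w, lr = 0.0, 0.05                            # initial weight, learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        loss_grad = 2 * (pred - y) * x       # d/dw of squared-error loss
        w -= lr * loss_grad                  # "reward/punish" via the loss

print(round(w, 3))  # converges to 2.0, the rule hidden in the data
```

Nobody hardcoded "multiply by 2"; the loop recovered it from the data, which is the sense in which the network "learns" while an auto-contrast routine does not.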

Therefore, if you have a good enough understanding of a task to write a step-by-step algorithm, then you should do exactly that. If you don't, that is where ML/DL should come in: for example, many computer vision tasks (I will talk about those, as this is my field and I don't have as intricate a knowledge of NLP) like object detection, segmentation, super-resolution, image generation, etc. Prior to the rapid rise of convolutional neural networks (CNNs), people had to come up with really creative ways of encoding the features (e.g. Haar features) that they thought were crucial to our decision-making process and feed them into classic ML algorithms like gradient-boosted decision trees. Look up the Viola-Jones detector, for example. However, CNNs blew all those approaches away, with the best models surpassing human-level performance, which was impossible before. So saying that NNs are merely a marketing trick just isn't right: they work, maybe not perfectly, but we have numerous cases where neural networks outperform the previous state-of-the-art approaches. It is especially evident in complex areas like computer vision and natural language processing.
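The hand-crafted-features-vs-CNN contrast can be made concrete with the core operation a CNN layer performs: a 2D convolution. Pre-CNN pipelines hand-designed kernels like the vertical-edge filter below; a CNN learns the kernel values from data instead. This plain-Python version is purely illustrative (no padding, stride 1).

```python
# Minimal 2D convolution (valid mode, stride 1) in plain Python.
def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

edge = [[-1, 0, 1]] * 3            # hand-crafted vertical-edge kernel
img = [[0, 0, 0, 1, 1, 1]] * 3     # dark-to-bright step image
print(conv2d(img, edge))           # [[0, 3, 3, 0]]: strong response at the edge
```

A Viola-Jones-style pipeline fixes `edge` by hand and feeds the responses to a classifier; a CNN treats those nine kernel numbers as weights to be trained, exactly like `w` in a learning loop.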

So yes, machine and deep learning do involve programming, but the overall approach is distinctly different from how you would typically approach a task in conventional software development. It is closer to mathematical programming in that regard, like computational physics or optimization: we only use programming to define our simulation setup (model architecture, optimization routine, etc.), while the main part of the work happens when we design the setup itself (create/pick the model, come up with a loss function, think about a proper optimization algorithm, etc.) so that it solves the task at hand. There is actually a distinction between engineers and researchers in AI: the latter often write shitty code and just use programming as a tool to check their ideas, while the former are essentially software engineers who take the models produced by researchers and carefully move them to a production-ready state, with the necessary speed optimizations, etc.
 

tokenanalyst

Brigadier
Registered Member
"Development of major scientific instruments and equipment" key special 2023 project declaration guidelines released for comments.

On March 28, the Ministry of Science and Technology issued a notice soliciting opinions on the 2023 annual project application guidelines for three key special projects, including the National Key R&D Program "Frontier Research on Large Scientific Devices". Among them are the 2023 annual project declaration guidelines (draft for comments) for the key special project "Basic Scientific Research Conditions and Major Scientific Equipment Research and Development".

According to the draft for comments, the 2023 guidelines will be organized around four directions: scientific instruments, scientific research reagents, experimental animals, and scientific data. It is planned to support 124 projects, plus 11 young scientist projects. Since reprinting the draft guidelines is strictly prohibited while comments are being solicited, this article only lists the 124 projects to be supported, for reference by industry professionals. For details, please log on to the public service platform of the National Science and Technology Management Information System and view them under the "Public Publicity – Guideline Opinion Collection" menu.


Lots of high end equipment, some even necessary for semiconductor fabrication.
 