Naval missile guidance thread - SAM systems

nlalyst

Junior Member
Registered Member
And it indicates that the observed improvement in processing speed comes from refinements in the circuit design rather than from the manufacturing methods.

So the 45/22/3 nm process technology doesn't give the edge and the improvement; the CPU and compiler design does.
You have a point, to a degree. There has been little improvement in desktop CPU clock rate in the last 15 years, and that has largely put a brake on single-thread performance, outside of architectural improvements that allow for extracting more instruction-level parallelism, together with better OoO engines and caches. It's important to understand that the latter have been helped by node improvements, through higher transistor density and lower power consumption, so that new processors have more registers, larger internal buffers, etc.

At the same time, we've seen substantial improvements in aggregate performance and perf/watt. Problems that are parallelizable continue to be solved faster and with less energy with each new iteration of CPUs. Nowhere is this more apparent than in the mobile device arena. Yet we remain many orders of magnitude away from the theoretical limit in perf/watt. It is well understood that large problem sizes facilitate parallelization, and in this age of data abundance there appears to be no shortage of problems solvable by throwing ever larger parallel computers at them. Don't get me wrong: single thread is king. All else being equal, a 100 GHz CPU is better than ten 10 GHz CPUs. But while we have hit a limit on how far we can innovate with the former, we have not yet hit one with the latter.

Once we approach the limit on how small we can make transistors on silicon, and we still have quite a bit to go, the innovation might continue in the direction of HEMTs. I am thinking of something like the failed GaAs Cray-3 and Cray-4 supercomputers. We might also start seeing more fixed-function analog computer blocks, for example ones that compute the Fourier transform with optical (OFT) methods.
 

nlalyst

Junior Member
Registered Member
@Anlsvrthng
An issue I have with using the commercial IC manufacturing state of the art as a benchmark for determining military computing capabilities is the unclear degree of COTS HW present in military systems.

It is well known that the USAF uses radiation-hardened computers, implementations of which have used technologies quite different from what we find in consumer products: GaAs, ECL, SoS.

I wouldn't be surprised if there is quite a bit we don't know about being cooked up behind closed doors.
 
Yeah, at some point the Pentagon may even stop using 8" floppies: [link]
 

Anlsvrthng

Captain
Registered Member
I am curious: why do you consider single-thread performance critical for radar applications, and why did the software-defined radar project fail because of it?

It is true that single-thread performance has seen lackluster improvements since 2006 (enter Conroe). However, aggregate performance continued to increase thanks to the growing number of transistors per chip and continuous improvements in perf/watt. An iPhone of today can run circles around a consumer desktop computer of 10 years ago while consuming 25x less peak power and two orders of magnitude less power under light loads. Correct me if I am wrong, but this must have pretty substantial implications for airborne radars.

The most recent top-end iPhone CPU shows about 2x the performance of a 12-year-old desktop CPU.

I can't call that "impressive".

6 W vs 130 W, that is significant.

Just to put it into perspective: due to manufacturing improvements in the same timeframe between the 486 and the Pentium III, there was a good hundredfold increase in computing performance, with only a sixfold increase in peak power.

The big multi-core CPUs achieve their numbers by using close to 10-20 cm² of die area.

In the '90s, a mainstream CPU had about 1 cm² of area.
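Taking the figures quoted in this exchange at face value, a quick back-of-the-envelope perf/watt comparison can be sketched; the inputs below are the thread's own claimed numbers, not measurements:

```python
# Back-of-the-envelope perf/watt from the figures quoted in this thread
# (claimed numbers, not measurements).

# iPhone vs. a 12-year-old desktop CPU: ~2x performance, 6 W vs. 130 W peak.
iphone_gain = 2.0 * (130 / 6)
print(f"iPhone perf/watt gain: ~{iphone_gain:.0f}x")   # ~43x

# 486 -> Pentium III: ~100x performance for ~6x peak power.
p3_gain = 100.0 / 6
print(f"486->PIII perf/watt gain: ~{p3_gain:.0f}x")    # ~17x
```

By these numbers, the recent transition actually gained more in perf/watt than the 486-to-Pentium III jump, even though its raw performance multiple is much smaller.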
As someone else pointed out, FPGAs have been commonplace in radar architectures. This paper [link] highlights FFT and matrix factorization (Cholesky, QR) as two problems relevant to radar applications where FPGAs can outperform GPUs, let alone CPUs (on problem sizes relevant to radar), all while having substantially better perf/watt.
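To make the two kernels named there concrete, here is a minimal NumPy sketch of FFT-based Doppler processing and a Cholesky factorization of a sample covariance matrix; the sizes are illustrative assumptions, not any particular radar's parameters:

```python
import numpy as np

# Doppler processing over one range gate is an FFT of the pulse returns.
pulses = np.random.randn(4096) + 1j * np.random.randn(4096)
doppler_spectrum = np.fft.fft(pulses)

# Adaptive beamforming factorizes a channel covariance matrix; Cholesky
# (or QR) is the workhorse. 64 channels, 1000 snapshots (made-up sizes).
snapshots = np.random.randn(64, 1000) + 1j * np.random.randn(64, 1000)
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
L = np.linalg.cholesky(R + 1e-6 * np.eye(64))  # jitter keeps R positive definite
```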

A 10 GHz radar can generate a 10-20 GB/s data stream per receiver; a typical 1000-element AESA generates 20 TB/s of data.

The best memory systems can handle a 700 GB/s data stream in sequential access, and only a fraction of that in random access.
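The per-element figure can be reproduced as bandwidth times bits per hertz; a minimal sketch, assuming the 16 bit/Hz figure given later in this thread (e.g. Nyquist sampling at 2 samples/Hz with 8-bit samples) and the full 10 GHz digitized:

```python
# Data-rate arithmetic behind the claim above (16 bit/Hz is an assumption
# taken from later in this thread, e.g. 2 samples/Hz at 8 bits per sample).
bandwidth_hz = 10e9
bits_per_hz  = 16
elements     = 1000

per_element = bandwidth_hz * bits_per_hz / 8          # bytes per second
array_total = per_element * elements
print(f"per element: {per_element / 1e9:.0f} GB/s")   # 20 GB/s
print(f"full array:  {array_total / 1e12:.0f} TB/s")  # 20 TB/s
```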

Bandwidth increased 7-fold in the 12 years between 2008 and 2020.
Between 1995 and 2008 the increase was 300-fold.

Bandwidth doubled between 2008 and 2012 and tripled between 2012 and 2020.

See?

It is relatively simple to make a lot of FPUs: the Pentium had 3 million transistors and could pump out two floating-point multiplications per cycle.

It is easy to make a CPU with 30 billion transistors that can contain 10,000 Pentiums, each pumping out two floating-point multiplications per cycle.

Nvidia does exactly this, dumping thousands of vector CPUs onto a 7 cm² die, capable of doing simple vector calculations on linear arrays, but even then they have to leave out 90% of the Pentium's functionality.

The best that anyone can get out of this is calculating the basis transformation of a large vertex buffer, or maybe some gigantic Google-style matrix, but there is only a limited set of things possible with it. BSP tree calculation is out of the question, yet it is that kind of workload that generates dynamic content and enables deep data analysis.

This is the problem with DSPs and FPGAs: they are cheap, fast, and flexible, but generally designed with small memories, to do a few simple calculations on an object, and that is all.
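A minimal illustration of that regular-vs-irregular divide: elementwise array math maps directly onto wide vector hardware, while a BSP-style tree walk branches on every memory access and cannot be expressed as one big vector operation (toy data structures, nothing radar-specific):

```python
import numpy as np

# Regular workload: one elementwise multiply over a linear array.
# This is exactly what GPU/DSP vector units are built for.
a = np.random.randn(1_000_000).astype(np.float32)
b = np.random.randn(1_000_000).astype(np.float32)
c = a * b

# Irregular workload: a toy BSP-style tree walk. Every step is a
# data-dependent branch plus a pointer chase; nothing vectorizes,
# and memory latency rather than FLOPs sets the speed.
leaf = {"split": None, "left": None, "right": None}
root = {"split": 0.0,
        "left": leaf,
        "right": {"split": 1.0, "left": leaf, "right": leaf}}

def walk(node, point):
    while node["left"] is not None:
        node = node["left"] if point < node["split"] else node["right"]
    return node

print(walk(root, 0.5) is leaf)  # True
```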
 

Anlsvrthng

Captain
Registered Member
Can you please provide a technical reference to support your assertion?
I posted previously the tooling/design cost curve of the ICs on different size from 90nm-5nm.

In effect, you think we are morons who will accept that there is a causal relationship just because you say so.

There is a whole body of scientific evidence that the backbone of missile and radar guidance depends predominantly on digital processors for targeting and discrimination, because the guidance system is dealing with signals.

As an example, from the book "Modern Navigation, Guidance, and Control Processing" by Ching-Fang Lin:

A schematic of a typical guidance design and the role of signal processors.

Question:
Do the new semiconductor manufacturing technologies give any advantage in radar technology to, for example, the USA?

And frankly, it is not about whether it is practical to use an i7 CPU or a Xilinx FPGA in an AESA element; it is more about whether there has been any advancement in semiconductor manufacturing in the past 12-18 years that gives a meaningful edge in radar systems.
 

nlalyst

Junior Member
Registered Member
Bandwidth increased 7-fold in the 12 years between 2008 and 2020.
Between 1995 and 2008 the increase was 300-fold.

Bandwidth doubled between 2008 and 2012 and tripled between 2012 and 2020.
You raise a good point! I would like to mention that the latency "improvements" were almost non-existent (with the exception of RLDRAM), and this is what put a strong damper on sequential CPU performance, whereby speed demons like the P4 had to resort to HT and large caches to mitigate the effects of slow DRAM.
[Figure: DRAM latency trends over time]
Source: Understanding and Improving the Latency of DRAM-Based Memory Systems, K.K. Chang

Another issue with classic DRAM memory solutions is that we are already past the point where fetching data from DRAM consumes more energy than executing arithmetic operations on the fetched data. Hence the extensive research and development of solutions that bring memory closer to the CPU, and of compromise solutions that trade performance for less energy and more capacity, like 3D XPoint.
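To put rough numbers on that crossover, here is a sketch using illustrative per-operation energies in the spirit of the commonly cited ~45 nm ballpark figures (Horowitz, ISSCC 2014); the constants are assumptions, not vendor data:

```python
# Illustrative energy budget: off-chip DRAM fetch vs. the arithmetic it feeds.
# Constants are rough ~45 nm ballpark figures (assumptions, not vendor data).
PJ_PER_DP_FMA     = 20.0    # one double-precision fused multiply-add, on-chip
PJ_PER_DRAM_WORD  = 1300.0  # fetching one 64-bit word from off-chip DRAM

ratio = PJ_PER_DRAM_WORD / PJ_PER_DP_FMA
print(f"one DRAM word fetch ~= {ratio:.0f} double-precision FMAs")  # ~65
```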
 

Brumby

Major
I posted previously the tooling/design cost curve of the ICs on different size from 90nm-5nm.
Sorry, wrong answer. Do you actually bother to refer to what you claim in your posts? Let me remind you of what you actually said: "This is the reason why Russia prefer the 90/45nm technology, it is cheap to make new, low volume chips, that outperform any FPGA produced on 7nm technology."

Kindly produce your evidence to support your assertion that Russian 90/45nm technology outperforms any 7nm FPGA technology.

Question:
Do the new semiconductor manufacturing technologies give any advantage in radar technology to, for example, the USA?

And frankly, it is not about whether it is practical to use an i7 CPU or a Xilinx FPGA in an AESA element; it is more about whether there has been any advancement in semiconductor manufacturing in the past 12-18 years that gives a meaningful edge in radar systems.

Please refer to chart.

[Chart image: 1583104491100.png]
 

nlalyst

Junior Member
Registered Member
A 10 GHz radar can generate a 10-20 GB/s data stream per receiver; a typical 1000-element AESA generates 20 TB/s of data.
Could you please explain how you derived a 10-20 GB/s (or is it bits?) data stream from a single receiver?

I looked a bit through the catalogues of modern FPGAs and found Xilinx models that support 8 Tbit/s of aggregate transceiver bandwidth. Theoretically, they should be able to handle your 1000 T/R radar example from an I/O perspective.
 

Anlsvrthng

Captain
Registered Member
Could you please explain how you derived a 10-20 GB/s (or is it bits?) data stream from a single receiver?

I looked a bit through the catalogues of modern FPGAs and found Xilinx models that support 8 Tbit/s of aggregate transceiver bandwidth. Theoretically, they should be able to handle your 1000 T/R radar example from an I/O perspective.
X-band receiver: 16 bit/Hz.
I looked a bit through the catalogues of modern FPGAs and found Xilinx models that support 8 Tbit/s of aggregate transceiver bandwidth. Theoretically, they should be able to handle your 1000 T/R radar example from an I/O perspective.
That is a switch, not a DSP.

But it is easy: an X-band radar at 10 GHz generates 10-40 GB/s of data.

1000 receiver elements generate 10-40 TB/s of data.

This is not simply a radar picture, but a 10 GHz-wide holographic representation of the forward hemisphere of the radar.

That means it contains the data of every emitter and reflection in the forward hemisphere across the first 10 GHz (though of course, away from the resonant frequency of the radar, the data will have a very weak signal).
 

nlalyst

Junior Member
Registered Member
X-band receiver: 16 bit/Hz.

This is not simply a radar picture, but a 10 GHz-wide holographic representation of the forward hemisphere of the radar.

That means it contains the data of every emitter and reflection in the forward hemisphere across the first 10 GHz (though of course, away from the resonant frequency of the radar, the data will have a very weak signal).

I will have to digest this a bit. I was originally thinking that the data rate would be derived from the radar's signal bandwidth, which I expected to be in the range of 400 MHz; the sampling rate would not need to be higher than 800 MHz in that case ...

After googling a bit, I ran into this part: [link]


It's an RF sampling receiver (ADC) from TI supporting a 9 GHz input bandwidth, but with only a 6.4 gigasamples/s maximum sampling rate. Shouldn't that lead to some aliasing in the output? Also, the resolution is lower than in your example: 12 bits.
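The folding can be worked out directly; a small sketch of where a 9 GHz tone lands when sampled at 6.4 GSa/s (undersampling, which RF-sampling ADCs exploit deliberately by filtering down to a single Nyquist zone first):

```python
# Nyquist-zone folding: where a 9 GHz input appears when sampled at 6.4 GSa/s.
# RF-sampling receivers rely on this intentionally, with an analog filter
# selecting one Nyquist zone; without that filter it is plain aliasing.
fs   = 6.4e9   # max sampling rate from the TI part's spec
f_in = 9.0e9   # a tone near the top of the 9 GHz input bandwidth

f_mod   = f_in % fs
f_alias = min(f_mod, fs - f_mod)   # image in the first Nyquist zone
print(f"{f_in/1e9:.1f} GHz aliases to {f_alias/1e9:.1f} GHz")  # 2.6 GHz
```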
 