Naval missile guidance thread - SAM systems

Anlsvrthng

Captain
Registered Member
What you describe is better known from TVM missiles. There are also command-guided ones, like the HQ-7, where the radar and EO sensors on board the ship control the missile until it hits the target. I think the Osa works that way too.

The difference between SARH and TVM is that with SARH the missile doesn't return its own radar data back to the radar.
Imagine the following picture:
1. The active radar-guided missile has one unit of detection range.
2. The search/targeting radar has 20 units of detection range.

This means that for 95% of the trajectory the missile is flying blind; it has no clue what it is targeting.
If the painted aircraft detects the targeting radar and starts an immediate evasive manoeuvre (for example, turning 90 degrees to the right), then the missile won't have a chance to detect it. When it reaches the calculated position, the plane will be well out of its detection range.

If the target is an anti-ship missile, it will start evading in the last 100-200 km, meaning it is pulling turns of more than 10 g, and it can end up 50-100 km away from the trajectory plotted at the moment of the SM-2 launch.
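To put rough numbers on the evasion argument, here is a minimal Python sketch. All speeds and ranges are illustrative assumptions, not figures from the thread: it compares the straight-line intercept point computed at launch with the target's actual position after an immediate hard turn.

```python
import math

def miss_distance(target_range_m, missile_speed, target_speed, turn_angle_deg):
    """Distance between the predicted intercept point (straight-line
    extrapolation at launch) and the target's actual position if it
    turns immediately after launch. Constant speeds, instantaneous turn."""
    t = target_range_m / missile_speed          # rough fly-out time
    predicted = (target_speed * t, 0.0)         # target assumed to fly straight
    a = math.radians(turn_angle_deg)
    actual = (target_speed * t * math.cos(a), target_speed * t * math.sin(a))
    return math.hypot(predicted[0] - actual[0], predicted[1] - actual[1])

# Assumed numbers: aircraft at 40 km flying ~300 m/s, missile averaging
# ~1000 m/s, immediate 90-degree turn at launch.
d = miss_distance(40_000, 1000.0, 300.0, 90.0)
print(round(d / 1000, 1), "km")   # ~17 km away from the predicted point
```

Even this crude model shows the predicted intercept point can be off by many times the seeker's detection range, which is the core of the argument above.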


And finally, if there is more than one SM-2 missile in the air (most probable), then receiving/processing/commanding them centrally is a far more effective strategy than letting each missile, with its small, weak radar, try to find the target on its own.
Software-level code execution is just not fast enough for DSP work. It needs to be in microcode and hardwired to get the necessary speed. That is why FPGAs are heavily involved inside each and every module. A general-purpose CPU, like an Intel Pentium chip, needs to sit downstream from the array, at the back end of the radar, controlling the radar in general.

Hm.

Do you know how a generic processor works, and the relationship between microcode/assembly code, RISC/CISC/DSP and so on?

And finally, the relationship between clock rate and execution speed, the importance of conditional branching in analysing large data sets and finding correlations, and random access memory?

[Attached chart: Rambus-AI-memory-systems-fig1.jpg]

So, from 1990 to 2005 there was a ~200x increase in processing speed and ~200x in transistor count; between 2005 and 2020 there was a good 200x increase in transistor count but only about a 5x increase in performance. And the speed increase has slowed to practically zero.



Bye bye, software-defined radar.
I am sure there was a "whoops" moment at the Pentagon back in 2010 when they realised that radar processing capabilities had stopped increasing as expected in the 90s. Suddenly the small mistakes in the algorithms/simulators/radar software became extreme obstacles without the help of 100 GHz single-thread computers/DSPs.

IF China/Russia can produce an equivalent of 2005 US technology, then they are fine; married to a bigger airframe like the Su-35 or J-11, they can outsmart the F-35.
 

Tam

Brigadier
Registered Member
Imagine the following picture:
1. The active radar-guided missile has one unit of detection range.
2. The search/targeting radar has 20 units of detection range.

This means that for 95% of the trajectory the missile is flying blind; it has no clue what it is targeting.
If the painted aircraft detects the targeting radar and starts an immediate evasive manoeuvre (for example, turning 90 degrees to the right), then the missile won't have a chance to detect it. When it reaches the calculated position, the plane will be well out of its detection range.

Missiles don't have a narrow FOV; their FOV is quite wide. The antenna is fairly small relative to the wavelength, so the beamwidth is pretty wide. It's also a given that active and semi-active radar homing missiles use monopulse and inverse monopulse, respectively, as their terminal tracking method.

A targeted aircraft will know it is illuminated by radar, but it won't know whether the radar has already fired a missile. It only knows it's being detected and tracked. The pilot and operators on the aircraft can assume a missile has been fired (or not). At this point the radar is in its TWS (Track While Scan) mode while guiding the missile from the ground, and the missile isn't live yet. Just because your ESM tells you a ground radar is operating in TWS doesn't mean a missile has been fired, since TWS is used all the time.

When the missile gets close enough for its own seeker, it goes into the terminal stage. This is where you light up the target illumination radar or fire control radar if it is SARH or TVM, or where the missile seeker goes active with its own built-in radar. Onboard the target aircraft, the waveform detected by its ESM changes from S-band with low PRF to X-band CW. The targeted aircraft knows things have become hot and enacts countermeasures.

If the target is an anti-ship missile, it will start evading in the last 100-200 km, meaning it is pulling turns of more than 10 g, and it can end up 50-100 km away from the trajectory plotted at the moment of the SM-2 launch.

If the target is over the horizon, you won't be using the ship's radar to target it in the first place, since the ship's radar can't reach that target past the Earth's curvature. I am not sure you know how Harpoon operates, because it works exactly as I described earlier --- it uses a series of waypoints laid in by the ship's combat computer and tactical officer. The target ship's position is determined by other means, for example a scouting aircraft.
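As a toy illustration of waypoint-based mid-course guidance, a sketch in Python. The function, route and speeds here are entirely hypothetical, not Harpoon's actual logic:

```python
import math

def fly_waypoints(start, waypoints, step=1.0, speed=240.0):
    """Advance a point toward each waypoint in turn; returns the visited path.
    Purely illustrative of waypoint mid-course flight, not a real autopilot."""
    x, y = start
    path = [(x, y)]
    for wx, wy in waypoints:
        # Step toward the current waypoint until within one step of it.
        while math.hypot(wx - x, wy - y) > speed * step:
            heading = math.atan2(wy - y, wx - x)
            x += speed * step * math.cos(heading)
            y += speed * step * math.sin(heading)
            path.append((x, y))
        x, y = wx, wy
        path.append((x, y))
    return path

# A dogleg route (metres): go around a notional threat area before the
# terminal leg, as laid in by the combat computer.
route = fly_waypoints((0, 0), [(30_000, 0), (60_000, 20_000), (90_000, 20_000)])
print(f"{len(route)} steps, endpoint {route[-1]}")
```

The point of the dogleg is that the missile's flight path need not reveal the launching ship's bearing, and the final leg can approach from an unexpected direction.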

And finally, if there is more than one SM-2 missile in the air (most probable), then receiving/processing/commanding them centrally is a far more effective strategy than letting each missile, with its small, weak radar, try to find the target on its own.

That's how all SAMs are guided: by a land- or sea-based computer through a datalink, until the terminal stage. Then the missile goes off on its own.

TVM (Track Via Missile) and SARH both require the target to be lit up from the ground or ship, but the difference is that TVM works the way you described --- the missile sends its radar picture and telemetry to the ground, and the ground computer sends flight instructions back to the missile. With SARH, the target is lit by your ground station, but flight control and pathing are done onboard the missile.


Hm.

Do you know how a generic processor works, and the relationship between microcode/assembly code, RISC/CISC/DSP and so on?

And finally, the relationship between clock rate and execution speed, the importance of conditional branching in analysing large data sets and finding correlations, and random access memory?

Yes, and it is way too slow to be used for noise filtering and anti-clutter processing. A DSP is a dedicated co-processor in its own right, designed to perform complex mathematical operations directly in hardware. Another analogy is the GPU: you don't use the general-purpose processor to draw the screen, you use a GPU. For the kind of speed you need for radio-frequency processing, it has to be implemented directly in hardware, using FPGAs for prototyping and ASICs for mass production; unless the end application is too low-volume to justify an ASIC, in which case you stay with FPGAs. FPGAs and ASICs are as heavily used in base stations as they are in radars.
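To make the workload concrete: pulse compression, a staple radar signal-processing task, is a correlation done via FFTs. A pure-Python sketch follows; it is purely illustrative of the maths, since real systems run this kind of kernel in fixed-point hardware at vastly higher rates:

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT (input length must be a power of two)."""
    n = len(x)
    if n == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def ifft(x):
    """Inverse FFT via conjugation."""
    n = len(x)
    return [v.conjugate() / n for v in fft([u.conjugate() for u in x])]

def matched_filter(echo, pulse):
    """Pulse compression: correlate the received echo with the transmitted
    pulse in the frequency domain (multiply by the conjugate spectrum)."""
    n = len(echo)
    padded = pulse + [0.0] * (n - len(pulse))
    E, P = fft(echo), fft(padded)
    return ifft([e * p.conjugate() for e, p in zip(E, P)])

# A 4-sample pulse buried at delay 9 in a 16-sample receive window:
pulse = [1.0, 1.0, -1.0, 1.0]
echo = [0.0] * 16
for i, s in enumerate(pulse):
    echo[9 + i] = s
out = matched_filter(echo, pulse)
peak = max(range(16), key=lambda k: abs(out[k]))
print("peak at sample", peak)   # the filter output peaks at the pulse delay
```

A radar repeats this per pulse, per channel, continuously, which is why the kernel lives in FPGA/ASIC fabric rather than in general-purpose software.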
 

Anlsvrthng

Captain
Registered Member
Missiles don't have a narrow FOV; their FOV is quite wide. The antenna is fairly small relative to the wavelength, so the beamwidth is pretty wide. It's also a given that active and semi-active radar homing missiles use monopulse and inverse monopulse, respectively, as their terminal tracking method.
Interesting, where did you read FOV?

I wrote DETECTION RANGE.

And because of the above, the detection range is much smaller for the missile than for the tracking radar.
A targeted aircraft will know it is illuminated by radar, but it won't know whether the radar has already fired a missile. It only knows it's being detected and tracked. The pilot and operators on the aircraft can assume a missile has been fired (or not). At this point the radar is in its TWS (Track While Scan) mode while guiding the missile from the ground, and the missile isn't live yet. Just because your ESM tells you a ground radar is operating in TWS doesn't mean a missile has been fired, since TWS is used all the time.
I don't think there is an aircrew that stupid anywhere on Earth.

If they are painted by a targeting radar, they just turn around and get out of its range.
The launched missiles can kiss the water in that case.

You are expecting the following:
1. The targeted aircraft doesn't know the capabilities of the naval or land-based SAM.
2. The targeted aircraft doesn't have radar, IR, UV or other means to detect the missile launch.
3. The target doesn't change direction randomly.

If the above were true, there would be no problem killing any intruder with a SAM.

So even an S-200 could kill an F-35, if the F-35 pilot keeps to the above rules.





If the target is over the horizon, you won't be using the ship's radar to target it in the first place, since the ship's radar can't reach that target past the Earth's curvature. I am not sure you know how Harpoon operates, because it works exactly as I described earlier --- it uses a series of waypoints laid in by the ship's combat computer and tactical officer. The target ship's position is determined by other means, for example a scouting aircraft.

The ship's radar needs to feed data to the missile, and another radar has to feed the target's actual position to the interceptor through the ship's radar.
Again, the AShM will manoeuvre like hell; how will the missile know where it will be in 50 seconds' time, after flying a further 40-50 km?
And generally, this is not an argument against what I wrote; it just adds extra steps to the missile's command chain.
That's how all SAMs are guided: by a land- or sea-based computer through a datalink, until the terminal stage. Then the missile goes off on its own.

Exactly.
So not only do the dumb missiles need data updates, but even the active radar homing ones as well.
TVM (Track Via Missile) and SARH both require the target to be lit up from the ground or ship, but the difference is that TVM works the way you described --- the missile sends its radar picture and telemetry to the ground, and the ground computer sends flight instructions back to the missile. With SARH, the target is lit by your ground station, but flight control and pathing are done onboard the missile.
Radar and telemetry return isn't strictly necessary, but it dramatically increases the chance of interception, and it feeds important data back to the system operators/designers.

You can launch the missile on a ballistic trajectory and accept that out of 10 missiles one will have a chance to "see" the target, and maybe one in 30 will hit it.

Or use "data fusion" and increase the chance of a hit from 1:30 to 1:4.
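The 1:30 vs. 1:4 figures above are the poster's illustrative odds; assuming independent shots, a quick sketch shows what they mean for a salvo:

```python
def salvo_kill_probability(p_single, n):
    """Probability that at least one of n independent shots hits."""
    return 1 - (1 - p_single) ** n

# Illustrative per-missile odds from the post: 1-in-30 without mid-course
# updates vs. 1-in-4 with datalinked updates, for a 4-missile salvo.
for p in (1 / 30, 1 / 4):
    print(f"p_hit={p:.3f}: 4-missile salvo -> "
          f"{salvo_kill_probability(p, 4):.2f}")
```

With the assumed numbers, the salvo kill probability goes from roughly 0.13 to roughly 0.68, which is the practical payoff of mid-course data fusion being argued here.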

Yes, and it is way too slow to be used for noise filtering and anti-clutter processing. A DSP is a dedicated co-processor in its own right, designed to perform complex mathematical operations directly in hardware. Another analogy is the GPU: you don't use the general-purpose processor to draw the screen, you use a GPU. For the kind of speed you need for radio-frequency processing, it has to be implemented directly in hardware, using FPGAs for prototyping and ASICs for mass production; unless the end application is too low-volume to justify an ASIC, in which case you stay with FPGAs. FPGAs and ASICs are as heavily used in base stations as they are in radars.

OK, so you know that a DSP (or GPU) is nothing more than a cut-down CPU, with fewer pipeline stages, no branch prediction, no microcode decoder, no security rings or page protection, and lacking many other features needed for general-purpose code execution.

And again, your answer has no bearing on my argument about the end of the USA's advantage in semiconductor manufacturing.
 

Tam

Brigadier
Registered Member
Interesting, where did you read FOV?

I wrote DETECTION RANGE.

And because of the above, the detection range is much smaller for the missile than for the tracking radar.

That is why the radar range of the missile seeker is called the terminal range. It represents the final stage of the missile's flight, where the seeker achieves target lock-on and the missile closes in for the kill.

The range of a semi-active or TVM missile seeker might be greater than that of an active seeker, since the former relies on a much more powerful beam from a large external source, while the active seeker carries its own small, battery-powered emitter. The SARH range advantage depends on the target's distance from the illuminating radar, of course: the greater the range, the weaker the illumination beam, and the lower the missile's potential kill probability. The closer the engagement, however, the stronger the benefit of SARH over ARH, except in large-scale multiple engagements.
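One way to see the power argument: in the monostatic radar range equation, received power falls off with the fourth power of range, so detection range grows only with the fourth root of transmit power. A tiny sketch (the power ratio is an illustrative assumption, not a figure for any real illuminator or seeker):

```python
def monostatic_range_scale(power_ratio):
    """Relative detection range from a relative transmit power, via the
    radar range equation: R scales as P^(1/4), all other terms equal."""
    return power_ratio ** 0.25

# Assumed example: an illuminator radiating 10,000x the power of a small
# battery-fed seeker buys only ~10x the detection range, not 10,000x.
print(monostatic_range_scale(10_000))   # 10.0
```

The fourth-root law is also why a "small, weak" onboard seeker is not hopeless: shrinking the power budget by orders of magnitude shrinks the range by far less.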

I don't think there is an aircrew that stupid anywhere on Earth.

If they are painted by a targeting radar, they just turn around and get out of its range.
The launched missiles can kiss the water in that case.

That is what is usually done. But by the time the aircraft detects the targeting radar from the ground or surface, the missile is already within range of its seeker. Simply put, the ground crew withholds the targeting radar until the last moment to reduce the warning time given to the target. The target is tracked by the search radar, with the targeting radar off (apart from the datalink guiding the SAM), until the missile is within its seeker's range of the target and the illumination and tracking radar lights up.

If you use the fire control radar --- tracking and illumination with an X-band + FMCW waveform --- too early, you give far too much warning time to the target. Examples of such FCRs are the Russian Tombstone and Flap Lid radars for the S-300 SAM. These are backed by search radars like Big Bird.

Just to clarify, search radars in the S- and L-bands have their own tracking ability, although it's not as tight as a fire control radar running on X-band. It is, however, sufficient to bring a missile within its terminal range, which can be around 20 km or so. When the missile is close enough to use its seeker, you light up the target for it with the FCR.

If the missile is the active sort, like the Aster missiles of the SAMP/T complex, the target is continually tracked by the search radar, which also has a tracking ability. Once the missile is within its seeker range, it is instructed to turn its seeker live.

You do not use the illumination radar early in the missile's flight. You only use it at the final moment.

You are expecting the following:
1. The targeted aircraft doesn't know the capabilities of the naval or land-based SAM.
2. The targeted aircraft doesn't have radar, IR, UV or other means to detect the missile launch.
3. The target doesn't change direction randomly.

If the above were true, there would be no problem killing any intruder with a SAM.

So even an S-200 could kill an F-35, if the F-35 pilot keeps to the above rules.

Any missile can kill an F-35 if the pilot is not careful.

The full capabilities of naval and surface SAMs are never fully known. That is OPSEC. Numbers published in brochures and on the internet should be taken with a pinch of salt.

Aircraft use an RWR, or Radar Warning Receiver, which is an ESM that indicates when a radar is scanning them. It also warns of a radar missile lock-on. IR/UV sensors are mainly used against infrared missiles, which won't show up on an RWR. In more sophisticated RWRs, the radar signals are matched against a database, which allows the plane to identify the radar being used against it and the mode that radar is in.

If you have CLOS-type, fully command-guided missiles (short-range systems like Crotale, HQ-7, HQ-17 and Tor M1), you don't fire until the target is roughly within 15 km. The missile launch vehicle and its radar remain inactive until the target is literally flying close to or over it. Some of these systems are also supported by optical guidance.

The ship's radar needs to feed data to the missile, and another radar has to feed the target's actual position to the interceptor through the ship's radar.
Again, the AShM will manoeuvre like hell; how will the missile know where it will be in 50 seconds' time, after flying a further 40-50 km?
And generally, this is not an argument against what I wrote; it just adds extra steps to the missile's command chain.

An AShM functions more like a drone: a self-guided kamikaze aircraft. It has its own telemetry, GPS and built-in radar --- AShMs have been active radar seekers ever since this type of missile was invented, including all those Seersuckers and Silkworms. At first, antiship missiles were only radar-guided with an inertial guidance system using gyroscopes, but as ranges increased, datalink updates, telemetry and GPS were added.

If the AShM is manoeuvring, it can increase its chances of evading kinetic defences such as missiles and AA guns. Do note that SAMs and AA guns use self-destructing ordnance: if they miss, they explode, and the target missile had better hope it is not within the splash radius of the blast and its fragments (HE-FRAG). Guns can also use track-prediction algorithms in their gunnery computers, so the gun keeps its lead on the evading missile. It becomes a battle of robots and electronics.
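A minimal version of such a lead-prediction algorithm, assuming a constant-velocity target and a constant shell speed (both simplifications; real gunnery computers model ballistics, wind and target acceleration):

```python
import math

def lead_time(px, py, vx, vy, shell_speed):
    """Time of flight t such that a shell fired now at constant speed meets
    a constant-velocity target: |p + v*t| = s*t (quadratic in t).
    Returns None if no positive solution exists."""
    a = vx * vx + vy * vy - shell_speed ** 2
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    valid = [t for t in roots if t > 0]
    return min(valid) if valid else None

# Assumed example: sea-skimmer 4 km out, crossing at 300 m/s, against a
# gun with ~1100 m/s muzzle velocity.
t = lead_time(4000.0, 0.0, 0.0, 300.0, 1100.0)
aim = (4000.0, 300.0 * t)   # aim point = predicted target position at impact
print(round(t, 2), "s, aim at", aim)
```

The evading missile's whole game is to make the constant-velocity assumption wrong faster than the gun can re-solve it.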

Finally, even if the missile manages to evade all the kinetic defenses, it has to deal with the soft kill defenses --- decoys, electronic jamming and spoofing. Here again, the antiship missile may have its own methods to counter decoys, jamming and spoofing.

Exactly.
So not only do the dumb missiles need data updates, but even the active radar homing ones as well.

Bingo.

You can launch the missile on a ballistic trajectory and accept that out of 10 missiles one will have a chance to "see" the target, and maybe one in 30 will hit it.

Or use "data fusion" and increase the chance of a hit from 1:30 to 1:4.

All the missiles will get to the point where they can see the target, because your radar is guiding them to it. Do note that you cannot launch very many missiles at once, even actively guided ones, because the ground or ship system has only a limited number of datalink channels, and the combat system has a limited capacity to process and manage the missiles.

One of the main benefits of an active radar seeking missile is that it can engage a target below the radar horizon of the ship. At that point, with the radar horizon cutting the line of sight for the datalinks, the missile needs to go autonomous and hunt the target on its own, which would be an antiship missile skimming low near the surface.

So the SAM is on a ballistic trajectory headed down; at some point it can no longer communicate with the ship, but it is already active and guiding itself to the target antiship missile.

So how can the antiship missile defend against this? It should probably have its own radar and RWR system, so that if it detects the radar signal of a SAM locking on to it, it starts its evasive manoeuvres. From here on it is robot vs. robot.

OK, so you know that a DSP (or GPU) is nothing more than a cut-down CPU, with fewer pipeline stages, no branch prediction, no microcode decoder, no security rings or page protection, and lacking many other features needed for general-purpose code execution.

And again, your answer has no bearing on my argument about the end of the USA's advantage in semiconductor manufacturing.

For embedding within a TRM, a super-fast but highly specialised processor like a DSP is all you need, plus a CPU that monitors and manages the whole system. More complex general-purpose CPUs are used towards the radar's back end, in the radar's own central computer that manages the entire set.

As for the 'end' of the USA's advantage in semiconductor manufacturing: their chips are also fabbed by TSMC. Recently, US politicians and the DoD have been putting pressure on TSMC to open a US plant that can fab chips for the US military.

There is another thread in the forum that explains why China needs to develop its own cutting edge chip making equipment.
 

Anlsvrthng

Captain
Registered Member
Nice to see that there can be agreement in a discussion on a forum :)
One of the main benefits of an active radar seeking missile is that it can engage a target below the radar horizon of the ship. At that point, with the radar horizon cutting the line of sight for the datalinks, the missile needs to go autonomous and hunt the target on its own, which would be an antiship missile skimming low near the surface.

So the SAM is on a ballistic trajectory headed down; at some point it can no longer communicate with the ship, but it is already active and guiding itself to the target antiship missile.

So how can the antiship missile defend against this? It should probably have its own radar and RWR system, so that if it detects the radar signal of a SAM locking on to it, it starts its evasive manoeuvres. From here on it is robot vs. robot.

The AShM needs to be detected and tracked up to the point where the radar can illuminate it for the missile.
A SAM with an active seeker doesn't need illumination or precise tracking.

And from a game-theory standpoint: IF the AShM knows the target's position (to within a few km), THEN in the last few hundred km it will fly evasive manoeuvres at full g, regardless of any detected SAM.
Why conserve the fuel?

For embedding within a TRM, a super-fast but highly specialised processor like a DSP is all you need, plus a CPU that monitors and manages the whole system. More complex general-purpose CPUs are used towards the radar's back end, in the radar's own central computer that manages the entire set.

As for the 'end' of the USA's advantage in semiconductor manufacturing: their chips are also fabbed by TSMC. Recently, US politicians and the DoD have been putting pressure on TSMC to open a US plant that can fab chips for the US military.

There is another thread in the forum that explains why China needs to develop its own cutting edge chip making equipment.

This is extremely simple: there is no "super-fast but specialised CPU like a DSP".

It is easy to get one vector multiplication per cycle from an Intel i3 CPU, and that is the best that any GPU vector processor or DSP can push.

The difference is that GPUs these days contain thousands of vector processors, doing pre-defined simple matrix-vector multiplications at trillions per second, transforming a steady, pre-defined stream of coordinates into a new coordinate system.

The fast Fourier transform is a good candidate for this kind of computing, but sadly, to do deep analysis on the incoming radar return you need to re-run the data through the processors again and again, and that requires something more like a generic x86 CPU than a GPU/DSP.
 

nlalyst

Junior Member
Registered Member
So, from 1990 to 2005 there was a ~200x increase in processing speed and ~200x in transistor count; between 2005 and 2020 there was a good 200x increase in transistor count but only about a 5x increase in performance. And the speed increase has slowed to practically zero.

Bye bye, software-defined radar.
I am sure there was a "whoops" moment at the Pentagon back in 2010 when they realised that radar processing capabilities had stopped increasing as expected in the 90s. Suddenly the small mistakes in the algorithms/simulators/radar software became extreme obstacles without the help of 100 GHz single-thread computers/DSPs.

IF China/Russia can produce an equivalent of 2005 US technology, then they are fine; married to a bigger airframe like the Su-35 or J-11, they can outsmart the F-35.

I am curious to know why you consider single-thread performance critical for radar applications, and why the software-defined radar project has supposedly failed because of that.

It is true that single-thread performance has seen lacklustre improvements since 2006 (from Conroe onwards). However, aggregate performance has continued to increase thanks to the growing number of transistors per chip and continuous improvements in perf/watt. An iPhone today runs circles around a consumer desktop computer of 10 years ago while consuming 25x less peak power, and two orders of magnitude less power at light load. Correct me if I am wrong, but this must have pretty substantial implications for airborne radars.

As someone else pointed out, FPGAs have long been commonplace in radar architectures. This study highlights FFT and matrix factorisation (Cholesky, QR) as two problems relevant to radar where FPGAs can outperform GPUs, let alone CPUs (on problem sizes relevant to radar applications), all while having substantially better perf/watt.
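For reference, Cholesky factorisation, one of the kernels mentioned, is a small, regular, dense linear-algebra routine, which is exactly the kind of fixed dataflow that maps well onto FPGAs. A pure-Python sketch:

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L * L^T, for a symmetric
    positive-definite matrix A given as a list of row lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entry: square root of the remaining pivot.
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
print(L)   # [[2.0, 0.0], [1.0, sqrt(2)]]
```

In radar, factorisations like this appear in adaptive beamforming (solving covariance systems), and the fixed loop structure with no data-dependent branching is why hardware pipelines beat general-purpose cores on it.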
 

Anlsvrthng

Captain
Registered Member
I am curious to know why you consider single-thread performance critical for radar applications, and why the software-defined radar project has supposedly failed because of that.

It is true that single-thread performance has seen lacklustre improvements since 2006 (from Conroe onwards). However, aggregate performance has continued to increase thanks to the growing number of transistors per chip and continuous improvements in perf/watt. An iPhone today runs circles around a consumer desktop computer of 10 years ago while consuming 25x less peak power, and two orders of magnitude less power at light load. Correct me if I am wrong, but this must have pretty substantial implications for airborne radars.

As someone else pointed out, FPGAs have long been commonplace in radar architectures. This study highlights FFT and matrix factorisation (Cholesky, QR) as two problems relevant to radar where FPGAs can outperform GPUs, let alone CPUs (on problem sizes relevant to radar applications), all while having substantially better perf/watt.

The CPU single-thread performance is a good benchmark of the improvement in semiconductor manufacturing technologies.

And it indicates that the observed improvement in processing speed comes from tweaking the circuit design rather than from the manufacturing process.

So, the 45/22/3 nm technology doesn't provide the edge and the improvement; the CPU and compiler design do.

And considering that the floating-point processing in radars requires a lot of (from a CPU design standpoint) simple calculations done by simple circuits, it doesn't sound like there is any advantage on the USA's side due to superior technology.

This is the reason why Russia prefers 90/45 nm technology: it is cheap for making new, low-volume chips that outperform any FPGA produced on 7 nm technology.


This is basic data analysis: separating the different components of the system, like the manufacturing process, design improvements and new marketing tools.
 

Tam

Brigadier
Registered Member
Nice to see that there can be agreement in a discussion on a forum :)


The AShM needs to be detected and tracked up to the point where the radar can illuminate it for the missile.
A SAM with an active seeker doesn't need illumination or precise tracking.

And from a game-theory standpoint: IF the AShM knows the target's position (to within a few km), THEN in the last few hundred km it will fly evasive manoeuvres at full g, regardless of any detected SAM.
Why conserve the fuel?

An AShM's own first line of defence is its low detectability. A missile has an RCS below 1 m², and antiship missiles can go down to 0.5-0.1 m². They fly low, near the water. Even supersonic missiles can fly low over the water; maybe a supersonic AShM doesn't fly as low as a subsonic one, but still low enough to stay beneath the radar horizon. The Earth's curvature and radar clutter near the water work to the AShM's advantage. The AShM does use more fuel when sea skimming, as it bobs up and down near the sea surface. But then again, cruise missiles don't travel in a straight line either; they follow the terrain contour to stay as low as possible.

AShMs have a long-range Hi-Lo approach: they fly high initially to conserve fuel, but as they approach known air defence zones they drop down to sea-skimming altitude to exploit the radar horizon and the sea clutter. AShMs also have a Lo-Lo mode for when the opposing ships are much closer to each other: the AShM is launched, soon settles into a low flight profile near the water, and stays that way for the whole flight. Not all antiship engagements happen over the horizon; they can be up close, with the ships in line of sight of each other, in which case both ships put their fire control radars on each other and shoot their AShMs directly. This can happen in littoral waters and channel straits, among smaller vessels.
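The radar-horizon numbers behind this argument follow from the standard 4/3-earth approximation; a short sketch (the antenna and missile heights are illustrative assumptions):

```python
import math

def radar_horizon_km(h1_m, h2_m=0.0):
    """Approximate radar horizon with 4/3-earth refraction:
    d ≈ 4.12 * (sqrt(h1) + sqrt(h2)) km, heights in metres."""
    return 4.12 * (math.sqrt(h1_m) + math.sqrt(h2_m))

# Assumed heights: mast-mounted radar at 20 m vs. a sea-skimmer at 5 m,
# then the same radar against a target at 1000 m altitude.
print(round(radar_horizon_km(20, 5), 1), "km")     # ~27.6 km
print(round(radar_horizon_km(20, 1000), 1), "km")  # ~148.7 km
```

The contrast between the two numbers is the whole point of sea skimming: dropping from altitude to a few metres above the waves cuts the defender's detection range by an order of magnitude.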

Perhaps some AShMs have a built-in RWR that detects threat radars so they can perform evasive manoeuvres automatically. An AShM does not need to manoeuvre evasively constantly along its route.

To attain ever higher range, the engines of antiship missiles need to become more efficient over time. Along with that come the exotic fuel grades used for the engine (JP-1 to JP-10, maybe past that now), using chemistry to raise the joule content of every litre.


This is extremely simple: there is no "super-fast but specialised CPU like a DSP".

It is easy to get one vector multiplication per cycle from an Intel i3 CPU, and that is the best that any GPU vector processor or DSP can push.

The difference is that GPUs these days contain thousands of vector processors, doing pre-defined simple matrix-vector multiplications at trillions per second, transforming a steady, pre-defined stream of coordinates into a new coordinate system.

The fast Fourier transform is a good candidate for this kind of computing, but sadly, to do deep analysis on the incoming radar return you need to re-run the data through the processors again and again, and that requires something more like a generic x86 CPU than a GPU/DSP.

Except it is not just general FP operations; I am referring to the operations used in RF processing in particular. It's a fact that you don't use general CPUs like an i3 inside a TRM; they use an SoC in the form of an FPGA. General CPUs are used more in the radar back end and in the combat data system. Also note that this T/R module happens to be Russian, made by the Phazotron company. You can see it uses an Altera FPGA.

[Attached image: Zhuk-TR-Module-1S (1).jpg]
 

Brumby

Major
Your assertions are typically either specious or spurious, sprinkled with entirely off-tangent references of remote relevance to the subject matter.

The CPU single-thread performance is a good benchmark of the improvement in semiconductor manufacturing technologies.

And it indicates that the observed improvement in processing speed comes from tweaking the circuit design rather than from the manufacturing process.

So, the 45/22/3 nm technology doesn't provide the edge and the improvement; the CPU and compiler design do.

And considering that the floating-point processing in radars requires a lot of (from a CPU design standpoint) simple calculations done by simple circuits, it doesn't sound like there is any advantage on the USA's side due to superior technology.
This is an example of it. The subject of this thread is missiles and guidance, not semiconductor manufacturing. If you wish to invoke the latter as something of relevance, you have to tie it back to the subject matter and show why it is not only applicable but specifically relevant. Making assertions without substantiating them is just cow droppings. Where is the nexus? You are only implying there is one.

This is the reason why Russia prefers 90/45 nm technology: it is cheap for making new, low-volume chips that outperform any FPGA produced on 7 nm technology.
Can you please provide a technical reference to support your assertion?

This is basic data analysis: separating the different components of the system, like the manufacturing process, design improvements and new marketing tools.
In effect, you expect us to accept that there is a causal relationship simply because you say so.

There is a whole body of scientific evidence that the backbone of missile and radar guidance depends predominantly on digital processors for targeting and discrimination, because the guidance system is dealing with signals.

As an example, from the book "Modern Navigation, Guidance, and Control Processing" by Ching-Fang Lin:
[Attached figures: a schematic of a typical guidance design and the role of signal processors.]
 