Naval missile guidance thread - SAM systems

Tam

Brigadier
Registered Member
It can skip, say, every 50th scan and use that frame to communicate with the missiles.

Or whatever; it is easy to arrange the timing for time-shared search/tracking and communication.

And the main point: communication requires the smallest amount of resources. It requires far less power to transmit than a normal radar pulse, and thanks to the large aperture the missile only needs to send its data in continuously updated, short, highly compressed bursts, which the radar can easily handle.
It is fully computer controlled, not an analogue circuit, so it does not have to follow a pre-defined pattern.

If part of the array is used as a small phased-array antenna, then the search/tracking area of the radar decreases proportionally.

So 10% of the aperture lost to other jobs will decrease the search area by 10%.

It is only a marketing ploy; its importance to warfare is about the same as a built-in MP3 player in the operator terminal.

There are a couple of scenarios for interleaving a communication signal within the pulse cycle of a pulse radar.

Let's say you have a 100 microsecond message and the radar has a 25 microsecond duty cycle. The subarray you use to send the message will have to work separately from the main array for four duty cycles. You're not going to notice this in real time. After the message is sent, the subarray returns to the normal pulse cycle. If the communication is two-way, it's going to operate in CW mode, which means that during this 100 microsecond period the subarray would be transmitting and receiving simultaneously (no duty cycle).

The other way is that the 100 microsecond message is divided into chunks of 10 microseconds (for example) and transmitted within a small 10 microsecond window inside the 25 microsecond duty cycle, after the deadtime. This may fall during the receive time of the subarray, so there might be a range ambiguity for the radar echoes received during this time, but it's limited to the subarray, since the rest of the array won't have the issue. The full message is sent in 10 chunks over 10 duty cycles. After that, the subarray returns to its normal operation. You're not going to feel this in real time.
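
To make the second scheme concrete, here is a minimal timing sketch. I'm treating the 25 microsecond figure as the subarray's pulse repetition interval and assuming a notional 5 microsecond transmit pulse; all numbers are illustrative, not real SPY-1 parameters.

```python
# Minimal timing sketch of interleaving a 100 us datalink message into a
# pulse radar's cycle, using the illustrative numbers from the post.
PRI_US        = 25    # assumed pulse repetition interval of the subarray, us
TX_PULSE_US   = 5     # assumed transmit pulse width, us
COMM_CHUNK_US = 10    # size of each datalink chunk, us
MESSAGE_US    = 100   # total message length, us

chunks = MESSAGE_US // COMM_CHUNK_US          # 10 chunks
cycles_needed = chunks                        # one chunk per cycle -> 10 cycles
span_us = cycles_needed * PRI_US              # wall-clock time to send it all

for i in range(cycles_needed):
    t0 = i * PRI_US
    print(f"cycle {i:2d}: tx {t0}-{t0 + TX_PULSE_US} us | "
          f"comm chunk {i} at {t0 + TX_PULSE_US}-{t0 + TX_PULSE_US + COMM_CHUNK_US} us | "
          f"receive for the rest of the cycle")

print(f"whole message sent in {span_us} us over {cycles_needed} cycles; "
      f"each cycle loses {COMM_CHUNK_US} us of its {PRI_US - TX_PULSE_US} us receive window")
```

The point is only that the whole message costs about 250 microseconds of wall-clock time and a 10 microsecond bite out of each receive window, and only for that one subarray.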

Intel failed to deliver 8GHz P4 CPUs by 2008, or 50GHz ones by 2015, so the only benefit of the (AESA-like) one-CPU-per-antenna approach is the decreased signal loss.

This has nothing to do with CPUs. When a radio signal hits the receiver element, it is in analog form, and you need to convert it to digital form. There is a signal A/D converter downstream of the antenna. The path from the receiver to the A/D converter is where signal loss can occur, so the closer the A/D converter is brought to the receiver, the lower the loss. Radar systems evolved from a single A/D converter for the entire array, situated behind it, to multiple A/D converters for subarrays of the array, to bringing the A/D converter itself just behind the receiver inside every T/R module.
 

Tam

Brigadier
Registered Member
Assuming that the first versions of the HQ-9 used C-band for midcourse guidance, the move to S-band may be a consequence of increased missile range, or of system simplification that allows a larger S-band aperture for the radar at the cost of it having to perform an additional function. I suspect it is the latter.

For reference, the early 2000s HQ-9s were credited with a 90km range. HQ-9B is reported to have a slant range of 200km, possibly more.

Quite possibly. The C-band range might be around 100 to 150km, 175 at most. The MPQ-53 has a range of 170km. Communicating on S-band would give a longer range. There is a caveat to having your main array as your datalink source, however. Having a longer range with your AAW radar and SAM isn't going to mean much if the attacking aircraft and antiship missiles fly low, below the radar horizon, which means you are still going to detect and engage them at around 30km anyway. If your SAM has an OTH ability, which is possible with an ARH or IR seeker head, its flight path would still be determined by having LOS with the SPY-1 array (or the Type 346X on the Chinese side). The missile needs to maintain LOS with the array as long as possible to keep its updates coming, and can only drop down below the radar horizon in the terminal stage, which would have to be above the target. This is assuming the missile hasn't been handed over to an aircraft, with datalink control transferred to the aircraft. For an extended-range over-the-horizon engagement, you should consider putting a datalink somewhere on the highest part of the ship for an extended radar horizon.
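
The roughly 30km figure is just the radar horizon. A quick check with the standard 4/3-earth approximation, with all heights purely assumed for illustration:

```python
from math import sqrt

def radar_horizon_km(h1_m: float, h2_m: float) -> float:
    """Radar horizon with standard 4/3-earth refraction, heights in metres."""
    return 4.12 * (sqrt(h1_m) + sqrt(h2_m))

# Assumed heights, for illustration only.
array_on_deckhouse_m = 20    # notional height of a deckhouse array face
datalink_on_mast_m   = 35    # notional height of a mast-top datalink
sea_skimmer_m        = 5     # notional sea-skimming missile altitude

print(radar_horizon_km(array_on_deckhouse_m, sea_skimmer_m))  # ~27.6 km
print(radar_horizon_km(datalink_on_mast_m,  sea_skimmer_m))   # ~33.6 km
```

Which is also why pushing the datalink higher up the mast buys a few extra kilometres of LOS to a low flyer.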

Going from the communication in the main array to the white dome datalink on top of the Burke's mast, that one should be used for communicating with other missiles, namely the Harpoons, Tomahawks, ASROCs, and the helicopters. A mast position for the datalink offers an extended radar horizon, greater than what is possible when communicating from the deckhouse arrays. This allows the missile to fly low and maintain LOS contact with the ship, which should suit antiship and cruise missiles well. But if a SAM is intended to engage an enemy antiship missile or plane beyond and below the horizon, you have the question of whether you can bring the datalink high enough that the ship can stay in contact and LOS with the missile and keep the updates going as long as possible. So I am wondering whether the SM-6 in particular would still use the main array or use the universal datalink for communicating. I am speculating that the 055 may have a new datalink, which could be one of the panels above the X-band radar. There are two panels above the X-band; the other one, the smaller one in the middle, might be for CEC, which also works best with an extended radar horizon as it allows the ships to operate at longer distances while keeping LOS with each other. If the upper panel is a datalink, it could be for the YJ-18, but I wonder if an extended-range ARH HHQ-9 might use it to engage OTH.
 

Tetrach

Junior Member
Registered Member
Can you give me a source on this?

Several of my sources clearly contradict what you are saying.

"The AN/SPY-1 radar performs missile and target tracking and also serves as the shipboard data link transceiver. ", Standard Missile: Guidance System Development; Witte and Mcdonald

"The AEGIS uplink, transmitted by the AN/SPY-1 radar to SM or ESSM, is in the S-band ... In the AEGIS system, the missile always responds to an uplink with a downlink transmission. The downlink, also at S-band, uses pulse position modulation and sends back missile status information.", Missile Communication Links, Clifton E. Cole Jr.

Also, communication with the Standard missile occurs through a data link on top of the Burke's mast. It is the white thing on top of the mast, above the IFF ring. You can't use a PESA for communicating with a missile when it's already working as a radar. The PESA is connected to only one big transmission amplifier unit; for communication it would require a small separate amplifier to generate the signal separately. VVV

Hey, I just found this. Is this what you're all talking about? [attached screenshot]
 

Anlsvrthng

Captain
Registered Member
There are a couple of scenarios for interleaving a communication signal within the pulse cycle of a pulse radar.

Let's say you have a 100 microsecond message and the radar has a 25 microsecond duty cycle. The subarray you use to send the message will have to work separately from the main array for four duty cycles. You're not going to notice this in real time. After the message is sent, the subarray returns to the normal pulse cycle. If the communication is two-way, it's going to operate in CW mode, which means that during this 100 microsecond period the subarray would be transmitting and receiving simultaneously (no duty cycle).

The other way is that the 100 microsecond message is divided into chunks of 10 microseconds (for example) and transmitted within a small 10 microsecond window inside the 25 microsecond duty cycle, after the deadtime. This may fall during the receive time of the subarray, so there might be a range ambiguity for the radar echoes received during this time, but it's limited to the subarray, since the rest of the array won't have the issue. The full message is sent in 10 chunks over 10 duty cycles. After that, the subarray returns to its normal operation. You're not going to feel this in real time.
That degrades the array performance drastically.

Losing 75% of the radar's emitting power has the same effect as losing half of the aperture to other purposes.
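
You can check this with the basic monostatic radar range equation: detection range scales as (P x A^2)^(1/4) when the same aperture both transmits (gain ~ A) and receives (effective area ~ A). A quick sketch:

```python
# Quick check of the power-vs-aperture equivalence using the monostatic
# radar range equation: R_max is proportional to (P * A**2) ** 0.25.

def relative_range(power_fraction: float, aperture_fraction: float) -> float:
    return (power_fraction * aperture_fraction**2) ** 0.25

print(relative_range(0.25, 1.0))   # 75% of power lost  -> ~0.707 of range
print(relative_range(1.0, 0.50))   # half the aperture  -> ~0.707 of range
```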

It makes more sense to skip one frame and use it to communicate with everyone.

And it is NOT an advantage of AESA; a PESA can do this as well - for example, the Irbis has two tubes.

And it is possible to use the emitter of a subarray during the listening period, but of course that will degrade the range as well.

This has nothing to do with CPUs. When a radio signal hits the receiver element, it is in analog form, and you need to convert it to digital form. There is a signal A/D converter downstream of the antenna. The path from the receiver to the A/D converter is where signal loss can occur, so the closer the A/D converter is brought to the receiver, the lower the loss. Radar systems evolved from a single A/D converter for the entire array, situated behind it, to multiple A/D converters for subarrays of the array, to bringing the A/D converter itself just behind the receiver inside every T/R module.

And this is the point where the whole AESA idea hit a brick wall in 2004.

There is no CPU that is fast enough to process the data, and since 2008/2012 the watts per transistor switch have stayed the same.
That means the A/D conversion bandwidth and processing capability have been limited by basic thermodynamic / theoretical reasons since 2008.
So although the Chinese / Russians are behind the USA by 10-12 years, the Russians / Chinese now have the same technology as the USA.

That killed the whole software-defined radar idea.
 

Tam

Brigadier
Registered Member
That degrades the array performance drastically.

Losing 75% of the radar's emitting power has the same effect as losing half of the aperture to other purposes.

It makes more sense to skip one frame and use it to communicate with everyone.

And it is NOT an advantage of AESA; a PESA can do this as well - for example, the Irbis has two tubes.

And it is possible to use the emitter of a subarray during the listening period, but of course that will degrade the range as well.

The question isn't whether or not there is degradation. The question is how long, how many extremely tiny slivers of real time, you have this degradation for.

How many times do you need to talk to the missile? Do you need to do it continually, or can it be in a small, finite set of intervals? If it is the latter, the main array keeps working 100% with its duty cycles intact. Then you "steal" a short period of maybe 6 to 50 microseconds to send a coded message, and go back to the regular duty cycle. An analogy is watching a TV program. The main TV program is the regular duty cycles, but you have short breaks for the commercials, and it is during those short breaks that you message your friend - the break is the cycle used to send the communication message.
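
Putting rough numbers on the "steal" - every figure below is an assumption picked only to show the scale, not a real system's parameters:

```python
# Rough duty-budget arithmetic for "stealing" datalink slots from the array.
# Every number here is an assumption chosen only to illustrate the scale.
slot_us         = 50     # one stolen comm slot (upper end of the 6-50 us range)
updates_per_sec = 10     # assumed uplink messages per second per missile
missiles        = 4      # assumed missiles being guided at once

stolen_us_per_sec = slot_us * updates_per_sec * missiles
fraction = stolen_us_per_sec / 1_000_000     # share of one second of array time

print(f"{stolen_us_per_sec} us of comm per second -> {fraction:.4%} of array time")
# 2000 us per second -> 0.2000% of array time
```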

And this is the point where the whole AESA idea hit a brick wall in 2004.

There is no CPU that is fast enough to process the data, and since 2008/2012 the watts per transistor switch have stayed the same.
That means the A/D conversion bandwidth and processing capability have been limited by basic thermodynamic / theoretical reasons since 2008.
So although the Chinese / Russians are behind the USA by 10-12 years, the Russians / Chinese now have the same technology as the USA.

That killed the whole software-defined radar idea.

CPUs are never fast enough for direct signal processing. That is why the SoC is hardwired. You do it through FPGA, which is why you see an FPGA at the heart of every module, and why FPGA makers are at the heart of the defense industry. You are not talking about Intel here but Xilinx and Altera.
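
A back-of-envelope sketch of why this kind of front-end processing ends up in FPGA or ASIC fabric rather than in software on a CPU; the sample rate, tap count, and channel count are assumptions for illustration only:

```python
# Back-of-envelope on why element-level filtering is done in hardwired fabric.
# Sample rate, filter length, and channel count are illustrative assumptions.
sample_rate_hz = 100e6    # assumed A/D sample rate per channel
fir_taps       = 64       # assumed FIR filter length
channels       = 32       # assumed digitised channels per subarray

macs_per_sec = sample_rate_hz * fir_taps * channels
print(f"{macs_per_sec:.2e} multiply-accumulates per second")   # ~2e11 MAC/s

# A parallel FPGA/ASIC datapath does one tap per hardwired multiplier per clock;
# a general-purpose CPU doing this serially in software would need hundreds of
# giga-operations per second for this one task alone.
```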

Even if China has access to Xilinx and Altera parts, which it does, the US would quickly get suspicious if volumes were "allocated" to PLA military use. Besides, the Chinese government won't allow this because it wants to be self-sufficient and would rule that all parts must be made in China. So you have to use a local FPGA maker like Gowin or some other, perhaps with a custom fab. The technology won't be as good as the best Xilinx or Altera now have, but it can still meet the requirements until the local FPGA maker moves on to its next-generation product, and the local FPGA maker gets better with each generation.

Or, if quantities are massive enough, you skip the FPGA entirely and just burn everything into an ASIC. An ASIC is far less flexible, but ASICs are cheaper, run faster, and run definitely cooler. Huawei has moved to using ASICs in its base stations, while Nokia still uses FPGAs. This has paid off in Huawei's base stations running cheaper, faster, and cooler. If your quantities are massive enough, you can resort to using an ASIC in every module, and this helps if you want to put an AESA on practically every radar across every branch of the PLA.

You would only use FPGAs for the AESA prototype, then go ASIC for mass production.
 

Anlsvrthng

Captain
Registered Member
The question isn't whether or not there is degradation. The question is how long, how many extremely tiny slivers of real time, you have this degradation for.

How many times do you need to talk to the missile? Do you need to do it continually, or can it be in a small, finite set of intervals? If it is the latter, the main array keeps working 100% with its duty cycles intact. Then you "steal" a short period of maybe 6 to 50 microseconds to send a coded message, and go back to the regular duty cycle. An analogy is watching a TV program. The main TV program is the regular duty cycles, but you have short breaks for the commercials, and it is during those short breaks that you message your friend - the break is the cycle used to send the communication message.

What are you trying to prove?
Is it AESA vs PESA?
Or are you trying to convince me that it is not an issue for a radar operator sitting in a ship to lose a few seconds of early warning time to detect a supersonic sea-skimming missile targeting him in the thin metal hull of a destroyer?

CPUs are never fast enough for direct signal processing. That is why the SoC is hardwired. You do it through FPGA, which is why you see an FPGA at the heart of every module, and why FPGA makers are at the heart of the defense industry. You are not talking about Intel here but Xilinx and Altera.

Even if China has access to Xilinx and Altera parts, which it does, the US would quickly get suspicious if volumes were "allocated" to PLA military use. Besides, the Chinese government won't allow this because it wants to be self-sufficient and would rule that all parts must be made in China. So you have to use a local FPGA maker like Gowin or some other, perhaps with a custom fab. The technology won't be as good as the best Xilinx or Altera now have, but it can still meet the requirements until the local FPGA maker moves on to its next-generation product, and the local FPGA maker gets better with each generation.

Or, if quantities are massive enough, you skip the FPGA entirely and just burn everything into an ASIC. An ASIC is far less flexible, but ASICs are cheaper, run faster, and run definitely cooler. Huawei has moved to using ASICs in its base stations, while Nokia still uses FPGAs. This has paid off in Huawei's base stations running cheaper, faster, and cooler. If your quantities are massive enough, you can resort to using an ASIC in every module, and this helps if you want to put an AESA on practically every radar across every branch of the PLA.

You would only use FPGAs for the AESA prototype, then go ASIC for mass production.
The problem of the Pentium 4 is one of the biggest issues for the F-35, and generally for the US military.
Back before 2004, when the F-35 was designed, the competitive edge of the USA was 10-15 years, and that meant 10-50x higher processing capacity than any competitor for the same money.

It made the USA's advantage in many areas extreme, and it made them very confident.

They built this expectation into the design of the new systems, for example the F-35.
Its radar is small, half as big as the F-22's or the Su-35's.
It means that to keep up with the latter it has to be eight times more powerful, or has to have an order of magnitude higher sensitivity / capability.

On paper it was fine; the Intel roadmap forecast 7-9GHz CPUs by 2007/8, and 100GHz by the middle of the 2010s.

So by the original plans, when the F-35 entered battle it should have had cheap, high-power SDR Tx/Rx modules on each dipole in the radar, soaking up data at extreme speed and creating radar beams never seen before - multi-frequency, multi-directional beams in each pulse - using cutting-edge 30-40GHz general-purpose CPUs to find correlations and run extremely sophisticated (but computationally expensive) algorithms to map the battle area on short notice, with real-time holographic methods.

But it never happened.

The P4 failed; its pipeline was designed for 7GHz and up, but that never materialised, so Intel brought back the old CPU designs and refined them at 3GHz.


And the advantage of the USA in semiconductors melted from the order(s)-of-magnitude range in the 80s/90s/00s to the single-to-double-digit percentage range now.

If the chaps had known this in 2000, one of the first things would have been to design the F-35 with an at least 50%, but preferably 100%, bigger radar, to have a chance against the Chinese/Russian planes.
 

Tam

Brigadier
Registered Member
What are you trying to prove?
Is it AESA vs PESA?
Or are you trying to convince me that it is not an issue for a radar operator sitting in a ship to lose a few seconds of early warning time to detect a supersonic sea-skimming missile targeting him in the thin metal hull of a destroyer?

This kind of update only matters for long-range targeting, like against armed bomber aircraft.

For a supersonic sea-skimming missile that appears right out of the radar and sea horizon, with only seconds to react, the intercepting missile may not need updates. Instead, the combat system will already have punched the data into its guidance system at launch.

If we have a long-range targeting scenario, how many updates will a missile require? Given the time a missile takes to fly, let's say, 100km, how many updates are necessary? You are sending updates like dots that form a path, and the missile traces a path that connects the dots towards the target. How much time in microseconds will each update require, and how many updates are needed?

The operator will not notice losing a few 100-microsecond periods of slightly degraded radar performance, using your example of a 100 microsecond message.
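
To put rough numbers on it - the missile speed, update rate, and message length below are assumptions for scale only:

```python
# Rough arithmetic on the uplink cost of a long-range engagement.
# Missile speed, update rate, and message length are assumptions for scale only.
range_km       = 100
missile_speed  = 1000    # m/s, roughly Mach 3
update_rate_hz = 1       # assumed one midcourse update per second
message_us     = 100     # the 100 microsecond message from the earlier example

flight_time_s  = range_km * 1000 / missile_speed        # ~100 s
updates        = flight_time_s * update_rate_hz         # ~100 updates
uplink_time_us = updates * message_us                   # ~10,000 us in total

print(f"{flight_time_s:.0f} s flight, {updates:.0f} updates, "
      f"{uplink_time_us / 1e6:.4f} s of uplink time over the whole engagement")
```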

The problem of the Pentium 4 is one of the biggest issues for the F-35, and generally for the US military.
Back before 2004, when the F-35 was designed, the competitive edge of the USA was 10-15 years, and that meant 10-50x higher processing capacity than any competitor for the same money.

It made the USA's advantage in many areas extreme, and it made them very confident.

They built this expectation into the design of the new systems, for example the F-35.
Its radar is small, half as big as the F-22's or the Su-35's.
It means that to keep up with the latter it has to be eight times more powerful, or has to have an order of magnitude higher sensitivity / capability.

On paper it was fine; the Intel roadmap forecast 7-9GHz CPUs by 2007/8, and 100GHz by the middle of the 2010s.

So by the original plans, when the F-35 entered battle it should have had cheap, high-power SDR Tx/Rx modules on each dipole in the radar, soaking up data at extreme speed and creating radar beams never seen before - multi-frequency, multi-directional beams in each pulse - using cutting-edge 30-40GHz general-purpose CPUs to find correlations and run extremely sophisticated (but computationally expensive) algorithms to map the battle area on short notice, with real-time holographic methods.

But it never happened.

The P4 failed; its pipeline was designed for 7GHz and up, but that never materialised, so Intel brought back the old CPU designs and refined them at 3GHz.


And the advantage of the USA in semiconductors melted from the order(s)-of-magnitude range in the 80s/90s/00s to the single-to-double-digit percentage range now.

If the chaps had known this in 2000, one of the first things would have been to design the F-35 with an at least 50%, but preferably 100%, bigger radar, to have a chance against the Chinese/Russian planes.

A general-purpose CPU like that can be used to run the computing system of the radar on the back end. But the modules themselves have SoCs with embedded CPUs. Noise processing and filtering, things like that, are done hardwired and microcoded using ASIC or FPGA. These operations need to be so fast that they cannot be software coded and processed by a CPU.
 

Anlsvrthng

Captain
Registered Member
If we have a long-range targeting scenario, how many updates will a missile require? Given the time a missile takes to fly, let's say, 100km, how many updates are necessary? You are sending updates like dots that form a path, and the missile traces a path that connects the dots towards the target. How much time in microseconds will each update require, and how many updates are needed?
It is an interesting question.

The method you describe is the naive approach to kinematic systems.

I don't have experience with missiles, but I know quite well the theory and practice of robot / machining centre / lathe kinematics.

Every path followed needs to have a continuous second derivative, and the maximum acceleration caps that second derivative.

Now, this means that the missile can't have "waypoints", because a corner would need infinite power to make :P

Most likely the missile gets an elliptic curve (with modifications), tailored to the given missile type, at launch.

During flight, the missile calculates its error against this path and compensates, while the radar checks the target and the missile, recalculates the trajectory, and feeds it back to the missile each time.
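
A very rough sketch of the kind of re-planning I mean. A quintic polynomial matched to position, velocity, and acceleration at both ends is one standard way to keep the second derivative continuous when a new aim point arrives; every number below is invented for illustration and has nothing to do with a real missile.

```python
import numpy as np

# Minimal sketch of the "no waypoints" point: a reference path whose second
# derivative stays continuous. A quintic matched to position, velocity and
# acceleration at both ends guarantees that.
def quintic(p0, v0, a0, p1, v1, a1, T):
    """Coefficients of p(t) = c0 + c1*t + ... + c5*t**5 on [0, T]."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ])
    b = np.array([p0, v0, a0, p1, v1, a1], dtype=float)
    return np.linalg.solve(A, b)

# A new aim point arrives from the radar: re-plan starting from the missile's
# *current* position, velocity and acceleration, so there is no corner to turn.
c = quintic(p0=0.0, v0=300.0, a0=0.0, p1=5000.0, v1=300.0, a1=0.0, T=15.0)
t = np.linspace(0.0, 15.0, 5)
accel = 2*c[2] + 6*c[3]*t + 12*c[4]*t**2 + 20*c[5]*t**3
print(accel)   # finite, smoothly varying lateral acceleration demand
```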

Most likely the missile feeds back the error data, and possibly the sensor data as well.

This can be used to improve the precision of later munitions, by using the data to find issues / manufacturing deviations in the missile components.
The radar must tell the missile where to look for the target, i.e. where to electronically steer the passive or active seeker head.

As the missile gets closer, the passive head starts to "see" the target, which means the missile has to feed the data back to the radar, and that can modify the trajectories of the other interceptors as well.

It means that in the last 30 seconds of an interception the radar has to communicate with up to 4 missiles, getting real-time radar data from all of them and feeding back the continuously modified trajectories, while at the same time the target can use jamming aimed at both the radar echo AND the communication channels.
A general-purpose CPU like that can be used to run the computing system of the radar on the back end. But the modules themselves have SoCs with embedded CPUs. Noise processing and filtering, things like that, are done hardwired and microcoded using ASIC or FPGA. These operations need to be so fast that they cannot be software coded and processed by a CPU.

Intel is the frontier.

If they can't increase the frequency and decrease the per-transistor power consumption, then the DSP makers can't either.

In 2004 transistors hit the speed ceiling, and later in 2008-9 they hit the watt-per-switch ceiling. That was the story behind the P4.

Which means analogue computers and analogue design rule again :)
 

Tam

Brigadier
Registered Member
It is an interesting question.

The method you describe is the naive approach to kinematic systems.

I don't have experience with missiles, but I know quite well the theory and practice of robot / machining centre / lathe kinematics.

Every path followed needs to have a continuous second derivative, and the maximum acceleration caps that second derivative.

Now, this means that the missile can't have "waypoints", because a corner would need infinite power to make :P

Most likely the missile gets an elliptic curve (with modifications), tailored to the given missile type, at launch.

During flight, the missile calculates its error against this path and compensates, while the radar checks the target and the missile, recalculates the trajectory, and feeds it back to the missile each time.

Most likely the missile feeds back the error data, and possibly the sensor data as well.

This can be used to improve the precision of later munitions, by using the data to find issues / manufacturing deviations in the missile components.
The radar must tell the missile where to look for the target, i.e. where to electronically steer the passive or active seeker head.

As the missile gets closer, the passive head starts to "see" the target, which means the missile has to feed the data back to the radar, and that can modify the trajectories of the other interceptors as well.

It means that in the last 30 seconds of an interception the radar has to communicate with up to 4 missiles, getting real-time radar data from all of them and feeding back the continuously modified trajectories, while at the same time the target can use jamming aimed at both the radar echo AND the communication channels.

What you describe is better known from TVM missiles. There are also command-guided ones, like the HQ-7, where the radar and EO on board the ship control the missile until it hits the target. I think the Osa works that way too.

The difference between SARH and TVM is that with SARH the missile doesn't return its own radar data back to the radar.


Intel is the frontier.

If they can't increase the frequency and decrease the per-transistor power consumption, then the DSP makers can't either.

In 2004 transistors hit the speed ceiling, and later in 2008-9 they hit the watt-per-switch ceiling. That was the story behind the P4.

Which means analogue computers and analogue design rule again :)

Software-level code execution is just not fast enough for DSP. It needs to be microcoded and hardwired to get the necessary speed. That is why FPGAs are heavily involved inside each and every module. A general CPU like an Intel Pentium chip belongs downstream of the array, at the back end of the radar, controlling the radar in general.
 

Tam

Brigadier
Registered Member
The radar must tell the missile where to look for the target, i.e. where to electronically steer the passive or active seeker head.

This. Actually... you already described it in your own previous post, when you talked about monopulse. The radar in the seeker head tends to have a wide beamwidth, and it projects three or four lobes, with the lobes intersecting in the center. The missile tracks the target from its seeker head using monopulse. With semi-active homing it's called inverse monopulse, with the projection coming from the ship's illuminator and the radar receiver divided into four receiving sections for the four lobes. Before that, missiles used conical scanning, which was eventually discarded because of its vulnerability to jamming. The radar does not need to rotate the missile's head; it simply heads the missile towards the target, so the seeker array points at the target initially, and the missile begins monopulse scanning so the seeker tracks the target.
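
To make the sum-and-difference bookkeeping concrete, here is a toy sketch of four-lobe amplitude-comparison monopulse. The lobe shape and squint angle are invented for illustration, not any real seeker's figures:

```python
import numpy as np

# Toy amplitude-comparison monopulse with four squinted lobes.
# Beam shape and squint are invented purely to show the sum/difference idea.
squint = 2.0   # deg, offset of each lobe from boresight
beamw  = 6.0   # deg, width parameter of each lobe

def lobe_gain(offset_deg):
    """Gaussian approximation of one lobe's voltage gain."""
    return np.exp(-0.5 * (offset_deg / beamw) ** 2)

def monopulse_error(target_az, target_el):
    # Four lobes squinted up-left, up-right, down-left, down-right.
    ul = lobe_gain(target_az + squint) * lobe_gain(target_el - squint)
    ur = lobe_gain(target_az - squint) * lobe_gain(target_el - squint)
    dl = lobe_gain(target_az + squint) * lobe_gain(target_el + squint)
    dr = lobe_gain(target_az - squint) * lobe_gain(target_el + squint)
    s    = ul + ur + dl + dr               # sum channel
    d_az = (ur + dr) - (ul + dl)           # right minus left
    d_el = (ul + ur) - (dl + dr)           # up minus down
    return d_az / s, d_el / s              # normalised angle-error signals

print(monopulse_error(0.0, 0.0))   # on boresight -> (0.0, 0.0)
print(monopulse_error(1.0, 0.0))   # target right of boresight -> positive az error
```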
 