J-20 5th Gen Fighter Thread VIII

Derpy

Junior Member
Registered Member
If we read this as the total number of aircraft, does it actually mean that 260 - 156 = 104 J-20s were produced in a period of more than half a year but well under a year? Are we reading this wrong? That number is just too incredible.

Maybe the first two digits are indeed the batch number. In that case we have 60 rather than 104 J-20s produced within that period.
As far as we know, 0260 means it was aircraft no. 60 of batch 3. This airframe was probably produced in 2021 or earlier, and we cannot infer anything about current production or total numbers from it.
 

Stealthflanker

Senior Member
Registered Member
The RCS simulation from Aircraft 101 is interesting but should be taken with a grain of salt. My main issue is that the model of the J-20 is far more detailed than the model of the F-35, which has implications in a simulation where edge diffraction is accounted for. Aircraft 101 is apparently planning to redo the simulation with a more detailed F-35 model.

You should do that for every other "RCS figure" you find around.

The thing is, there is no real standard yet for doing RCS estimates. I am unfortunately limited by my machines, and hell, there is no real standard on polycounts either. So I just optimize wherever possible. I feel it's kind of subjective to compare models just by "looking" at them.

I have already delivered the latest "step" model, which hopefully contains the curves you have a problem with. But given the lack of a standard in the first place, there might never be a real settlement should an argument arise, which I'm kind of frustrated with. People can demand "more details", but the problem is: how much detail? How many polygons? Does the edge have to be defined in a certain way?

And of course... will I get paid more so I can afford a better computer?


The second point is that the results of these estimates CANNOT be compared with others. Most if not all papers present their results in a 2D polar format. I would love to use a similar format, and I did use it back when I was still using POFACETS, but people complained that it's not representative because some angles are not displayed.

My solution was to adopt the 120-degree (-60 to +60) x 45-degree presentation and then give the mean and median. The client (the owner of Aircraft101) then added the 20 x 20 degree sector. The 3D presentation I did will hopefully be more representative, as it covers more angles.
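For illustration, a minimal sketch of how the mean and median can be computed over such an angular grid (the values below are random placeholders, not actual simulation output); the catch is that the mean has to be taken in linear units, not in dB:

```python
import numpy as np

# Hypothetical RCS map over the frontal sector: 121 azimuth samples
# (-60 to +60 deg) x 46 elevation samples (0 to 45 deg), values in dBsm.
rng = np.random.default_rng(0)
rcs_dbsm = rng.normal(loc=-20.0, scale=8.0, size=(121, 46))

# Averaging must happen in linear units (m^2), never in dB.
rcs_m2 = 10.0 ** (rcs_dbsm / 10.0)

mean_dbsm = 10.0 * np.log10(rcs_m2.mean())
# The median commutes with the dB transform, so either order works.
median_dbsm = 10.0 * np.log10(np.median(rcs_m2))

print(f"mean   = {mean_dbsm:+.1f} dBsm")
print(f"median = {median_dbsm:+.1f} dBsm")
```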


And now the real problem: are median and average/mean enough? No.

Radar engineers who actually build radars for air defense are not necessarily interested in the graphics. They will instead build a target model in which the data is processed statistically and fitted, or have a specific equation derived for them, to obtain the PDF (Probability Density Function) or CDF (Cumulative Distribution Function) of the target RCS.
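As a rough sketch of that statistical step (the samples are placeholders, and the exponential/Swerling I fit is just one common assumption, not a claim about any particular engineer's workflow):

```python
import numpy as np
from scipy import stats

# Placeholder RCS samples in m^2, standing in for simulation output
# gathered over many aspect angles.
rng = np.random.default_rng(1)
samples_m2 = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=5000)

# Fit an exponential (Swerling I/II) model: p(sigma) = exp(-sigma/avg)/avg.
avg = samples_m2.mean()
swerling = stats.expon(scale=avg)

# PDF / CDF at a query RCS value, e.g. 0.05 m^2.
q = 0.05
print(f"PDF({q}) = {swerling.pdf(q):.3f}, CDF({q}) = {swerling.cdf(q):.3f}")

# A goodness-of-fit test shows whether the one-parameter model is adequate.
ks = stats.kstest(samples_m2, swerling.cdf)
print(f"KS statistic = {ks.statistic:.3f}")
```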

Unfortunately I am not offering such statistical processing as a service yet (yes, it's a paid service), but maybe if there were enough clients I would.
 

Deino

Lieutenant General
Staff member
Super Moderator
Registered Member
Tail number 61123, construction number CB0260.


Thanks, I only saw the c/n but wasn't aware of the s/n.
 

latenlazy

Brigadier
The first post is the old paper that compared a canard configuration against a traditional configuration on a non-specific aircraft model, concluding that the canard is not worse than the traditional layout (contrary to a great deal of Western-sourced misinformation). It has nothing to do with the J-20 other than as a proof of concept that canards are compatible with stealth.

The RCS simulation from Aircraft 101 is interesting but should be taken with a grain of salt. My main issue is that the model of the J-20 is far more detailed than the model of the F-35, which has implications in a simulation where edge diffraction is accounted for. Aircraft 101 is apparently planning to redo the simulation with a more detailed F-35 model.
Even if they redo the model, there’s not much real-world value in this exercise imo. All these models have lower resolutions than the scale of the physical interactions they’re trying to measure. If you’re trying to model EM interactions at wavelengths of 2-4 cm, you probably want a physical model with a resolution of at least that, and ideally an order of magnitude or two finer. This is like trying to see 1-meter objects from space with cameras that only have 3-meter-per-pixel resolution.
 

latenlazy

Brigadier
You should do that for every other "RCS figure" you find around.

The thing is, there is no real standard yet for doing RCS estimates. I am unfortunately limited by my machines, and hell, there is no real standard on polycounts either. So I just optimize wherever possible. I feel it's kind of subjective to compare models just by "looking" at them.
The “standard” for polycounts should be whatever polycount gets you to polygon sizes smaller (ideally by an order of magnitude or more) than the wavelengths of the beams you’re trying to model. That’s the minimum threshold for a model that actually simulates the physical interactions between an EM beam and an object. There’s a reason why most people don’t do realistic, usable RCS modeling of complex objects on a hobby computer.
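A back-of-the-envelope sketch of what that threshold implies for facet count (the λ/10 edge rule and the 150 m² surface area are illustrative assumptions):

```python
import numpy as np

# Rule of thumb: facet edges ~10x smaller than the simulated wavelength.
wavelength_m = 0.03          # X-band, ~10 GHz
max_edge_m = wavelength_m / 10.0

# Treat each facet as roughly an equilateral triangle of that edge length.
facet_area_m2 = (np.sqrt(3) / 4.0) * max_edge_m**2

surface_area_m2 = 150.0      # ballpark wetted area of a fighter-sized target
n_facets = surface_area_m2 / facet_area_m2

print(f"max edge ~ {max_edge_m * 1000:.0f} mm -> ~{n_facets:.2e} facets")
# ~3.8e7 facets: far beyond typical hobby-grade 3D models.
```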
 

Stealthflanker

Senior Member
Registered Member
The “standard” for polycounts should be whatever polycount gets you to polygon sizes smaller (ideally by an order of magnitude or more) than the wavelengths of the beams you’re trying to model. That’s the minimum threshold for a model that actually simulates the physical interactions between an EM beam and an object. There’s a reason why most people don’t do realistic, usable RCS modeling of complex objects on a hobby computer.

Do you have a source for that though? Like, say, a book?

And I think the required computing power comes down more to the method, e.g. MoM, the Method of Moments, pretty much guarantees university-level mainframe computing power.
 

latenlazy

Brigadier
Do you have a source for that though? Like, say, a book?

And I think the required computing power comes down more to the method, e.g. MoM, the Method of Moments, pretty much guarantees university-level mainframe computing power.
It’s physics. Stealth is all about concentrating reflection into narrow lobes. If the object has low resolution, your reflected lobes will be bigger. A 3.5 cm wavelength against a 7 cm facet, for example, will have a broader reflected beam than a 3.5 cm wavelength against a 2 cm facet. You don’t know whether the real object reflects better than the analogue, because the analogue is coarser than the real object.

I think the algorithms that power these simulators are obviously better today than ever before, but there’s also a reason why no one at a university is doing high-fidelity modeling of large complex shapes unless they’re a graduate student funded for a multi-year project that often also involves practical application. You may not need that intensive a simulator to get rough RCS comparisons going, but on the flip side, for every increase in polygon resolution you get a square-cube increase in vector parameters and thus a square-cube increase in needed computational power. Without a very powerful home setup, I don’t see getting to the resolution needed for meaningful results as a trivial hobby exercise.
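A crude sketch of that scaling, using dense MoM matrix storage as the yardstick (every number here is an illustrative assumption; real codes use fast multipole and similar tricks to beat the dense-matrix cost):

```python
# Rough MoM cost: unknowns scale with surface area / edge^2, the impedance
# matrix has N^2 complex entries, and a direct solve is O(N^3).
wavelength_m = 0.03          # X-band, ~10 GHz (assumed)
surface_area_m2 = 150.0      # fighter-sized target (assumed)

for refine in (5, 10, 20):                # edge length = lambda / refine
    edge = wavelength_m / refine
    n = 2 * surface_area_m2 / edge**2     # ~2 basis functions per facet (crude)
    matrix_gb = (n**2 * 16) / 1e9         # complex double = 16 bytes
    print(f"lambda/{refine}: N ~ {n:.2e}, matrix ~ {matrix_gb:.2e} GB")
# Even the coarsest case needs ~1e6 GB for a dense solve; each halving of
# edge length quadruples N and multiplies the matrix storage by 16.
```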
 

Stealthflanker

Senior Member
Registered Member
It’s physics. Stealth is all about concentrating reflection into narrow lobes. If the object has low resolution, your reflected lobes will be bigger. A 3.5 cm wavelength against a 7 cm facet, for example, will have a broader reflected beam than a 3.5 cm wavelength against a 2 cm facet. You don’t know whether the real object reflects better than the analogue, because the analogue is coarser than the real object.

Does it? The relationship is lambda/D, though. Let's say you have that 7 cm long facet and a 3.5 cm wavelength.

The main lobe width would be 0.035/0.07 = 0.5 radians, and a 3.5 cm wavelength against a 2 cm long facet would give 0.035/0.02 = 1.75 radians. So the smaller length gives the larger lobe.

Are we really talking about the same physics? Mine is a well-known equation for predicting an antenna's main lobe from its dimensions. Since RCS is basically antenna gain, and the radiation pattern is defined the same way in the far field, the relationship should be the same.
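To put the same arithmetic in code form (this is just the lambda/D rule of thumb above, not a full diffraction treatment):

```python
import math

# The lambda/D beamwidth estimate, reproduced numerically.
wavelength = 0.035                    # 3.5 cm

for d in (0.07, 0.02):                # facet lengths: 7 cm and 2 cm
    theta = wavelength / d            # main lobe width in radians
    print(f"D = {d * 100:.0f} cm -> lambda/D = {theta:.2f} rad "
          f"({math.degrees(theta):.0f} deg)")

# D = 7 cm -> 0.50 rad (29 deg); D = 2 cm -> 1.75 rad (100 deg):
# the smaller facet spreads its reflection over the wider lobe.
```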


I think the algorithms that power these simulators are obviously better today than ever before, but there’s also a reason why no one at a university is doing high-fidelity modeling of large complex shapes unless they’re a graduate student funded for a multi-year project that often also involves practical application. You may not need that intensive a simulator to get rough RCS comparisons going, but on the flip side, for every increase in polygon resolution you get a square-cube increase in vector parameters and thus a square-cube increase in needed computational power. Without a very powerful home setup, I don’t see getting to the resolution needed for meaningful results as a trivial hobby exercise.

Then what?
 

taxiya

Brigadier
Registered Member
Do you have a source for that though? Like, say, a book?
It is the same principle as the discrete representation of an analog signal (A/D conversion). The minimum digitization sampling rate is twice the maximum component frequency of the analog signal. Since an analog signal's bandwidth is infinite in theory, the digitization sampling rate should be infinite too, but in practical terms we take the highest frequency whose power is detectable as the maximum, and therefore we have a minimum sampling rate. If the hardware allows, the higher the rate the better, which pushes the high-end boundary further. Here the signal is a function of time.

This principle is universal beyond signal processing. To faithfully represent a spatial object, just replace time with distance; everything else is the same.

The principle is that a discrete representation of a continuous curve (information) can be achieved faithfully (without loss of information) by a series of discrete sampling points (the polycount) if the sampling frequency (in time or space) is two times or more the component "frequency" of the continuous curve (over a period of time or distance).

If you want to read up on it, search for "Information Theory" (the Nyquist-Shannon sampling theorem).
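A tiny demonstration of the principle (the frequencies are arbitrary examples): a sine sampled above twice its frequency is recovered, while one sampled below aliases to a lower apparent frequency.

```python
import numpy as np

f_signal = 9.0                          # Hz; Nyquist rate is 18 Hz

for f_sample in (30.0, 12.0):           # one rate above, one below Nyquist
    t = np.arange(0, 1, 1 / f_sample)   # one second of samples
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / f_sample)
    peak = freqs[spectrum.argmax()]
    print(f"fs = {f_sample:4.0f} Hz -> apparent frequency ~ {peak:.0f} Hz")

# fs = 30 Hz recovers ~9 Hz; fs = 12 Hz aliases the tone to ~3 Hz (12 - 9).
```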
 

latenlazy

Brigadier
Does it? The relationship is lambda/D, though. Let's say you have that 7 cm long facet and a 3.5 cm wavelength.

The main lobe width would be 0.035/0.07 = 0.5 radians, and a 3.5 cm wavelength against a 2 cm long facet would give 0.035/0.02 = 1.75 radians. So the smaller length gives the larger lobe.

Are we really talking about the same physics? Mine is a well-known equation for predicting an antenna's main lobe from its dimensions. Since RCS is basically antenna gain, and the radiation pattern is defined the same way in the far field, the relationship should be the same.
You can get *a* result, but it's not the *same* result as for the real-world object you're analogizing. Those equations need to interact with physical geometric parameters to spit out meaningful results for a 3D object, since the object of interest is not a point or a line or a flat plate. If your model object is coarser than the real object you're analogizing, so too will be your model output. And if the level of difference needed for significance is tiny (in this case the difference between 0.01 and 0.001 is massive in significance but tiny in absolute quantity), your coarser resolution overwhelms your level of significance.
Then what?
If everything could be achieved with hobby enthusiasm, we wouldn't need expensive equipment and corporate financing to build all those cool engineering projects.
 