J-20 5th Generation Fighter VII

Status
Not open for further replies.

stannislas

Junior Member
Registered Member
I think in general there's some value in this kind of exercise, but where I hesitate to go as far as "represent some level of truth" is that 1) the accuracy of simulations is going to be pretty sensitive to model roughness, and 2) as stealth materials and modeling have gotten better, general shaping has become less deterministic in assessing RCS capability. It's still the principal factor, but it accounts for less than it used to.
Well, the thing is that we will never know the true RCS: none of the US, China, or Russia would give out detailed RCS figures, only a rough number, and compared with those numbers this work definitely represents more of a 'level of truth'. Like the very early AU work, details such as the reflection angles and the comparison between Case 1 and Case 2 mean more than just an RCS number; we may even reach conclusions such as the J-20 performing better at long range against radar search mode, whereas the F-35 does better against short-range 'staring' mode.

Also, things like roughness or craftsmanship may not even be modelable in reality due to their complexity, but that doesn't mean a model has no value or 'truth value'. In fact, building a model is common in almost every branch of science and engineering. There are numerous cases where important variables could not be modelled and were instead estimated from experience. That's why I said:
The J-20 has roughly the same stealth level as the F-35 and Su-57; despite being the worst of the three, it's not as horrible as I thought
You can take it further: the F-35 probably has the best craftsmanship, so its RCS would be closer to this number; the J-20 may be slightly worse, so give it, say, a 5% penalty; the Su-57 has visible fan blades and tons of uneven screws, so penalize it even more. This would eventually become a judgement call, but it at least gives us a baseline for evaluating the case, and that is the value, or the truth, of this work.
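For what it's worth, the "penalty" idea above can be written down as a tiny sketch. Every number here is invented purely to show the shape of the reasoning; none of them are real RCS values:

```python
# Hypothetical sketch of the "craftsmanship penalty" idea: start from a
# shaping-only RCS figure and apply a judgement-call multiplier per
# aircraft. All numbers below are invented for illustration only.

def adjusted_rcs(shaping_rcs_m2, penalty):
    """Apply a multiplicative craftsmanship penalty to a baseline RCS."""
    return shaping_rcs_m2 * (1.0 + penalty)

# Invented shaping-only baseline (m^2) and penalties mirroring the post's
# reasoning: F-35 best build quality, J-20 ~5% worse, Su-57 worse still.
baseline = {"F-35": 0.0010, "J-20": 0.0010, "Su-57": 0.0010}
penalty = {"F-35": 0.00, "J-20": 0.05, "Su-57": 0.50}

for jet in baseline:
    print(jet, adjusted_rcs(baseline[jet], penalty[jet]))
```

The point of the sketch is only that the penalty step is a separate, explicit judgement layered on top of the simulated number, so it can be argued about on its own.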
 

BoraTas

Major
Registered Member
btw, someone did a lot of work on an RCS analysis of the J-20. Keep in mind these figures are posted without considering the RAM layer. He clearly stated how he did the simulation, so you can make your own judgement about these things.
Here is the end result for the J-20 in his simulation:
j-20-clean-with-mnzn-ram.png

vs F-35
f-35clean-with-mnzn-ram-1.png

Based on his analysis, you'd see that the average RCS of the J-20 fared the worst against the F-35A at X-band. From VHF to L-band, the numbers look quite comparable. It also seems the F-35A is very much focused on just S- to X-band radar directly in front of its nose; its stealth gets a lot worse as frequency decreases and as angles move away from the centre.
I think this is as close as we civilians will get to true numbers for a while. He even modelled small body features like the antenna hump, and complex things like the S-duct and non-specular effects. It is certainly not fully accurate; to start with, he doesn't model RAM. But the work is detailed and the software used is really powerful.
 

latenlazy

Brigadier
Well, the thing is that we will never know the true RCS: none of the US, China, or Russia would give out detailed RCS figures, only a rough number, and compared with those numbers this work definitely represents more of a 'level of truth'. Especially with the very early AU work on this, details such as the reflection angles and the comparison between Case 1 and Case 2 mean more than just an RCS number; we may even reach conclusions such as the J-20 performing better at long range against radar search mode, whereas the F-35 does better against short-range 'staring' mode.

Also, as you mentioned, things like roughness or craftsmanship are not even modelable, but that doesn't mean a model has no value or 'truth value'. In fact, building a model is common in almost every branch of science and engineering. There are numerous cases where important variables could not be modelled and were instead given numbers from experience. That's why I said:

You can take it further: the F-35 probably has the best craftsmanship, so its RCS would be closer to this number; the J-20 may be slightly worse, so give it, say, a 5% penalty; the Su-57 has visible fan blades and tons of uneven screws, so penalize it even more. This would become a judgement call, but it at least gives us a baseline for evaluating the case, and that is the value, or the truth, of this work.
Model roughness is a term for how high the resolution of your simulation is. It’s not about craftsmanship. In the real world the sizes of your interacting elements are discrete: EM waves have a discrete wavelength, radar beams have a discrete beam density and shape profile, materials induce specific kinds of scatter patterns, the fineness of a shape affects reflection concentration and angles, and so on. A model has to make assumptions about these discrete parameters, and if the model is too rough (even just getting the relative scales of the interactions between specific model elements wrong) you will end up with analysis that tells you very little about reality. And given that the interacting elements are *not* working at eye-level resolutions, a model that doesn’t look rough to the eye doesn’t validate that its roughness is low enough to generate meaningful output.

A simpler way of explaining this: if you want to calculate the aerodynamics of a golf ball by simply putting a 3D model of a ball into a CFD simulator with very low vector resolution, you’re not going to get much useful insight into how golf balls actually work aerodynamically. Or think about how a super-smooth ball might deflect a stream of water differently from a ball with higher roughness, or how either ball might deflect water differently depending on how big, dense, or cohesive the stream is.
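The resolution point can be made concrete with a toy calculation (nothing to do with any real RCS code): approximate a circle's cross-sectional area with an inscribed regular polygon and watch the error shrink as the facet count grows. A coarse "mesh" misses a sizeable fraction of the true value even though the shape still looks like a circle:

```python
import math

def polygon_area(n_facets, radius=1.0):
    """Area of a regular n-gon inscribed in a circle of the given radius."""
    return 0.5 * n_facets * radius ** 2 * math.sin(2 * math.pi / n_facets)

true_area = math.pi  # exact cross-section of the unit circle

# The coarser the discretization, the larger the underestimate.
for n in (6, 24, 96, 384):
    err = (true_area - polygon_area(n)) / true_area
    print(f"{n:4d} facets: {err:.2%} underestimate")
```

At 6 facets the error is well over 15%; by a few hundred facets it is a small fraction of a percent. Real EM solvers are far more subtle, but the basic sensitivity to element count is the same kind of thing.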

In my view these kinds of hobby model exercises are simply far too rough to yield any usable insights beyond confirming “J-20 does in fact employ stealth shaping”. Your mileage may vary and everyone can draw their own conclusions about how much to take away from these exercises. I’m just sharing the reasons behind my opinion that I personally don’t find much that advances the conversation here.
 

stannislas

Junior Member
Registered Member
Model roughness is a term for how high the resolution of your simulation is. It’s not about craftsmanship. In the real world the sizes of your interacting elements are discrete: EM waves have a discrete wavelength, radar beams have a discrete beam density, materials induce specific kinds of scatter patterns, the fineness of a shape affects reflection concentration and angles, and so on. A model has to make assumptions about these discrete parameters, and if the model is too rough (even just getting the relative scales of the interactions between specific model elements wrong) you will end up with analysis that tells you very little about reality.

A simpler way of explaining this: if you want to calculate the aerodynamics of a golf ball by simply putting a 3D model of a ball into a CFD simulator with very low vector resolution, you’re not going to get much useful insight into how golf balls actually work aerodynamically.

In my view these kinds of hobby model exercises are simply far too rough to yield any usable insights beyond confirming “J-20 does in fact employ stealth shaping”.
lol, an idealized rough-case simulation is just a hobby model, seriously? You do understand that all the things you mentioned in the first paragraph could be estimated or modelled, right? If not, then numbers from experience could be given.
Let me put it this way: even if the test method is flawed and the 3D model is too rough in resolution, as long as the author is willing to hold the same standard against every test subject, it would establish a baseline, and the rest is a matter of how much further work he or she is willing to put in to make the numbers better.

Your mileage may vary and everyone can draw their own conclusions about how much to take away from these exercises. I’m just sharing the reasons behind my opinion that I personally don’t find much that advances the conversation here.
This is the only part I would agree with; it's meaningless to continue this kind of discussion if our fundamental assessments of the value of an idealized rough model differ.
 

latenlazy

Brigadier
lol, all I can say is you clearly have very little idea of how a simulation model is used as an evaluation or estimation tool in an industrial/commercial environment.
An idealized rough-case simulation is just a hobby model, seriously? You do understand that all the things you mentioned in the first paragraph could be estimated or modelled, right? If not, then numbers from experience could be given.
I’m not saying the work that went into this model wasn’t significant. But given the number of parameters and conditions that seem to be accounted for in this model, I stand by the view that this is a hobby model. All the things I mentioned *are* being estimated in this model to some extent. My point is that if the dimensional elements of the model’s factors are too rough, the output doesn’t really tell you much, even if you account for a decent number of factors, because adding factors doesn’t by itself get around how much resolution you model those factors with. “Could” is a bit of an irrelevant point here: I’m not saying models can’t account for the points I’m making; I’m questioning whether *this* particular model does so sufficiently.


Let me put it this way: even if the test method is flawed and the 3D model is too rough in resolution, as long as the author is willing to hold the same standard against every test subject, it would establish a baseline, and the rest is a matter of how much further work he or she is willing to put in to make the numbers better.

If your camera resolution is too low, using the same lens against different objects won’t tell you anything meaningful about the comparison between those objects. The same principle applies when model resolution is too low. You can call that a “baseline”, but that doesn’t make it a *meaningful* baseline.
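The camera analogy is easy to demonstrate in a few lines of Python: once two genuinely different fine structures are averaged down to a low enough "resolution", they become indistinguishable, so comparing them at that resolution says nothing. (A toy sketch, not a claim about any specific RCS model.)

```python
def downsample(signal, factor):
    """Crude low-resolution view: average consecutive blocks of samples."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

smooth = [1.0] * 8       # a smooth "object"
bumpy = [0.5, 1.5] * 4   # same average, very different fine structure

print(downsample(smooth, 4))  # [1.0, 1.0]
print(downsample(bumpy, 4))   # [1.0, 1.0] -- the difference has vanished
```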
 

stannislas

Junior Member
Registered Member
I’m not saying the work that went into this model wasn’t significant. But given the number of parameters and conditions that seem to be accounted for in
If your resolution is too low, using the same lens against different objects won’t tell you anything meaningful about the comparison between objects. You can call that a “baseline”, but that doesn’t make it a meaningful baseline.
What makes you think this model is too rough to be considered useful? I'm not saying it is or isn't, just curious, because some of the things you mentioned, like discrete wavelength, beam density, or the fineness of a shape affecting reflection concentration and angles, are either naturally discrete in a computer simulation or fall under what I called 'the extra mile to make the numbers better'. I haven't seen anything so rough that it makes this a 'golf ball'.
 

latenlazy

Brigadier
What makes you think this model is too rough to be considered useful? I'm not saying it is or isn't, just curious, because some of the things you mentioned, like discrete wavelength, beam density, or the fineness of a shape affecting reflection concentration and angles, are either naturally discrete in a computer simulation or fall under what I called 'the extra mile to make the numbers better'. I haven't seen anything so rough that it makes this a 'golf ball'.
Golf ball was just an analogy; I’m not saying this model is *that* rough. But it does look pretty rough based on the images of the simulation output provided in the post. Judging from the RCS scatter renders and the J-20 model used for the simulation, either the J-20 model, or the beam profile, or both are modelled as visibly discrete polygons rather than continuous shapes. This suggests the model being used has a lower dimensional or element count. That is done to reduce the computational load of the model, which is understandable, but it’s also what makes me hesitate, on grounds of model resolution, to draw strong conclusions.
 

stannislas

Junior Member
Registered Member
Golf ball was just an analogy; I’m not saying this model is *that* rough. But it does look pretty rough based on the images of the simulation output provided in the post. Judging from the RCS scatter renders and the J-20 model used for the simulation, either the J-20 model, or the beam profile, or both are modelled as discrete polygons rather than continuous shapes. That is done to reduce the computational load of the model, which is understandable, but it’s also what makes me hesitate, on grounds of model resolution, to draw strong conclusions.
lol, if it's that level of overall impression, then fair enough; I can see where you were coming from.

My personal experience with these models is: don't be too harsh about some method flaws or missing details. Even the most sophisticated models, run on the most powerful machines, show consistent, large errors between results and reality, sometimes 20-40% in certain high-tech industries. Every engineer and stakeholder knows this, so they prefer to replace the most sophisticated models with simple ones, and the results, surprisingly or not, don't differ much in terms of trends, characteristics, relative comparisons, etc.
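That last point, that a model with a large but consistent bias can still preserve trends and relative comparisons, can be shown in a toy snippet (all numbers invented): a uniform multiplicative error leaves the ranking of the subjects untouched.

```python
# Invented "true" values for three hypothetical subjects.
true_values = {"A": 10.0, "B": 7.0, "C": 3.0}

# Suppose a simple model overestimates everything by a uniform 35%.
bias = 1.35
modelled = {name: value * bias for name, value in true_values.items()}

# Rank the subjects by true value and by modelled value.
rank_true = sorted(true_values, key=true_values.get)
rank_model = sorted(modelled, key=modelled.get)
print(rank_true == rank_model)  # True: ~35% absolute error, trend intact
```

Of course, this only holds when the error really is consistent across subjects, which is exactly what is in dispute when craftsmanship or materials differ between aircraft.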
 

latenlazy

Brigadier
lol, if it's that level of overall impression, then fair enough; I can see where you were coming from.

My personal experience with these models is: don't be too harsh about some method flaws or missing details. Even the most sophisticated models, run on the most powerful machines, show consistent, large errors between results and reality, sometimes 20-40% in certain high-tech industries. Every engineer and stakeholder knows this, so they prefer to replace the most sophisticated models with simple ones, and the results, surprisingly or not, don't differ much in terms of trends, characteristics, relative comparisons, etc.
Well, I’m not trying to be harsh to the person who chose to do this exercise. I’m just cautioning against drawing any strong conclusions about it.

Like sure, in real life R&D work engineers prefer simpler over more sophisticated models, but that’s because the point of the model is to provide a study tool to explore the general characteristics of your system of interest. That kind of simulation work is just very different from what we seem to be trying to do here, which is to use simulation as an analytical tool to substitute for lack of better methods to derive conclusions about performance.
 