PLAN Aircraft Carrier programme...(Closed)

Status
Not open for further replies.

dingyibvs

Senior Member
It's been seriously examined; what's your response?

You said it yourself that the details, such as the mean and distribution of failures, are irrelevant while expert opinions are what matter, so why insist on details? It's a pretty simple question; I await your answers.
 

Brumby

Major
You just said it yourself: "Mean and distribution are irrelevant when the people who know what they want out of the testing are saying the results are not satisfactory." So why the insistence on "evidence of fact"? What does it matter when people who know what they want out of the testing (and I'd certainly say Admiral Ma is such a person) are saying that the results are satisfactory?

Since you want to labour on this point, I will oblige by demonstrating that you are committing the fallacy of equivocation.

Outside of lab conditions, there is a testing path that Jeff has highlighted.
First, you have dead-load testing. How much testing is actually required? I have no idea, but presumably it is a function of scope, issues encountered, and the reliability of the test data generated, among other things. The US to date has conducted more than 3,000 dead-load launches. Needless to say, that is better than one. How much dead-load testing has Mr. Ma conducted? So far we don't even know whether he has tested anything outside of lab conditions.

Facts matter because information can be put in perspective. The achieved mean cycle of 240 came from a sample of 1,967 dead-load launches, and the target is five times that figure, i.e. 1,200 mean cycles between failure. Translated: on an average day a carrier can launch 120 sorties over a 12-hour period, so a target of 1,200 cycles is one failure every 10 days, versus the one failure every 2 days achieved in testing. The US Navy doesn't accept such a failure rate; it is a matter of expectation. Others might find it acceptable. So when the test results are stated to be unsatisfactory, we know what that actually means in context. When Mr. Ma says he is satisfied, please explain what that satisfaction is based on.

We also know that US testing has progressed to actual aircraft launches: 452 of them, under various loadings. The only reported issue is excessive release dynamics, which also means there are no other known issues. In contrast, where exactly can you place the testing to date on the Chinese side? Facts do matter.
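The arithmetic in the post above can be checked with a short sketch. All inputs (the 240 achieved mean cycles, the five-times target, the 120-sorties-per-day rate) are taken from the post itself and are not independently verified:

```python
# Reliability figures as quoted in the post above (not independently verified).
achieved_mcbf = 240               # mean cycles between failure achieved in dead-load testing
target_mcbf = 5 * achieved_mcbf   # the stated target is five times the achieved figure
sorties_per_day = 120             # assumed sortie rate over a 12-hour flying day

# Convert cycles between failures into days between failures at that sortie rate.
days_per_failure_target = target_mcbf / sorties_per_day    # 10.0 days
days_per_failure_actual = achieved_mcbf / sorties_per_day  # 2.0 days

print(f"Target:   {target_mcbf} cycles -> one failure every {days_per_failure_target:.0f} days")
print(f"Achieved: {achieved_mcbf} cycles -> one failure every {days_per_failure_actual:.0f} days")
```

This reproduces the post's "one failure every 10 days versus one failure every 2 days" comparison directly from the quoted figures.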
 

Blitzo

Lieutenant General
Staff member
Super Moderator
Registered Member
Mate,
Not so quick. This is what you said "What I draw from the overall statements by Rear Admiral Ma is confirmation that an EMALS catapult is under advanced stages of development, likely with at least one functioning prototype, and that he as a leader on the project is very satisfied with its development and performance either projected or demonstrated."

The whole premise of your proposition is that Mr. Ma is the development head of the EMALS program. The scope of his development work is unknown. What is publicly known is that he has done what he needs to do and is ready to hand the program off to the next phase. I cannot object, on reasonable grounds of deduction, to the conclusion that he has finished his piece, meaning that the development effort at his end has advanced sufficiently to hand off. What we don't know includes many things:
(i) His development scope, and what constitutes his end of the program and the start of the next phase
(ii) His definition of having completed development, and what "satisfied" means
(iii) The meaning of the next phase, and what further work entails

Factually, all we know is that he is ready to pass the baby to the next person in the chain. Everything outside of that is simply conjecture.

(ii) and (iii) are things that we obviously do not know, and likely will not know for some time, but I would say that (i) is arguably the most important determinant of your skepticism.

Now, I am going to backtrack a little, but in a way which I believe is reasonable.
I would redefine (i), in relation to your acceptance of the situation I present, as how intimately involved and knowledgeable Rear Admiral Ma is in the EMALS project. That is to say, there are many positions of leadership and management in any project like this, and whether he is the highest, or one of the second highest, is not really important; after all, high positions in project management might be bureaucratic rather than technical. What matters more is whether he is involved to a degree where we can judge that he has adequate technical knowledge.

So I should rephrase my original statement to "What I draw from the overall statements by Rear Admiral Ma is confirmation that an EMALS catapult is under advanced stages of development, likely with at least one functioning prototype, and that he as a likely leader or intimately involved contributor on the project is very satisfied with its development and performance either projected or demonstrated."

But now, let's examine the original article.

...Rear Admiral Ma Weiming, power and electrical engineering specialist of the Navy of the Chinese People's Liberation Army (PLAN)....

So this tells us he is a rear admiral whose focus seems to be power and electrical engineering, and who is quite important in the R&D of various systems, as evidenced by the paragraph below.

Ma Weiming has won the First Prize of the National Scientific and Technological Progress Award and the First Prize of the Military Scientific and Technological Progress Award a number of times. He is called a "national-treasure-class" technical rear admiral.

But what position does he hold for the EMALS itself? Well, it's actually stated quite clearly below. (Incidentally, prior to this, Rear Admiral Ma was most noted for his involvement in Chinese IEPS development.)

Rear Admiral Ma Weiming, inventor of China's electromagnetic catapult and specialist in electrical engineering

And finally, in the last paragraph the Rear Admiral explicitly says he is responsible for developing usable technologies for the navy. Combined with the parts of the article selected above and the Rear Admiral's achievements, I think it is not an unreasonable assumption that he is quite intimately involved with the PLAN's EMALS development effort; and that is probably an understatement, given that the earlier part of the article calls him the "inventor" of China's electromagnetic catapult.

Pointing to the one star on his uniform, he said he is just a technical rear admiral and is only responsible for developing usable technologies, and that only high-ranking military officials can decide which kind of technical plan is adopted.

So unless there's any reason to suspect that the above statements are false or unreliable, I think it is a fair judgement, based on the article, to say that Rear Admiral Ma is likely able to satisfy the assumptions of my (now slightly modified) position.

--
I should also say I like this guy's style. Apart from being quite important to the PLAN's future technologies, he seems to have a sense of humour too.
 

Blitzo

Lieutenant General
Staff member
Super Moderator
Registered Member
Since you want to labour on this point I will oblige by demonstrating you are making a fallacy error of equivocation. [...] Facts do matter.

Regarding the 201 failures out of 1967 thing...
I know that you're trying to use the example to demonstrate that success depends on the metric one chooses, but the point latenlazy and dingyibvs are making is that, hypothetically, if the 201 failures were all clustered at the beginning of testing (say, the first 201 of the 1,967 tests), and the subsequent 1,766 tests were all successful, then that would suggest some kind of modification to the system has occurred which dramatically improved its reliability; thus a mean failure rate that includes the original 201 consecutive failures would be flawed. Of course, as I said, this is all hypothetical, and in the actual US tests with the 201/1967 failures, the distribution of failures is such that it is probably reasonable to use a mean cycle between failures figure.

Imo this entire discussion about the distribution of failures is only tangential to the matter at hand, because we have no idea what the testing stats of the Chinese EMALS are like.
As a matter of principle, I agree with you and think you're obviously correct in saying that the metric of success (such as the mean failure rate during testing) is important. But latenlazy and dingyibvs are also correct in saying that, hypothetically, if the vast majority of failures are clustered at the beginning of testing and there were no (or very few) failures afterwards, then that suggests either that the initial testing was done incorrectly in a way that exacerbated failure, and/or that the system was modified to correct the initial faults, so that the updated system no longer suffers from the defect that caused the initial run of failures.
That is to say, in the hypothetical case that the first 201 of 1,967 launches were failures, with launch 202 onwards all successful due to a modification of the system, it would be reasonable to classify test number 202 as test number 1 of a new, updated system, separate from the first 201 tests.

If the system in this hypothetical scenario was not changed after the 201 failed tests, and all 1,766 subsequent tests were successful, then it would obviously be a statistical miracle, and I'd probably go and buy a lottery ticket or ten if I were part of the project.

tl;dr: I think latenlazy and dingyibvs are emphasizing the distribution of failures as an indirect way of saying that correcting the initial defects of a hypothetical system may substantially reduce an initially high failure rate. In that sense, all this discussion about distribution is moot, as one should logically keep two test data sets: one for the system before the modification, and one for after.
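The pre/post-modification point can be illustrated with a toy calculation. The clustering of all 201 failures at the start is the hypothetical from the post, not the actual US test distribution:

```python
# Hypothetical from the post above: all 201 failures clustered before a system
# modification, with the remaining 1,766 launches all successful.
TOTAL_TESTS = 1967
EARLY_FAILURES = 201  # assumed (hypothetically) to all occur before the fix

# Pooled failure rate, ignoring the modification:
pooled_rate = EARLY_FAILURES / TOTAL_TESTS           # ~10.2%

# Treating post-modification launches as a fresh data set:
post_fix_rate = 0 / (TOTAL_TESTS - EARLY_FAILURES)   # 0.0%

print(f"Pooled failure rate:            {pooled_rate:.1%}")
print(f"Post-modification failure rate: {post_fix_rate:.1%}")
```

The gap between the two numbers is the whole point of the caveat: the pooled figure describes a system that, in this hypothetical, no longer exists.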
 

latenlazy

Brigadier
Yano, I was giving some thought to what potential timelines would look like, and concluded that our back and forth about this is somewhat irrelevant to the date of deployment. Even if an EMALS system were ready and launching planes, it wouldn't matter: it still wouldn't see operation on a fully operational CVN for at least half a decade, and maybe a little over that. This presumes they build a catapult into one of their first two indigenous carriers, and we're all reasonably certain the first one won't have a catapult, so that leaves the second one at the earliest, and maybe even the third (which pushes even the most optimistic timeline a good 3-5 years back). I think, at the very least, we all now know that EMALS as the catapult the PLAN will go with from the get-go is a reasonable expectation, if not an absolute certainty, and frankly I think that's the most important takeaway.
 

Brumby

Major
Regarding the 201 failures out of 1967 thing... [...]

Primarily, as I emphasized, the dissection of statistical possibilities and hypotheticals was not necessary, because it has no bearing on the nature and direction of the conversation. It was simply misdirection, in my view. A proactive program would probably produce a set of statistics showing progressive improvement in test results. We know as a fact that an additional 1,000-plus dead-load launches were conducted subsequently, but the results have not been released. If those results showed positive improvement, the hypothetical consideration would effectively be cancelled out. However, if the results are still below expectation, then further work needs to be done. Regardless, I fail to see how this issue has any significant bearing on the status of testing in the program, and in particular on the Chinese side of things.
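A "progressive improvement" check of the kind described above could look like this. The per-batch failure counts are invented for illustration: only their total (201) matches a known figure, and the real distribution across the launches has not been released.

```python
# Hypothetical failure counts per batch of ~500 dead-load launches.
# These numbers are invented for illustration; only their total (201) is known.
failures_per_batch = [70, 60, 45, 26]
BATCH_SIZE = 500

rates = [f / BATCH_SIZE for f in failures_per_batch]

# A proactive program would show each batch failing less often than the last.
improving = all(earlier > later for earlier, later in zip(rates, rates[1:]))

print("Failure rate per batch:", [f"{r:.1%}" for r in rates])
print("Progressive improvement:", improving)
```

With real batch data, a non-improving trend would argue against the "early faults were fixed" reading, which is why the withheld results matter.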
 

dingyibvs

Senior Member
Since you want to labour on this point I will oblige by demonstrating you are making a fallacy error of equivocation. [...] Facts do matter.

I didn't want to get into this either, but since you're focusing on details, let me pick your statement apart and demonstrate why these numbers mean absolutely nothing to me.

1) How do you reconcile 201 failures in 1,967 launches with 240 mean cycles between failures? Sounds more like the MCBF should be roughly 10 (1,967/201), no? The relevance of this question should be self-evident.

2a) What is considered a failure? Can you define it for me? Not all failures are created equal; we need to know what a failure is before we move on to the next question, which is...

2b) What types of failures are contained within those 201? Certainly an exploding EMALS apparatus is not on the same level as missing the minimum takeoff speed by 1%. Yes, this difference may sound frivolous, but what would the US consider a takeoff with 10% more speed than necessary and expected? Is that a failure? The plane still took off, but it would suffer unnecessarily large stress, which adds up over time and decreases the airframe's service life. What about 1% more speed than desired? What about 30%? If China and the US have different definitions of failure, then even if the PLA were to release their testing data, we'd be comparing apples to oranges, no?

3) We're going around in circles here, but what exactly is the distribution of these failures? The relevance of this has been pointed out by me and others. You can't just conveniently appeal to authority when you don't have the relevant details; you either trust the authorities or you trust the details. Since you've clearly elected to trust the details, please enlighten us with them so we can share your faith.

Feel free to answer these questions; I'm sure I'll have follow-on questions based on your answers until my bachelor-level EE knowledge is exhausted. It won't take long, I promise you. I switched careers and haven't done any real engineering in about 7 years, and I was never great to begin with.
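Question (1) can be made concrete with one line of arithmetic; the reconciliation suggested in the final comment is an assumption, not something stated anywhere in the thread:

```python
# 201 failures over 1,967 dead-load launches implies a mean of roughly
# 1967 / 201 ≈ 9.8 launches per failure, far below the quoted MCBF of 240.
launches = 1967
failures = 201
implied_mcbf = launches / failures
quoted_mcbf = 240

print(f"Implied mean cycles between failure: {implied_mcbf:.1f}")
print(f"Quoted mean cycles between failure:  {quoted_mcbf}")
# One possible reconciliation (an assumption, not from the thread): the 240
# figure may count only a critical subset of the 201 recorded failures.
```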
 

Blitzo

Lieutenant General
Staff member
Super Moderator
Registered Member
Primarily as I emphasized, the dissection over statistical possibilities and hypotheticals were not necessary because it doesn't have any bearing on the nature and direction of the conversation. [...]



I think this entire discussion surrounding the distribution of the 201/1967 failures has no bearing on the status of testing in the Chinese EMALS program, which is why I said:
Imo this entire discussion about the distribution of failures is only tangential to the matter at hand, because we have no idea what the testing stats of the Chinese EMALS are like.

I think dingyibvs and latenlazy were making the caveat that a simple mean failure rate isn't always accurate, depending on whether the distribution of failures was the result of something like improving the system and/or incorrect initial testing conditions.
Reading over some of your replies on the last page, I'm not sure whether you missed the rationale of what they were suggesting.

I'm not sure why the last few pages were spent contesting this relatively simple principle, which in reality isn't even really related to the Chinese EMALS situation.
I think we all implicitly agree that a sensible distribution of failures throughout a series of tests, where the system is not modified, is a simple but reliable indication of reliability; at least, I've said as much in my last post.
 

dingyibvs

Senior Member
Primarily as I emphasized, the dissection over statistical possibilities and hypotheticals were not necessary because it doesn't have any bearing on the nature and direction of the conversation. [...]

Explain why it has no bearing on the nature and direction of the conversation. Is it because the experts think it's unsatisfactory? If we're to rely on expert opinion, then why do we need the numbers in the first place? Are we not trying to make sense of the numbers ourselves? How can we do that without knowing the key parameters with which we need to dissect these numbers? If we can't even put TWO numbers in proper context, how are we to value the utility of all the details you so crave?
 

dingyibvs

Senior Member
I think this entire discussion surrounding the distribution of the 201/1967 failures has no bearing on the status of testing of the Chinese EMALS program. [...]

Here's my understanding of our situation. Rear Admiral Ma released a statement, and we're debating the relevance, and thereby the implications, of that statement, which I think we can all agree matters to our understanding of the Chinese EMALS program. Many of us value his statement highly and are willing to deduce, infer, and speculate at least partially on its basis. Brumby, on the other hand, does not value his statement highly and values details instead. I, at least, in an effort to point out to him that the details available in the public domain can never be sufficient to make an accurate assessment of the program, used the 201/1967 data to illustrate our point. It is, in essence, an example of why public-domain data alone is of limited use, and why you can make a more accurate assessment of the program from the actions and words (in the PLA's case, since, again, they correlate exceptionally well with their actions) of those who DO have access to sufficient quantities of data.

Essentially, our conversation here is about what to make of RA Ma's statement. Should it be valued, or should it be dismissed in the absence of details?
 