I think you have lost the plot on this one. Firstly, you need to be clear about exactly what you want to come out of this conversation. If you are attempting to nitpick because I have given you some test data, you are heading in the wrong direction, especially with hypotheticals about means and bell-curve distributions, which have no relevance to this conversation.
The test data establishes the following:
(i) Dead-load launches have been done, and in sizable numbers: 1,967 of them, with a failure rate five times higher than the expected target. That means reliability is an issue that needs fixing.
(ii) Mean and distribution are irrelevant when the people who know what they want out of the testing are saying the results are not satisfactory. You are creating scenarios that are simply not at issue.
(iii) 452 aircraft launches have been done, with excessive release dynamics identified as an issue that needs fixing.
(iv) With the above, we know what testing has been done and what needs fixing. Effectively, we know exactly where they are with the program.
In contrast, by the same measure, give me one piece of factual evidence, beyond an official statement, that serves as the basis for all the deduction, inference, and speculation about where the program may be on the Chinese side.
What I don't understand is why we are going through a second round of this. I thought we were done.
I'm simply saying that "1,967 tests, 201 failures, 204 mean cycles" tells us next to nothing about what the actual test process and the difficulties of the technology look like. It doesn't even do a good job of identifying the exact challenges or letting us assess progress. That type of statistic is too general and is mostly just a smokescreen. "Five times the target fail rate" does quite a bit better at communicating that information, though it's not perfect. So yeah, a bit of a nitpick.
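(To make the nitpick concrete, here is a minimal sketch of the arithmetic involved; the notation is mine and not anything from the test reports, with MCBF standing in for "mean cycles between failure":

\[
\mathrm{MCBF}_{\text{observed}} = \frac{N_{\text{launches}}}{N_{\text{failures}}},
\qquad
r = \frac{\mathrm{MCBF}_{\text{target}}}{\mathrm{MCBF}_{\text{observed}}}
\]

A raw count of launches and failures only becomes informative once it's normalized against the requirement like this; \(r \approx 5\) is what "five times the target fail rate" is actually reporting, which is why that figure carries more meaning than the counts on their own.)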
I don't think anyone disputes that we have a lot more transparency into US programs. Nor is anyone arguing that a statement by a PLA official is more reliable than detailed documentation from a US military program. However, our point of interest isn't whether US sources are more reliable about US programs than Chinese sources are about Chinese programs; it's figuring out what the Chinese sources are telling us about Chinese programs and their state of development and progress.
I'm going to step back from our exchange to try to get at what may be the deeper point of disagreement here.
Now, as I understand it (and correct me if I'm wrong), you bring up Lakehurst to set up a point of comparison, and you focus on reliability of information to suggest that gauging the progress and challenges of a near-peer program is generally a sounder way to estimate how far along a Chinese EMALS is than taking educated stabs at a few pieces of leaked information and documents coupled with the official statement of a general. From a general information-assessment point of view, I agree.
The criticism of that approach is that it presumes some comparability in development paths and timelines between a US program and a Chinese program. That, too, requires its own inferences, extrapolations, deductions, and assumptions. For instance, we assume those risks are reduced because physics is generally the same for everyone and there are generalities to engineering development, but that's not always a given, and it's going to vary case by case. That condition may not apply to an EMALS, for example, if China has a better grasp of deploying engineering solutions that depend on electromagnetic propulsion because it has built a maglev while the US hasn't. I'm not saying this is necessarily true, of course, but it's an example of where the specifics would undermine the generalities. After all, you can make an argument that provides far more data, but if the comparative case being used is wrong and not a good analogue, no amount of data can override the logical incongruence and make the conclusion more accurate.
I'm not saying that approach is bad or wrong, of course; none of our approaches is perfect. I am saying, however, that this line of argumentation has its own problems at a different point in the chain. You will of course assert that this framework yields a more reliable assessment and that we should place greater trust in conclusions derived from it than in the alternatives, but some members (me included, to a significantly lesser extent) question why we should take that reliability as a given, and they push back on that certitude, especially when they have their own experience with analytical approaches that have worked in the past.
To summarize: while you're critiquing the approach of drawing dotted lines along a pattern of historical precedent based on information markers and sparse pieces of information, others are critiquing the approach of assuming that a case argued to be comparable can be used for backward induction. (After all, if you're going to say that x and y are congruent, you need to prove the congruence first.) There are good reasons why each side strongly feels its method is more informative.
Personally, I see both sides, and I'd like to think experience has taught me never to be certain of any of these methods, so it's all just good and casual conversation. They're all different forms of information that we're trying to fit into a billion-piece jigsaw that's missing most of its pieces. Sorry if things got contentious or heated, or if this was an unnecessarily long and wordy response. I agree that we should move on, but I felt it best to try to air out exactly where everyone seemed to be butting heads.