Did they actually use the 14 flight option, or the 6 flight option? There have been 6 flights of Block 5, and they have a good amount of insight into that vehicle from the Commercial Crew program.
Quote from: gongora on 11/10/2018 01:49 pm
Did they actually use the 14 flight option, or the 6 flight option? There have been 6 flights of Block 5, and they have a good amount of insight into that vehicle from the Commercial Crew program.

Yeah, I removed that part earlier just to be safe. I was under the impression that the other two options were for new variants of launch vehicles that already had Cat 3 certification. However, D4H got certification with fewer than 14 flights, and I don't think D4M ever had it?
Quote from: psherriffs on 11/09/2018 10:05 pm
Just wondering. Where does the "14 consecutive successful missions" number come from? For 95% reliability wouldn't you need 19? I count 36 consecutive successful missions so far for SX.

After 14 consecutive successes you have slightly more than 50% confidence in 95% reliability. The calculation: 0.95^14 ≈ 0.488, i.e. 48.8%. This is the probability that a vehicle whose true reliability is exactly 95% would fly 14 missions in a row with no failures. The confidence level is 100% minus this number, or about 51.2%.

19 successful missions does not "prove" 95% reliability either; it only establishes a 62.3% confidence (1 − 0.95^19 ≈ 0.623) that the true reliability is at least 95%.
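The arithmetic above can be sketched in a few lines of Python (my own illustration of the standard zero-failure confidence formula, not anything from the certification documents):

```python
# Zero-failure confidence: after n consecutive successes with no
# failures, the confidence that true reliability is at least R is
# 1 - R**n (the chance that a vehicle with reliability exactly R
# would happen to fly n missions in a row without a failure).

def confidence(reliability: float, n_successes: int) -> float:
    """Confidence that true reliability >= `reliability`
    given `n_successes` consecutive successes and no failures."""
    return 1.0 - reliability ** n_successes

print(f"14 successes: {confidence(0.95, 14):.1%}")  # ~51.2%
print(f"19 successes: {confidence(0.95, 19):.1%}")  # ~62.3%
```

Note that even 36 consecutive successes only pushes this figure to about 84%, which is why no realistic flight count "proves" 95% reliability outright.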
Just wondering. Where does the "14 consecutive successful missions" number come from? For 95% reliability wouldn't you need 19? I count 36 consecutive successful missions so far for SX.
That's not how these probabilities work. [Examples deleted]

I hope this helps illustrate that everything depends on the prior probabilities. That is, it depends on the population of launch vehicles you are sampling from. [...] It's not a very satisfying answer, because we don't really know much about the prior probabilities. So we have to assume something. But we should be aware that our calculations are based on these (very uncertain) prior probabilities and be appropriately cautious in our use of the probability figures we get from them.
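To make the prior-dependence concrete, here is a minimal Python sketch (my own toy numbers, not the examples deleted above): a hypothetical population containing only two vehicle types, and the posterior probability that a vehicle is the reliable type after the same 14-success record, under three different assumed priors.

```python
# Illustrative sketch: the same flight record yields very different
# conclusions depending on the assumed prior mix of vehicle types.

def posterior_reliable(prior_reliable: float, r_good: float,
                       r_bad: float, n_successes: int) -> float:
    """P(vehicle has reliability r_good | n consecutive successes),
    assuming the population contains only two vehicle types."""
    p_good = prior_reliable * r_good ** n_successes
    p_bad = (1.0 - prior_reliable) * r_bad ** n_successes
    return p_good / (p_good + p_bad)

# Same 14-success record, three different priors:
for prior in (0.5, 0.1, 0.01):
    p = posterior_reliable(prior, r_good=0.99, r_bad=0.80, n_successes=14)
    print(f"prior P(reliable) = {prior:.2f} -> posterior {p:.3f}")
# prior 0.50 -> ~0.952; prior 0.10 -> ~0.687; prior 0.01 -> ~0.166
```

With a 50/50 prior the record is nearly conclusive; if reliable vehicles are assumed rare, the identical record leaves the question wide open.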
This is something that nearly everyone who isn't a professional statistician gets wrong. Probabilities are often non-intuitive. That's why people hire professional statisticians.
Quote from: ChrisWilson68 on 11/10/2018 01:24 am
This is all about comparing the odds that you've gotten this result because it's a reliable vehicle versus that you've gotten this result by chance even though the vehicle is not reliable.

Which is why the confidence is 50%. There is still an equal chance that the actual reliability is less than 95%, including due to weird population distributions.
This is all about comparing the odds that you've gotten this result because it's a reliable vehicle versus that you've gotten this result by chance even though the vehicle is not reliable.
I am not a reliability engineer, but I do know from working with reliability engineers that this is how they calculate confidence levels for reliability in populations with unknown distributions.
Quote
This is something that nearly everyone who isn't a professional statistician gets wrong. Probabilities are often non-intuitive. That's why people hire professional statisticians.

A professional statistician won't be much help here. The problem is the assumptions behind the model (the priors) and not the mathematics of probability.
Quote from: envy887 on 11/10/2018 12:57 pm
Quote from: ChrisWilson68 on 11/10/2018 01:24 am
This is all about comparing the odds that you've gotten this result because it's a reliable vehicle versus that you've gotten this result by chance even though the vehicle is not reliable.
Which is why the confidence is 50%. There is still an equal chance that the actual reliability is less than 95%, including due to weird population distributions.

Incorrect. See my previous posts where I worked out concrete examples where this is not true.
Quote from: envy887 on 11/10/2018 12:57 pm
I am not a reliability engineer, but I do know from working with reliability engineers that this is how they calculate confidence levels for reliability in populations with unknown distributions.

You're misunderstanding what they're doing.
They're not calculating it this way because these calculations give numbers that are true no matter what the prior distributions are. They're calculating it this way because, if they want to get numbers, they need to assume *something* about the priors. These calculations build in a particular set of assumptions about the priors.
A lot of people think what you are claiming here, that these formulas give results that are true no matter what the priors are.
But that's not true. The formulas assume something about the priors. They have to. They assume something about the priors that is a useful heuristic.

There's nothing wrong with doing a calculation like that as long as you understand what it actually means and don't get overconfident about the numbers it gives.
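One way to see the built-in assumption (my illustration, not from the thread): the classical 1 − R^n confidence figure nearly coincides with a Bayesian posterior probability computed under one particular prior, a flat (uniform) prior over reliability.

```python
# Sketch: with n successes and 0 failures, a uniform Beta(1, 1) prior
# over reliability updates to a Beta(n + 1, 1) posterior, for which
# P(true reliability >= R) = 1 - R**(n + 1) -- almost the classical
# 1 - R**n. The classical formula is not prior-free; it gives numbers
# very close to a Bayesian answer under one specific flat-prior assumption.

def classical_confidence(r: float, n: int) -> float:
    return 1.0 - r ** n

def bayes_uniform_prior(r: float, n: int) -> float:
    # Complement of the Beta(n + 1, 1) CDF evaluated at r
    return 1.0 - r ** (n + 1)

print(f"classical: {classical_confidence(0.95, 14):.3f}")  # ~0.512
print(f"bayesian : {bayes_uniform_prior(0.95, 14):.3f}")   # ~0.537
```

Swap in a different prior (as in the two-vehicle-type example earlier in the thread) and the same 14-success record yields a very different number, which is exactly the point being made here.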