I guess Soyuz has had success with an abort system that I think has been used 3 times (?), but I don't think we know how many injuries and deaths have occurred due to the abort system propellants.
Soyuz SAS has killed 3 ground crew (1966) and saved 6 flight crew (1975, 1983, 2018).
Yep, the other thing bugging me. Why is there a need to communicate with Earth at all? ... so why can't they have fully autonomous flight control software that would monitor and manage the flight?
Could this anomaly be the result of human error based on the recent discussions on this thread?
I don't see how it could be anything but human error.
We simply don't have enough information to make that kind of judgment call right now. Let's not get ahead of ourselves.
I think the subtlety of the "I don't see how it could be anything but human error" comment was missed. It's what HAL told Dave Bowman when Mission Control told Dave there was a discrepancy between HAL and the ground-based 9000 unit....
Yep, the other thing bugging me. Why is there a need to communicate with Earth at all? ... so why can't they have fully autonomous flight control software that would monitor and manage the flight?
There is no "need" to communicate with Earth (for flight control). Starliner does have fully autonomous flight control software, but it failed. The communication that was attempted was to override the failed autonomous flight control software and take manual control of the spacecraft. But the communications dropout prevented that until too much fuel had been expended to make reaching the ISS tenable. Had crew been aboard, they would have done exactly what ground control attempted, but without the comm dropout they would have succeeded where ground control did not. There would not have been the time lapse during which excessive propellant was burned off. Crew would have taken control, oriented the spacecraft, and executed the insertion burn long before any comm gap had finished. By the time the spacecraft came out of the comm drop, it would have been well on its way to the ISS.
NASA Administrator Bridenstine has said that an uncrewed docking mission to the ISS is not required for a crewed mission to be launched. But will that crewed mission then have to dock manually? I don't think you would want an automated docking to take place for the first time with crew on board. And since all subsequent missions of Starliner to the ISS would be crewed, that would mean Starliner could never use its automated docking system.
Prove that any of this has stemmed from "poor corporate/managerial culture".
I've had good and bad management. One thing I do is write control software. The quality of my control software has nothing to do with the quality of my management. In fact, management has exactly nothing to do with any of it. I don't know if that's the case inside Boeing, and neither do you.
Exactly. You and ONLY you are responsible for the quality of the product you provide. Blaming lousy product quality on bad management says less about management than it does about your own character.
Crew would have taken control, oriented the spacecraft and executed the insertion burn, long before any comm gap had finished. By the time the spacecraft had come out of the comm drop the spacecraft would have been well on its way to the ISS,
I thought that voice comm also went through TDRS, so crew would have been onboard a ship with malfunctioning thrusters and no communications with the ground. How are you so confident that a crew would have had time to diagnose the problem, take control of the ship, and with no ground comms make a decision to boost to a higher, more dangerous orbit, rather than taking the safer de-orbit path that they were already on and waiting until comms returned? I would have to believe that the crew's first reaction would have been to treat this as an emergency. They would have declared it, and if they didn't already know comms were out, they soon would. Next step is to evaluate the emergency and take first steps to mitigate. Key to most emergencies is to not act rashly, and once in a stable state, begin a second evaluation of the situation. Pressing on to the ISS after an emergency that even a day later may not be fully understood, with no ground comms, makes no rational sense to me.
Prove that any of this has stemmed from "poor corporate/managerial culture".
I've had good and bad management. One thing I do is write control software. The quality of my control software has nothing to do with the quality of my management. In fact, management has exactly nothing to do with any of it. I don't know if that's the case inside Boeing, and neither do you.
Exactly. You and ONLY you are responsible for the quality of the product you provide. Blaming lousy product quality on bad management says less about management than it does about your own character.
I also write software for a living, and strongly disagree with this. I do the best I can, but I make mistakes. Fortunately, the software goes to others to be tested before it is released. The quality of these tests, and the feedback from them, has a huge effect on the quality of the final results.
Here is where management comes in. Management can assign really good folks to QA, who creatively think of ways to stress your software, or they can assign folks who will just test precisely what you fixed. They can think of QA as an asset that helps them maintain quality (and pay them well), or treat them as an overhead that should be minimized. If there is a workforce reduction, they can decide whether to fire a QA person or a coder. If a QA person finds a problem, they can reward them for preventing a customer failure, or castigate them as nit-pickers who are impacting the schedule. They can consider a move from (say) QA to coding as a promotion, or as a lateral move. There are any number of ways management can affect QA, and hence quality.
Writing software might seem like an activity where you and only you determine quality. But it's not - every programmer has blind spots and makes mistakes. It's a team effort to create quality software, and management has a big impact on the effectiveness of the team.
I assume that Boeing Space, being mostly a defense contractor, still uses a One Spec to Rule Them All kind of software methodology:
1) Gather requirements.
2) Write the spec.
3) Design and code the software to the spec.
4) Test to the spec.
5) For problems discovered, modify the spec through some soul-destroying process.
6) Goto 3
Notice that this is an infinite loop, and therefore a reasonable approximation to the actual Boeing process.
In contrast, companies that were founded some time in the last 15 years use something like an Agile process:
a) Gather key requirements and throw them on a big list.
b) Determine a minimum viable product from the contents of the big list.
c) Code to that MVP, likely using test-driven software development.
d) QA is mostly playing with the MVP and not worrying too much about the final product yet.
e) Decide if this is something that you could actually ship. If so, goto h
f) Take output from QA and the big list to build the next iteration.
g) goto c
h) If the customer is NASA or DoD, now you write the spec and clear it with the customer.
i) Do formal QA against the spec.
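The test-driven development mentioned in step c) can be sketched minimally: the test is written first and fails, then just enough implementation is written to make it pass. The `mission_elapsed_time` function and its interface here are invented purely for illustration:

```python
# Step 1: the test is written first. It fails until mission_elapsed_time
# exists and behaves correctly. (This function and its epoch-based
# interface are hypothetical, invented only to illustrate the workflow.)
def test_met_counts_seconds_since_launch():
    assert mission_elapsed_time(now=1000.0, launch_epoch=1000.0) == 0.0
    assert mission_elapsed_time(now=1090.0, launch_epoch=1000.0) == 90.0

# Step 2: the minimal implementation, written second, just enough code
# to make the test above pass.
def mission_elapsed_time(now: float, launch_epoch: float) -> float:
    """Seconds elapsed since the launch epoch."""
    return now - launch_epoch

test_met_counts_seconds_since_launch()
```

The point of the ordering is that the test captures the requirement before any code exists, so the requirement, not the implementation, drives the design.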
This is obviously an oversimplification, because you have to conform to NASA or the DoD's commitment milestones, but it's really different from the One Spec to Rule Them All methodology that was cutting-edge in 1975.
Assuming that the MET problem is really just a configuration error, I'm not sure which methodology would have been more likely to catch it; maybe they're both equally bad. But the agile methodology puts a very high premium on usability, and a usable system is one where it's hard to make a configuration error.
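On that last point, one way to make a configuration error hard is to validate configuration at load time rather than trusting it at face value. This is only a hypothetical sketch; the field name, schema, and sanity bound are all invented here and have nothing to do with Starliner's actual configuration:

```python
# Hypothetical sketch: a config loader that rejects implausible values
# when the config is loaded, instead of discovering the problem in flight.
# The field name and bound are invented for illustration.
from dataclasses import dataclass

MAX_REASONABLE_MET_OFFSET_S = 60 * 60  # assumed sanity bound: one hour

@dataclass(frozen=True)
class FlightConfig:
    met_offset_s: float  # offset applied to the mission elapsed timer

def load_flight_config(raw: dict) -> FlightConfig:
    offset = float(raw["met_offset_s"])
    # Fail loudly on a nonsensical value rather than flying with it.
    if not 0.0 <= offset <= MAX_REASONABLE_MET_OFFSET_S:
        raise ValueError(f"met_offset_s {offset} outside plausible range")
    return FlightConfig(met_offset_s=offset)
```

With a loader like this, `load_flight_config({"met_offset_s": 12.5})` succeeds, while an eleven-hour offset raises a `ValueError` at load time, which is the "hard to make a configuration error" property in miniature.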
I believe that the massive, massive cost and schedule overruns that the incumbents are incurring are likely largely driven by their discomfort with the fact that software now dictates their mechanical engineering practices and not the other way around. In contrast, the next-generation companies take this as an article of faith, and it guides all of their methodological practices.
That said, adapting Agile to mission-critical embedded systems is no picnic. However, there now appear to be existence proofs that it's possible.
A very similar event happened on Gemini 8. They got unexpected thruster firings, swapped to a backup system, and got it under control. They even diagnosed it down to a single stuck thruster. They then re-entered at first opportunity, since they could no longer trust the primary attitude control and were down to a single backup system.
Exactly. You and ONLY you are responsible for the quality of the product you provide. Blaming lousy product quality on bad management says less about management than it does about your own character.
This is absolutely, categorically false.
It has been proven mathematically that poor process (which is set by management) degrades output quality regardless of how skilled the individual workers are.
Could this anomaly be the result of human error based on the recent discussions on this thread?
I don't see how it could be anything but human error.
We simply don't have enough information to make that kind of judgment call right now. Let's not get ahead of ourselves.
Much like anybody stating the issue is company wide.
Any comparison to the Max problem is completely clueless
Yes, there is absolutely no way you could say that multiple major problems stemming from poor corporate/managerial culture across several divisions with increasing frequency can be the result of poor overall corporate/managerial culture.
Prove that any of this has stemmed from "poor corporate/managerial culture".
I've had good and bad management. One thing I do is write control software. The quality of my control software has nothing to do with the quality of my management. In fact, management has exactly nothing to do with any of it. I don't know if that's the case inside Boeing, and neither do you.
Exactly. You and ONLY you are responsible for the quality of the product you provide. Blaming lousy product quality on bad management says less about management than it does about your own character.
I also write software for a living, and strongly disagree with this. I do the best I can, but I make mistakes. Fortunately, the software goes to others to be tested before it is released. The quality of these tests, and the feedback from them, has a huge effect on the quality of the final results.
Here is where management comes in. Management can assign really good folks to QA, who creatively think of ways to stress your software, or they can assign folks who will just test precisely what you fixed.
Crew would have taken control, oriented the spacecraft and executed the insertion burn, long before any comm gap had finished. By the time the spacecraft had come out of the comm drop the spacecraft would have been well on its way to the ISS,
I thought that voice comm also went through TDRS, so crew would have been onboard a ship with malfunctioning thrusters and no communications with the ground. How are you so confident that a crew would have had time to diagnose the problem, take control of the ship, and with no ground comms make a decision to boost to a higher, more dangerous orbit, rather than taking the safer de-orbit path that they were already on and waiting until comms returned? I would have to believe that the crew's first reaction would have been to treat this as an emergency. They would have declared it, and if they didn't already know comms were out, they soon would. Next step is to evaluate the emergency and take first steps to mitigate. Key to most emergencies is to not act rashly, and once in a stable state, begin a second evaluation of the situation. Pressing on to the ISS after an emergency that even a day later may not be fully understood, with no ground comms, makes no rational sense to me.
I agree the astronauts would likely land and not press on. Put yourself in their place. You are in a brand new ship. The first thing you try results in unexpected thruster firings. You shut them off - now what? You'd have to think, what happens if I press on to ISS? Can I be *absolutely* sure the problem will not recur? If the same thing happens again after you raise orbit, it could be disastrous - not enough fuel for re-entry, and you die. If it re-occurs when right next to ISS, it could be worse - you could kill yourself AND the station crew. You would need to be absolutely sure the problem would not re-occur, and I don't see how you could be so certain in just a few minutes, even with the help of the ground. So the only sensible thing would be to re-enter as soon as possible, and live to fight another day.
A very similar event happened on Gemini 8. They got unexpected thruster firings, swapped to a backup system, and got it under control. They even diagnosed it down to a single stuck thruster. They then re-entered at first opportunity, since they could no longer trust the primary attitude control and were down to a single backup system. The pilot on that mission was Neil Armstrong, and he was praised for making the right decision, even though some of the other astronauts grumbled that if he had had more of the right stuff, the remaining portion of the mission could possibly have been saved.
Here is where management comes in. Management can assign really good folks to QA, who creatively think of ways to stress your software, or they can assign folks who will just test precisely what you fixed. They can think of QA as an asset that helps them maintain quality (and pay them well), or treat them as an overhead that should be minimized. If there is a workforce reduction, they can decide whether to fire a QA person or a coder. If a QA person finds a problem, they can reward them for preventing a customer failure, or castigate them as nit-pickers who are impacting the schedule. They can consider a move from (say) QA to coding as a promotion, or as a lateral move. There are any number of ways management can affect QA, and hence quality.
Writing software might seem like an activity where you and only you determine quality. But it's not - every programmer has blind spots and makes mistakes. It's a team effort to create quality software, and management has a big impact on the effectiveness of the team.
The job of management is to support the workers, if they need it. If they are doing more than that, then either they are bad or the workers are. If management is involved directly in the actual work of the company then something has gone horribly wrong.