As I stated in another thread, a bigger fairing could have been done for $500M or less (probably much less).
Quote from: Nancyloo on 12/28/2010 02:18 am
"If greater payload volume is so beneficial why has a 6 or 7m payload fairing not been developed for the EELV's?"

They tried that. Look up Hammerhead fairings. Caused severe buffeting during flight.
If greater payload volume is so beneficial why has a 6 or 7m payload fairing not been developed for the EELV’s?
Quote from: baldusi on 02/15/2012 05:27 pm
"As I stated in other thread, a bigger fairing could have been done for 500M or less (probably much less)."

Quote from: Downix on 12/28/2010 02:52 am
    Quote from: Nancyloo on 12/28/2010 02:18 am
    "If greater payload volume is so beneficial why has a 6 or 7m payload fairing not been developed for the EELV's?"
"They tried that. Look up Hammerhead fairings. Caused severe buffeting during flight."
ULA still offers the large fairings in their user manuals for EELVs, so that beats speculation by amateurs.
1) An untested hammerhead fairing experiment on a multi-billion-dollar payload = undesirable.
2) A 7.2 meter fairing might still be too small to justify telescope design changes (you end up with a slightly bigger complicated umbrella instead of a simpler, bigger, cheaper one).
3) They would have to pay for the upgrade out of this project's budget, which might not be possible given money-management rules; i.e., only the fairing-upgrade justification department can pay for it for them, but that budget is already spoken for or too small.

I don't know if this is the case or not, but I've worked in big enough companies to see how silly things like that can get in the way (drilling departments with way too much power).
http://selenianboondocks.com/2010/06/agnep1/
I have seen for years now at NASA how even the most clever engineers can be seduced or bullied into accepting terrible vehicle designs ... He [Jon's boss] quietly but firmly cut me off and said: "The answer is Shuttle-C."

I understood from the tone of his voice that this decision wasn't technical, it was strategic. Shuttle-C was based on propulsion hardware developed and controlled at MSFC. If the Mars program went forward, if this vehicle was developed, if Shuttle-C was baselined for its launch, then MSFC would be supporting it for many years to come.
Since somehow my original post disappeared, here's a restatement and additional information:

In the 2000s, NASA was trolling around for old bulk supplies of medical equipment to get Intel 8086 chips. LINK

Then there's the case of the Solid Rocket Boosters. The paper "Reusable Solid Rocket Motor: Accomplishments, Lessons, and a Culture of Success" by Moore and Phelps says that during a ten-year period beginning in the mid-1990s, more than 100 materials used in the RSRM became obsolete -- i.e., got dropped by their manufacturers.

Yes, workarounds were found to keep the RSRM program, and by extension the Shuttle program, going; but it did consume a non-trivial amount of money for re-certification.

This never would have been a problem if the Shuttle had achieved its program goal of rapid and routine spaceflight, because we'd have retired OV-099 through OV-104 by 1992-1994 and replaced them with the OV-2xx series; and the OV-2xx Orbiters would in turn have been retired by 2000-2002 and replaced with the OV-3xx series.
Are you similarly outraged at the B-52 program?
These types of issues are not unique to the Shuttle. They were dealt with in any number of ways (and the reality is you have just scratched the surface) in many, many programs that stay in operation for years and years.
About trolling for 8086 chips, as an aside.

New tech is more expensive than old tech. The payoff for chips, as one example, is that you get more transistors per dollar with the new chip, and this translates into more computing power; therefore the amortized price of the calculation is less than what you got with the old chip. Fine.

There's no question that an 80286 chip could never compete with a Xeon chip doing an FEA computational analysis of some task, say combustion flow in the nozzle. But one of the things happening these days is a common mindset that one simply can't do any meaningful work improving launch costs, for example, without the latest and greatest tech. The lost truth is that plenty of work can be done without waiting for, or buying, the latest and greatest technology.

Somewhere on the forum is a thread which includes a short discussion of a multi-million-dollar new computer and support contract. Several posters observed that it was a pretty high annual cost per seat for tech support; maybe $10K/seat/year, IIRC. It wasn't at all clear what NASA got for that money, however. So what's the whine?

NASA is losing its way and its funding, in part because of high design costs. The high costs are partly a function of a shallow insistence on new tech for the sake of new tech, losing sight of the work itself.

The typical bullying (see below) knee-jerk reaction is, "What? You don't want our Rocket Scientists to have the best tools available?" -- as if that were the extent of the analysis necessary for getting work accomplished; and worse, as if the only choices I offer are between the latest and greatest and slide rules. NASA's management would do well to learn something about "appropriate technology."

Again, the lost truth is that accomplishment is taking a back seat to the acquisition of new gadgetry.
The example of trolling for 8086s is a case of "for want of a penny, a war was lost." Part of good spacecraft design is identifying the weak points in the design, such as chips, and ensuring that supplies will be available over the life of the design.
Quote from: OV-106 on 02/18/2012 03:37 pm
"Are you similarly outraged at the B-52 program?"

In the early 1980s, the Gunner's CRT on the B-52 had a lifetime of about a decade in active use. In the rare cases when they failed, they were replaced by new CRTs still in the original packaging, with production dates of 1961. [talks with a former B-52 crewdog from the early 1980s]

Additionally, in the 1990s we retired all but the G and H models, in effect massively increasing the spare-parts pile for the remaining B-52s.

The big problem was that the Shuttle was designed just as the microelectronics revolution was taking off in the 1970s. The F-15 is prone to the same issues; the USAF has been forced to trawl around to find the required chips for early F-15 flight-control computers.
Quote from: JohnFornaro on 02/18/2012 01:44 pm
"About trolling for 8086 chips, as an aside. ... Again, the lost truth is that accomplishment is taking a back seat to the acquisition of new gadgetry."
Quote
"The example of trolling for 8086's is an example of 'for want of a penny a war was lost'. Part of a good spacecraft design would be in identifying the weak points in the design, such as chips, and ensuring that the supplies will be available over the life of the design."

Choosing a chip that will be around for a long time is akin to looking into a crystal ball. NASA did go right with the 8086, as it stuck around for a long time. FYI, a lot of embedded designs still use 8-bit controllers, and there are some good arguments for using a simple processor for some things. The 8051 chip, for example, is still manufactured as clones today. Though 8051s have largely been replaced by newer chips such as the ATmega, one can still be found inside a modern computer monitor or television. Note, though, that NASA stuck with the original 8086 rather than moving to newer clones or even single-chip solutions...
Even if people stop building 8086s entirely, you can always just build one in an FPGA (there are fully rad-hard--not just rad-tolerant--FPGAs available these days). There are even free 8086 IP "cores" available, and the cost of buying/licensing a non-free one would be absolutely trivial in this case. In fact, you could fit a whole bunch of those 8086s on a single FPGA. The nice thing about FPGAs, too, is that if you do want to upgrade to a more sophisticated core than the 8086, all you have to do is re-program it... you could do that even if it's billions of miles from Earth.

Also, with a rad-hard FPGA, you can program all (or almost all) of the digital functions onto the single unit, allowing you to effectively have your own single-chip solution.

EDIT: And for what it's worth, I disagree with the assertion that old tech is necessarily cheaper than modern tech. A modern microcontroller is a heck of a lot cheaper than a whole bunch of vacuum tubes or discrete transistors; it operates a lot faster and is more flexible as well (a bunch of digital inputs/outputs, and often analog inputs and PWM/analog outputs thrown in for good measure), while consuming much less power and being more reliable in most environments.
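The soft-core point above can be illustrated in software terms: an FPGA bitstream describes the processor the way a small interpreter describes an instruction set, so "upgrading the core" is just loading a new description rather than fabricating new hardware. Here is a minimal, purely illustrative sketch -- this is not 8086 code or HDL; the three-instruction ISA, the opcode numbers, and the function names are all invented for illustration:

```python
# Minimal illustration of the "soft core" idea: the CPU is just a
# description (here, a dict of opcode handlers) that can be replaced
# at any time -- analogous to reloading an FPGA bitstream.
# The tiny ISA below is invented purely for illustration.

def make_core_v1():
    """A tiny accumulator machine: LOAD, ADD, HALT."""
    def load(state, arg): state["acc"] = arg
    def add(state, arg):  state["acc"] += arg
    def halt(state, arg): state["running"] = False
    return {0x01: load, 0x02: add, 0xFF: halt}

def run(core, program):
    """Fetch-decode-execute loop; 'core' is swappable, like a bitstream."""
    state = {"acc": 0, "pc": 0, "running": True}
    while state["running"]:
        opcode, arg = program[state["pc"]]
        state["pc"] += 1
        core[opcode](state, arg)   # decode via whatever core is loaded
    return state["acc"]

program = [(0x01, 40), (0x02, 2), (0xFF, 0)]    # LOAD 40; ADD 2; HALT
print(run(make_core_v1(), program))             # -> 42

# "Upgrading the core" = supplying a new description, no new hardware:
def make_core_v2():
    core = make_core_v1()
    def mul(state, arg): state["acc"] *= arg    # new MUL instruction
    core[0x03] = mul
    return core

print(run(make_core_v2(), [(0x01, 6), (0x03, 7), (0xFF, 0)]))  # -> 42
```

The design point is the same one Robotbeat makes for FPGAs: because the processor is data rather than fixed silicon, an old program keeps running unchanged on core v1 while a new capability can be added by loading core v2 -- even remotely.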
Quote from: Robotbeat on 02/18/2012 04:45 pm
"Even if people stop building 8086s entirely, you can always just build one in an FPGA ... allowing you to effectively have your own single-chip solution."

True, FPGAs have probably made obsolescence less of an issue. Entire computers such as the C64, Apple II, and even early Amigas have been replicated in FPGA.

The big issue with NASA is that they were worried about software issues from unforeseen bugs; they even avoided clones of the 8086 such as the NEC V30. Though it should be noted that hardware is only one part of the cost: software and qualification are also considerable costs, especially for something that is not mass-produced in large numbers.