All the speculation about why Northrop Grumman & OmegA lost the NSSL Phase 2 LSP contract, pointing to things like reusability, obsolescence, what the commercial market will support, and so forth, amuses me. The government defined very specific criteria in their RFP for evaluating the proposals, and it's all in the public domain. I direct your attention here, specifically Attachment 6, or read the full RFP if you'd like to familiarize yourself.
https://beta.sam.gov/opp/c3e7f1e342154e92a5779c5daa1b1db0/view
The Reader's Digest version goes kinda like this:
* There are multiple criteria on which your proposal is evaluated, and in each primary technical area you can get a color/score of Outstanding, Good, Acceptable, Marginal, or Unacceptable. For non-technical areas you get an Acceptable or Unacceptable rating. On Price, they look at the "Total Evaluated Price" (TEP) and you get a Reasonable/Not Reasonable rating and a Balanced/Unbalanced rating.
* In the Technical factor, there are four subfactors:
1. System Capability, specifically Mass-To-Orbit, Orbital Accuracy, and Mission Assurance
2. Category A/B Missions, including two sample missions that reflect real-life missions in planning, plus System Readiness. These sample missions would be those that you would expect to fly on an Atlas V, Delta IV M/M+, Falcon 9, OmegA Intermediate, or an early Vulcan Centaur.
3. Category C Missions, including two sample missions that reflect real-life missions in planning, plus System Readiness. These sample missions would be expected to fly on Delta IV Heavy, Falcon Heavy, OmegA Heavy, or a later Vulcan Centaur/ACES.
4. System Risks and Mitigations
* In addition to the combined technical color-code rating, the Technical subfactors have a level-deeper list where individual Strengths and Weaknesses can be identified, which are helpful in comparing two offerors who have the same summary rating.
* The non-technical areas are Past Performance and Small Business Participation. You might want to say "ah-ha!" and suspect that a fail rating in one of these areas can explain some outcome, but I will put that to rest. All four competitors received "Acceptable" ratings in both of these categories.
* For the Price evaluation, each offeror needs to supply a list of products and services and prices for each year of the contract, and my understanding is that those prices are Firm Fixed Price. The government bounces those prices against their mission manifest scenarios, including mission acceleration, launch service support and fleet surveillance, anomaly resolution, special studies, and integration studies. That weighted sum is called the Total Evaluated Price (TEP).
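In spirit, the TEP is just a weighted sum of fixed prices across the government's manifest scenarios. Here's a minimal sketch of that arithmetic; the scenario names, weights, and dollar figures below are made up for illustration and are not from the RFP:

```python
# Illustrative sketch of a Total Evaluated Price (TEP) style weighted sum.
# Scenario names, weights, and prices are hypothetical, not from the RFP;
# the actual manifest scenarios and weights are defined in the solicitation.

def total_evaluated_price(offeror_prices: dict[str, float],
                          scenario_weights: dict[str, float]) -> float:
    """Weighted sum of an offeror's fixed prices across manifest scenarios."""
    return sum(scenario_weights[s] * offeror_prices[s] for s in scenario_weights)

# Hypothetical per-scenario fixed prices (in $M) and notional weights.
prices = {"baseline_manifest": 900.0, "mission_acceleration": 120.0,
          "fleet_surveillance": 45.0, "special_studies": 10.0}
weights = {"baseline_manifest": 1.0, "mission_acceleration": 0.5,
           "fleet_surveillance": 1.0, "special_studies": 0.25}

tep = total_evaluated_price(prices, weights)
print(f"TEP: ${tep:.1f}M")  # 900 + 60 + 45 + 2.5 = $1007.5M
```

The point of weighting is that a scenario the government considers likely (or expensive) moves the TEP more than an unlikely one, so offerors can't win by pricing the improbable line items low.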
* During the evaluation process, the government issues each offeror an "Evaluation Notice" (EN) identifying the strengths and weaknesses found in their proposal. The offeror can then respond, polishing areas of the proposal to close an identified weakness or firm up an identified strength or significant strength. There were three rounds of ENs during the proposal evaluation.
So it comes down to the Technical factor and Price, and in the Price evaluation all four entrants were deemed to have "Reasonable" and "Balanced" pricing.
At this point I can't say much more, because the evaluation is marked FOUO and subject to distribution statements.
But I will point out that the Technical evaluation of each proposal is singularly focused on the offerors'
ability to achieve the government's mission objectives and execute the contract during its period of performance with a low risk, highly reliable launch solution. Price is an important consideration, but not as important as Technical merit. There's nothing in there directly about hardware reuse, synergies with other government customers/programs, spreading the work around as many suppliers as possible to keep everyone happy (except as a tick-the-box requirement about Small Business utilization, which has a specific definition in government procurement law), etc., except to the extent that those features/bugs influence your TEP or risk, or show up as strengths or weaknesses in the level-deeper comparison evaluation.
If SpaceX scored any Strengths on hardware reusability, portions of the comparative evaluation that pertain to that were redacted. If hardware reuse influenced their price to the government, I saw no evidence of that.
One more point: the government had to balance Technical/Risk scoring on one hand and Price on the other. In that case, a Value determination had to be made, which adds some subjectivity. It was the job of the source selection committee to determine the Best Value (60%) and Next Best Value (40%) offerings, in a one-choice-at-a-time method. It was not a unanimous determination, but it wasn't a close call either.
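The one-choice-at-a-time mechanics can be sketched like this. To be clear, the real determination was a subjective tradeoff by the committee, not a single numeric score; the scores and offeror labels below are purely hypothetical stand-ins for that judgment:

```python
# Illustrative sketch of sequential ("one choice at a time") best-value award:
# pick the Best Value offeror for the 60% share, remove them, then pick the
# Next Best Value from those remaining for the 40% share.
# Value scores are hypothetical; the actual evaluation was a subjective
# tradeoff between technical merit and price, not a single number.

def select_providers(value_scores: dict[str, float]) -> list[tuple[str, float]]:
    """Return (offeror, mission share) awards in selection order."""
    remaining = dict(value_scores)
    awards = []
    for share in (0.60, 0.40):
        winner = max(remaining, key=remaining.get)  # best value still in play
        awards.append((winner, share))
        del remaining[winner]
    return awards

scores = {"A": 0.92, "B": 0.88, "C": 0.81, "D": 0.79}  # hypothetical
print(select_providers(scores))  # [('A', 0.6), ('B', 0.4)]
```

The sequential structure matters: the 40% winner is the best of those *remaining* after the first pick, which is not necessarily the same ranking you'd get from any pairwise comparison of all four at once.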
I'd like to say more about how the evaluation was scored, but shouldn't do so, because that's not public information. OmegA was
very competitive, and I'm proud of what the team accomplished and proposed. My opinion based on what I read is that if the government could have awarded three contracts, they would have.
I'd also like to echo what Tory Bruno said on Twitter recently, that if you look at the values of the contract task orders (i.e. missions) that were awarded at the LSP announcement, "the identity of the lowest price provider might surprise you..."