Is Atlas the only vehicle that can do yaw steering?
Shuttle did it, too
And so did Apollo in 1975 (for the Apollo-Soyuz mission) and Titan in 1966 (for Gemini 12, which rendezvoused with the Atlas launched one orbit earlier, which is why they needed yaw steering).
(You seem to think that because the landing location is fixed relative to the surface of Earth it is somehow easier for the rocket to know its relative location to the landing site, but this is not true)
Wrong; the Falcon has INS and GPS, so it knows exactly where it is in relation to the landing site.
1. If regular launches can use tables, then so can yaw steering. Your argument that "nope, they just load the one" is meaningless, because just because they don't doesn't mean they couldn't. You would have to provide a reason they couldn't, such as the Delta not having enough RAM. I doubt that would be the case for a Falcon.
2. Yaw steering doesn't have to use tables; it can be computed onboard, as you said. Either way, this is still *only* a software functionality; you have yet to name one piece of hardware that would need to change for a Delta or a Falcon.
1. They don't
2. The whole point of yaw steering is a continuous launch window rather than a set of instantaneous launch times.
(You seem to think that because the landing location is fixed relative to the surface of Earth it is somehow easier for the rocket to know its relative location to the landing site, but this is not true)
Wrong; the Falcon has INS and GPS, so it knows exactly where it is in relation to the landing site.
In that case, vehicles planning to dock with the ISS know exactly where they are relative to the ISS, because they would be preloaded with the orbital parameters of the target and can have GPS and INS. Yaw steering is the same thing: the rocket would know the desired orbit and its current position relative to it. This works both ways. Your only explanation so far for why that wouldn't be the case is the dependence on time of launch, but I already explained why that isn't an issue (beyond the obviously required software changes).
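The preloaded-orbit claim is easy to make concrete. As a rough sketch (hypothetical code, not anything flown): given the target plane's inclination and RAAN loaded before launch, and an inertial position from INS/GPS, the out-of-plane error is just the component of position along the plane's normal, which is the quantity a yaw-steering law would drive to zero.

```python
# Hypothetical sketch, not flight software: with the target plane's
# inclination and RAAN preloaded, and an inertial position from INS/GPS,
# the out-of-plane error is the position component along the plane normal.
import numpy as np

def out_of_plane_error_m(pos_eci_m, inclination_deg, raan_deg):
    """Signed distance (m) from the vehicle to the target orbit plane."""
    i = np.radians(inclination_deg)
    raan = np.radians(raan_deg)
    # Unit normal (angular-momentum direction) of the target orbit plane
    normal = np.array([np.sin(raan) * np.sin(i),
                       -np.cos(raan) * np.sin(i),
                       np.cos(i)])
    return float(np.dot(pos_eci_m, normal))

# A point at the ascending node lies in the plane, so the error is zero:
print(out_of_plane_error_m(np.array([6.778e6, 0.0, 0.0]), 51.6, 0.0))  # prints 0.0
```

A guidance law would feed an error like this (plus its rate) into the steering command; everything it needs is the target's orbital parameters plus the navigation state, which is the point being argued.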
1. If regular launches can use tables, then so can yaw steering. Your argument that "nope, they just load the one" is meaningless, because just because they don't doesn't mean they couldn't. You would have to provide a reason they couldn't, such as the Delta not having enough RAM. I doubt that would be the case for a Falcon.
2. Yaw steering doesn't have to use tables; it can be computed onboard, as you said. Either way, this is still *only* a software functionality; you have yet to name one piece of hardware that would need to change for a Delta or a Falcon.
1. They don't
2. The whole point of yaw steering is a continuous launch window rather than a set of instantaneous launch times.
1. Ok.
2. The Atlas V Cygnus launches used a series of discrete launch opportunities over the 30-minute window.
In that case, vehicles planning to dock with the ISS know exactly where they are relative to the ISS, because they would be preloaded with the orbital parameters of the target,
No, they used sensors for that.
I believe the reasons for lacking it are bean counters, not technology. Unless you have a mission that needs it to reach some otherwise unavailable orbit, or some customer demands it, the decision boils down to how often a few-minute window would help, and how much a delay until the next window costs, versus the cost to develop and certify.
Nope, it is the avionics/software architecture. Delta could not do it even if they wanted to.
Are you saying this is incorrect?
...
For yaw (RAAN) steering, it's the robustness of the software and how it converges under highly dispersed conditions (see OA-6).
Atlas has had this capability going back to the Atlas II days. Delta IV will have it this year.
(emphasis mine)
I believe the reasons for lacking it are bean counters, not technology. Unless you have a mission that needs it to reach some otherwise unavailable orbit, or some customer demands it, the decision boils down to how often a few-minute window would help, and how much a delay until the next window costs, versus the cost to develop and certify.
Nope, it is the avionics/software architecture. Delta could not do it even if they wanted to.
Are you saying this is incorrect?
...
For yaw (RAAN) steering, it's the robustness of the software and how it converges under highly dispersed conditions (see OA-6).
Atlas has had this capability going back to the Atlas II days. Delta IV will have it this year.
Delta IV is getting a new avionics suite as part of the ULA common avionics upgrade. The old RIFCA-based system could not do it.
In that case, vehicles planning to dock with the ISS know exactly where they are relative to the ISS, because they would be preloaded with the orbital parameters of the target,
No, they used sensors for that.
And the Falcon 9 uses sensors (besides INS and GPS) to find the landing site, so by that logic your statement was incorrect:
Wrong; the Falcon has INS and GPS, so it knows exactly where it is in relation to the landing site.
And the Falcon 9 uses sensors (besides INS and GPS) to find the landing site, so by that logic your statement was incorrect:
Wrong. It only has a radar altimeter, and it does not use it to find the landing site; it is only for the final seconds.
And actually, F9 could likely land on land without the radar.
And the Falcon 9 uses sensors (besides INS and GPS) to find the landing site, so by that logic your statement was incorrect:
Wrong. It only has a radar altimeter, and it does not use it to find the landing site; it is only for the final seconds.
And actually, F9 could likely land on land without the radar.
Finding the landing site includes finding its location on the z axis. Are you trying to say that the altimeter is not a sensor? Otherwise, what are you saying is wrong in my post? The "s" at the end of "sensors"? In that case, how sure are you that there are no other landing sensors (including anything based on the barge and transmitted to the stage)? If there was a definitive statement on this, I missed it.
This is all drifting off topic from the original point: the F9 is certainly capable of yaw steering from a processor-performance and available-information (hardware) perspective, even though it would obviously need software changes to do so. If there is something wrong with this statement, please clarify what hardware changes it would need.
A Falcon 9 might be able to do yaw-steering with the right software. But it doesn't have enough sensors to actually rendezvous and especially dock in orbit.
It has a radar that helps reduce z-height uncertainty (which can be compounded with a barge-landing since the sea surface isn't always flat). It'd be pretty useless for docking, though.
Are you saying this is incorrect?
For yaw (RAAN) steering, it's the robustness of the software and how it converges under highly dispersed conditions (see OA-6).
Atlas has had this capability going back to the Atlas II days. Delta IV will have it this year.
Delta IV is getting a new avionics suite as part of the ULA common avionics upgrade. The old RIFCA-based system could not do it.
If you look at this paper on RIFCA, "A report on the flight of Delta II's redundant inertial flight control assembly (RIFCA)" (unfortunately behind a paywall), there are many factors that could make adding new features (not just yaw steering) difficult. For safety, it runs programs out of EEPROM, which is slow to program. It has only 64K of program memory and 64K of RAM. Given what it's trying to do (lots of redundancy, sensor checking, etc.), this might very well be full. It talks to the GSE through a single RS-422 data link, which might be slow.
The problem with adding yaw steering is surely not processor speed. The RIFCA has triple BX-1750A microprocessors, which can run at 20 MHz. So it's quite a bit more powerful than the Shuttle computers (which were about 1.2 MIPS) and orders of magnitude more powerful than the Apollo computer. Both of these could do yaw steering.
My guess (and it's a guess) is that the memory is already fully used, so adding *any* new feature is extremely difficult.
I'd be extremely surprised if any of these limitations apply to the SpaceX avionics. In that case, I still believe in the "bean counter" argument - they have not yet done it since the cost exceeds the benefits. Here you need the opportunity costs as well - if the same groups could add yaw steering, or enhance the landing software, I'd suspect they'd be assigned to the latter.
In that case, how sure are you that there are no other landing sensors (including anything based on the barge and transmitted to the stage)?
Yes, and like I said, it can land on land without it.
I'd be extremely surprised if any of these limitations apply to the SpaceX avionics. In that case, I still believe in the "bean counter" argument - they have not yet done it since the cost exceeds the benefits. Here you need the opportunity costs as well - if the same groups could add yaw steering, or enhance the landing software, I'd suspect they'd be assigned to the latter.
It is not just yaw steering. Like for DSCOVR, even with excess performance, there was only a one-second launch window.
I'd be extremely surprised if any of these limitations apply to the SpaceX avionics. In that case, I still believe in the "bean counter" argument - they have not yet done it since the cost exceeds the benefits. Here you need the opportunity costs as well - if the same groups could add yaw steering, or enhance the landing software, I'd suspect they'd be assigned to the latter.
It is not just yaw steering. Like for DSCOVR, even with excess performance, there was only a one-second launch window.
This makes perfect sense. Launches to the moon, or to L points, are trying to hit a point that is (almost) fixed in inertial space (at least, those targets move much more slowly than the Earth rotates). So the navigation solution depends on the time of launch, as illustrated in the diagram I include.

For the booster, all you change is azimuth, to go through the right insertion point. Then the second burn depends on the initial azimuth chosen, and hence on the launch time as well.
So if SpaceX has not implemented real-time trajectory mods, then they could not use non-instantaneous windows for either rendezvous, moon, or L2 missions.
For the booster, all you change is azimuth, to go through the right insertion point. Then the second burn depends on the initial azimuth chosen, and hence on the launch time as well.
No, the azimuth and first-stage flight stay the same. The second-stage burn accounts for all the differences.
For the booster, all you change is azimuth, to go through the right insertion point. Then the second burn depends on the initial azimuth chosen, and hence on the launch time as well.
No, the azimuth and first-stage flight stay the same. The second-stage burn accounts for all the differences.
That's not the normal method. Ranger changed the azimuth as a function of launch time - see the diagram I include. So did Apollo. See
Apollo lunar landing launch window: The controlling factors and constraints

For the Apollo lunar missions, daily launch windows required a range of launch azimuths; the longer the daily launch window, the larger the required range of launch azimuths.
This is because the insertion burn takes place *directly* opposite the desired target, by the nature of orbits. This point moves with respect to the Earth's surface, so you need a different azimuth to get there depending on the time of launch. This applies to the moon, L points, or planetary injections, since they stay relatively fixed as the Earth rotates.
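The spherical geometry behind this is easy to check numerically. The sketch below is illustrative only (the latitude, declination, and hour angles are made-up example values, not mission data): it finds the plane containing both the launch site and a fixed inertial target, and shows the required launch azimuth shifting as Earth's rotation changes the target's hour angle.

```python
# Illustration of azimuth vs. launch time (example values, not mission data):
# the ascent plane must contain the launch site at liftoff and a target
# direction fixed in inertial space; as Earth rotates the site, the plane,
# and hence the required launch azimuth, changes.
import numpy as np

def launch_azimuth_deg(site_lat_deg, target_dec_deg, hour_angle_deg):
    """Azimuth (deg east of north) of the plane through site and target."""
    phi = np.radians(site_lat_deg)
    dec = np.radians(target_dec_deg)
    ha = np.radians(hour_angle_deg)
    site = np.array([np.cos(phi), 0.0, np.sin(phi)])      # launch site
    target = np.array([np.cos(dec) * np.cos(ha),          # fixed inertial target
                       np.cos(dec) * np.sin(ha),
                       np.sin(dec)])
    n = np.cross(site, target)   # normal of the plane containing both
    v = np.cross(n, site)        # in-plane direction of motion at the site
    east = np.array([0.0, 1.0, 0.0])
    north = np.cross(site, east)
    return np.degrees(np.arctan2(np.dot(v, east), np.dot(v, north)))

# Earth rotates ~15 deg/hour, so a 2-hour window sweeps the hour angle
# ~30 deg, and the required azimuth moves along with it:
for ha in (60.0, 75.0, 90.0):
    print(f"hour angle {ha:5.1f} deg -> azimuth {launch_azimuth_deg(28.5, -20.0, ha):6.1f} deg")
```

For a target on the equator plane due east of a 28.5-degree site, the function returns 90 degrees (launch due east), as it should; off-plane targets give azimuths that drift with liftoff time, which is exactly the JPL handbook's point.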
This is also covered in JPL's
JPL PUBLICATION 82-43: Interplanetary Mission Design Handbook, Volume I, Part 2:
As the launch site partakes in the sidereal rotation of the Earth, the continuously changing ascent plane manifests itself in a monotonic increase of the launch azimuth, E_L, with lift-off time, t_L (or its angular counterpart, a_L, measured in the equator plane):
They go into lots of detail about this, including Figure 10 (launch geometry) and Figure 12 (launch time vs launch azimuth) and pages of discussion:
Since lift-off times are bounded by preselected, launch-site-dependent limiting values of launch azimuth E_L (e.g., 70 deg and 115 deg), each of the two declination contours thus contains a segment during which launch is permissible - "a launch window." The two segments on the plot do define the two available daily launch windows.
Of course, you *can* launch at a constant azimuth, and either do yaw steering on ascent (the Shuttle did this for Magellan) or do a plane change at insertion. This gets expensive fast, like all plane changes, and leads to smaller mission margins and shorter launch windows. You still have a navigation solution that depends on the time of launch, since the second stage burn always varies. And if the second stage computer supports this, and the second stage computer also directs the first stage (as it does in all cases I know of, except the Shuttle/IUS combination), then a time-variable azimuth makes sense.
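To put numbers on "expensive fast": an impulsive plane change of angle di at orbital speed v costs dv = 2 v sin(di/2). A quick back-of-envelope sketch (assuming a typical ~7.8 km/s LEO circular speed; the speed is an assumption for scale, not a mission value):

```python
# Back-of-envelope cost of a pure plane change: dv = 2 * v * sin(di / 2).
import math

def plane_change_dv(v_orbital_m_s, delta_i_deg):
    """Delta-v (m/s) for an impulsive plane change of delta_i degrees."""
    return 2.0 * v_orbital_m_s * math.sin(math.radians(delta_i_deg) / 2.0)

v_leo = 7800.0  # m/s, a typical LEO circular speed (assumed, for scale)
for di in (0.5, 1.0, 2.0, 5.0):
    print(f"{di:3.1f} deg plane change: ~{plane_change_dv(v_leo, di):5.0f} m/s")
```

Even a 1-degree correction costs roughly 136 m/s at LEO speeds, which is why flying the wrong plane and fixing it at insertion eats margin so quickly.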
That's not the normal method. Ranger changed the azimuth as a function of launch time - see the diagram I include. So did Apollo.
No, stop using 40-year-old data. That hasn't been the normal method for decades.
The JPL book was made outdated by Challenger.
MRO, MSL, Juno, MAVEN, O-Rex, PNH, etc. had windows of 30 minutes or more and flew the same first-stage trajectory.
MRO, MSL, Juno, MAVEN, O-Rex, PNH, etc. had windows of 30 minutes or more and flew the same first-stage trajectory.
LDCM was a pretty good example of trajectory variability through the window.
MRO, MSL, Juno, MAVEN, O-Rex, PNH, etc. had windows of 30 minutes or more and flew the same first-stage trajectory.
MSL had instantaneous windows, 5 minutes apart, all the way through the almost-2-hour launch window. Are you saying all of these flew the same azimuth? If so, why? It's definitely giving up performance. What's the corresponding benefit?