Anyway, not being familiar with the tracking networks available for IM (the DSN and the other one): can signals being downlinked be visible to more than one dish at the same time? Are these signals recorded in a form equivalent to the live signal at the antenna, such that essentially no signal degradation occurs (yes, zero is impossible, but hopefully the idea is clear)? Weeks from now, post-processing could be done. Superficially, combining two such signals from similarly sized dishes could provide 3 dB of increased signal and significantly improve the S/N ratio.
Yes, this can be done, and is done routinely in radio astronomy. Each station records on a wideband recorder, wide enough to contain all the signal modulations with all their differing Doppler shifts. Once all recordings are available they are summed, with appropriate and time-varying delays to account for Earth's rotation. The problem is finding the right delays: the known station positions will get you close, but cannot account for random delays like the ionosphere. It's immensely helpful if there is a common known source visible to all stations to initialize the timing search. Astronomers use a nearby quasar; IM-1 could potentially use a signal bounced off the Moon by an Earth station. Overall, summing is an expensive process carried out at a specialized facility and usually needs to be arranged well in advance. Each station needs a very accurate clock and a suitable recorder. But the main constraint is likely arranging time on a bunch of big radio telescopes all at once.
Simpler, real-time techniques can be used when the two or more dishes are close together. Then various uncertainties like Earth rotation, ionospheric delays, and Doppler shifts cancel, or nearly so, and you basically delay one signal by the right amount and add it to the other. This is what JPL does when they array two dishes at the same complex.
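For the curious, here's a toy numpy sketch of that delay-and-sum idea. Everything here is invented for illustration (sample counts, noise levels, the 37-sample offset); a real VLBI correlator is vastly more involved, but the core moves are the same: find the relative delay by cross-correlation, align, then sum so the signal adds coherently while the noise doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
true_delay = 37  # relative station delay in samples, unknown to the combiner

signal = np.sign(rng.standard_normal(n))                  # weak BPSK-like downlink
x1 = signal + 2.0 * rng.standard_normal(n)                # dish 1 recording
x2 = np.roll(signal, true_delay) + 2.0 * rng.standard_normal(n)  # dish 2, delayed

# Timing search: circular cross-correlation peaks at the relative delay.
corr = np.fft.ifft(np.fft.fft(x2) * np.conj(np.fft.fft(x1))).real
est_delay = int(np.argmax(corr))

# Align one stream, then sum: signal adds coherently, noise incoherently,
# giving ~3 dB of S/N improvement for two equal dishes.
combined = 0.5 * (x1 + np.roll(x2, -est_delay))
print(est_delay)  # prints 37
```

With equal dishes the combined noise power is half that of either single recording, which is exactly the 3 dB figure mentioned above.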
Also, supersampling or oversampling each bit time might enable additional processing somewhat analogous to Viterbi-style soft-decision decoding. In addition, correlation patterns could be developed to improve the likelihood of extracting data, since (at least in many space systems) a repetitive frame format is used, including sync bits, and in many cases expected bit values or expected ranges can be predicted. Also, for bit patterns that do not change, or change only within a very narrow range (such as a power supply voltage or a temperature), overlaying and syncing multiple frames spread over time (minutes, hours, days) should improve the signal-to-noise ratio. Each doubling of the number of frames of data could provide (in oversimplified theory) a 3 dB gain in S/N ratio.
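The overlay-and-sum idea for unchanging telemetry can be sketched in a few lines. The frame contents and noise level below are invented, and real telemetry would need frame synchronization first, but the 3 dB-per-doubling behavior falls right out:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = np.sign(rng.standard_normal(4096))   # a telemetry frame that never changes
sigma = 4.0                                  # per-sample noise standard deviation

def snr_db_after_averaging(n_frames):
    """Overlay n_frames noisy copies of the same frame and average them."""
    noisy = frame + sigma * rng.standard_normal((n_frames, frame.size))
    est = noisy.mean(axis=0)
    noise = est - frame
    return 10 * np.log10(np.mean(frame**2) / np.mean(noise**2))

# Doubling the number of frames halves the residual noise power.
gain = snr_db_after_averaging(8) - snr_db_after_averaging(4)
print(round(gain, 1))  # close to 3 dB
```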
Almost all these tricks are already studied (and used where helpful) in the field of error-correcting codes. Many of them closely approach the theoretical performance bounds given the signal to noise available. Your best bet is to sum the various sources to get the best signal-to-noise possible, then use the known error correction methods.
FWIW, it is my understanding that GPS signals are actually buried in the noise but are successfully dug out, in some cases for life-critical operations such as landing aircraft in low-visibility conditions.
Yes, but this is enabled because each receiver knows exactly what each satellite is sending (each has its own PRN sequence). The receiver does not need to recover the message, just find the delay that causes the received signal to best match the (known) message contents. Position is computed from the delays, not the message content. (There are also much slower, higher-SNR components that send unknown messages such as status and almanacs.)
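A minimal sketch of that known-code trick: the 1023-chip length echoes the GPS C/A code, but the code itself, the delay, and the noise level are invented here. Because the receiver knows the whole sequence, it can integrate over all 1023 chips and pull the delay out even when each individual sample is well below the noise:

```python
import numpy as np

rng = np.random.default_rng(2)
prn = np.sign(rng.standard_normal(1023))   # stand-in for a known PRN sequence
true_delay = 451                           # propagation delay in chips
# Received samples: per-sample SNR is about -9.5 dB, i.e. below the noise.
rx = np.roll(prn, true_delay) + 3.0 * rng.standard_normal(prn.size)

# Slide the known code against the received samples at every lag at once.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(prn))).real
est = int(np.argmax(corr))
print(est)  # recovers true_delay (451)
```

The correlation gain scales with the code length, which is how real receivers reach down tens of dB below the noise floor by integrating over many code periods.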
Finally, artificial intelligence has already extracted unexpected patterns from data. To AI, data is apparently just data; it does not care where it came from or what it represents. Depending on how much data is actually extracted in real time, this might or might not be worth a look post-mission to see if anything useful can be found.
Current techniques are quite close to the theoretical limits (which are quite solid), and AI is unlikely to help much.
Unlikely as it may be, and given that it apparently has not yet been activated, is there only one data rate for the science radio? The thought is: if the science radio is ever successfully activated, and without knowledge of the actual radiation pattern from whatever antenna is used, would a lower bit rate allow a better signal-to-noise ratio? On the other hand, the "better and/or correct" decision probably would have been to have only the one data rate, since adding multiple data rates brings complications: additional failure modes, more complex operations, etc., etc. Maybe you get the idea.
Yes, normal science radios offer a wide variety of rates for exactly the reasons you suggest, and they use them exactly as outlined: pick the highest rate that works under the circumstances. I don't know about IM-1, but the only reason I could see for them not to have multiple rates is that they are always at roughly the same distance from Earth. I suspect (but do not know) that they used some standard radio and have access to multiple rates.
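The rate-versus-margin trade is just link-budget arithmetic. A sketch with an invented carrier-to-noise density (not IM-1's actual link): for a fixed received C/N0, the energy per bit grows as the rate drops, so every halving of the data rate buys about 3 dB of Eb/N0 margin.

```python
import math

cn0_dbhz = 40.0  # received carrier-to-noise density, dB-Hz (invented)
rates = (8192, 4096, 2048, 1024)

# Eb/N0 (dB) = C/N0 (dB-Hz) - 10*log10(bit rate)
ebn0 = {r: cn0_dbhz - 10 * math.log10(r) for r in rates}
for r in rates:
    print(f"{r:5d} bps -> Eb/N0 = {ebn0[r]:6.2f} dB")
```

This is why radios carry a ladder of rates: as range or pointing degrades, operators step down the ladder until the decoder has enough margin.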
These are all sensible questions, with fairly well-known but quite technical solutions. I don't know your status in life, or what resources you have access to, but many technical schools offer courses in "Information theory", "Error-correcting codes", and so on that will answer most of your questions. If this is the way you think, you would enjoy these courses, and they are not bad for job prospects either. Just be aware that some math is required.
Largely agree. However, I do think that inadequate attention has been paid to depot failure modes:
1) What about failure that renders the depot's docking hardware unusable? Is there a plan to be able to fix these?
2) What about MMOD damage that strikes when the depot is nearly full?
3) What about docking failures that result in an explosion or breakup?
4) What about docking failures that leave the LSS stranded in LEO and boiling dry?
Most of these have mitigations. But the mitigations aren't an organic part of the architecture. The depot is an asset that progressively increases in value (as it fills) without a corresponding decrease in risk.
Each of these cases is a loss of mission but not a loss of crew. You end up with a derelict Starship (either Depot or HLS), and you just write it off. Presumably this happens very rarely. Eventually, we will probably need a way to de-orbit a large derelict spacecraft. I suspect the probability of an explosion that produces a large number of objects is very small.
reminder: we are not discussing the crewed mission segment here, because that segment is effectively identical with Dr. Griffin's scheme.
Has there been any “official” update given since Friday’s press conference? The last daily update on the IM web page was Friday, and I don’t find anything on the NASA pages either.
I’m particularly curious about LRO flyovers. Weren’t those supposed to happen yesterday?
I know, private company; not entitled to answers; etc., etc. But NASA missions were on there, and NASA assets are being used to communicate with it, and a NASA spacecraft should have done multiple flyovers of the landing site by now. I would think that a NASA update is more than appropriate.
Apologies if an update was given in another thread (either public or L2) and I missed it. If so, a pointer to it would be greatly appreciated.
The thing that still screams out is: it fell over because it had a high centre of mass and a fairly narrow leg span.
I understand that they designed it that way for the reasons/space constraints they mentioned, but... it still fell over and compromised the mission somewhat.
It reminds me of how my own students will often put a lot of effort into explaining to me why they did something wrong.
I explain to them that trying to justify an error is a way of at least partially giving it validation - trying to excuse it. And that the real way forward is to accept it as a mistake and find an alternative.
According to Jonathan McDowell, 392 total satellites over the lifetime of Starlink are "down" and 5480 are "on orbit". That gives us 5872 total satellites, of which 6.68% have been deorbited. He classifies 2.2% of total Starlinks as "early deorbit", 3.7% as "disposal complete", and 0.8% as "reentry after fail". I haven't done the math to try and figure out what the trend has been over time, but that's already lower than "9%", so I'm not sure what source you're using. I'm not going to get into classifying all deorbited satellites as "failed", which I believe is being unreasonably pessimistic.
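A quick check of the arithmetic in those figures (numbers as quoted from McDowell's stats, current only as of this post):

```python
down, on_orbit = 392, 5480
total = down + on_orbit
pct_down = 100 * down / total
print(total)                    # 5872 total satellites
print(round(pct_down, 2))       # 6.68 percent down

# McDowell's breakdown of that fraction: early deorbit + disposal + fail
print(round(2.2 + 3.7 + 0.8, 1))  # 6.7, consistent with the 6.68% above
```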
[EDIT] Looking at stats over time, it's clear that the deorbit percentage is heavily weighted towards early shells that have seen much higher rates of disposal, which makes sense: