Author Topic: SpaceX Falcon 9 v1.1 CRS-3 Splashdown Video Repair Task Thread  (Read 1201549 times)

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
Oh, I'm sure they have it quite well-documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's gold, anything else is just gravy.
If you want to go that way, we don't need a video. And yet we have one.

Exactly. We don't *need* a video. Having video and needing video are separate things.

It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll have eventually.
What trouble? Swap the camera for a mono version? Or reconfigure the camera to transmit in grayscale? You make it sound like mono cameras are troublesome, but in fact they are EXACTLY the same! Manufacturers usually supply the same camera in mono and color versions.

Did you read what I wrote about the bandwidth impact of monochrome video, and my reply to your point about camera noise? I'm not sure why you think that switching to grayscale video would simply sort out all the transmission issues, which are ultimately the only issues we had here. It's not color. It's not camera noise. It's packet loss.

Offline Lars_J

  • Senior Member
  • *****
  • Posts: 6160
  • California
  • Liked: 677
  • Likes Given: 195
Ugordan is right. Switching to monochrome would not save much bandwidth (if any). Don't argue if you don't understand how chroma (color) is sub-sampled compared to luma (brightness).

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
Ugordan is right. Switching to monochrome would not save much bandwidth (if any). Don't argue if you don't understand how chroma (color) is sub-sampled compared to luma (brightness).
I do understand better than you think. I've designed scientific grade CCD cameras :)

And I work with colorimeters and spectrometers. Part of my day's work is dealing with YCbCr, XYZ, xyY, CIELab, etc.

My point is that any useless information should be discarded to save bandwidth. 12.5% might be enough to save a frame.

EDIT: the codec used can have chroma subsampling of 4:2:2 or 4:4:4, meaning at best a 2:1 ratio between luma and chroma. So it is more likely that the color information is 33% of the video bandwidth, not counting the compression of similarities between frames. Where did you get the 12.5%?
« Last Edit: 05/01/2014 04:28 pm by IRobot »

Offline R7

  • Propulsophile
  • Senior Member
  • *****
  • Posts: 2725
    • Don't worry.. we can still be fans of OSC and SNC
  • Liked: 992
  • Likes Given: 668
EDIT: the codec used can have chroma subsampling of 4:2:2 or 4:4:4, meaning at best a 2:1 ratio between luma and chroma. So it is more likely that the color information is 33% of the video bandwidth, not counting the compression of similarities between frames. Where did you get the 12.5%?

Wouldn't 4:2:2 mean a 1:1 ratio between luma and chroma bandwidths (= 8 luma samples and 4 Cb + 4 Cr samples)?

The 12.5% assumes (non-MPEG-4-standard?) 4x4 subsampling, a kind of "4:1:0:0:0".
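To put numbers on the schemes being debated, here is a quick tabulation of how much of the raw (pre-compression) bitstream each one spends on chroma. A Python sketch of my own: sample counts are per 4x2 block in J:a:b notation, and the non-standard 4x4 case is counted per 4x4 block.

```python
# Share of raw bits spent on chroma under common subsampling schemes,
# before compression. Counts are per 4x2 pixel block (J:a:b notation);
# the non-standard "4x4" decimation is counted per 4x4 block.
modes = {
    "4:4:4": (8, 8 + 8),   # 8 luma, 8 Cb + 8 Cr
    "4:2:2": (8, 4 + 4),   # 8 luma, 4 Cb + 4 Cr
    "4:2:0": (8, 2 + 2),   # 8 luma, 2 Cb + 2 Cr
    "4x4":   (16, 1 + 1),  # 16 luma, 1 Cb + 1 Cr
}
for name, (luma, chroma) in modes.items():
    print(f"{name}: chroma is {chroma / (luma + chroma):.1%} of the total, "
          f"{chroma / luma:.1%} of luma")
```

The 12.5% figure matches chroma relative to luma under 4x4 decimation, while 4:2:0 gives the 50%-more-than-monochrome total mentioned below.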

Btw anyone know what kind of error correction methods the RF-link had? Some sort of Reed-Solomon encapsulation layer?
« Last Edit: 05/01/2014 04:55 pm by R7 »
AD·ASTRA·ASTRORVM·GRATIA

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
I just checked the *.ts SpaceX posted; the format is 4:2:0, so that brings the total number of bits to encode to 50% more than monochrome video, and I stand corrected. I must have been thinking of older codecs like Indeo that allowed chroma subsampling as aggressive as 4x4.

Offline rickyramjet

  • Full Member
  • *
  • Posts: 114
  • Killeen, TX
  • Liked: 106
  • Likes Given: 78
The problem is the noisy signal, not the codec.  The reason for the noisy signal is signal strength, pure and simple, worsened by the distance from the rocket and the bad weather.  The easiest solution is to install a more powerful transmitter.

Offline Adaptation

  • Full Member
  • *
  • Posts: 160
  • Liked: 40
  • Likes Given: 38
Quote
First we took a pass on the data to align every MPEG packet on a 188 byte boundary and set the packet start byte to 0x47.

I'm not seeing that.  For instance, location 0x26EC is divisible by 188 (0xBC), but its value is 0x4F, not 0x47.
I concur. In the repair1.ts file the sync bytes have not been "fixed" to 0x47. Maybe they uploaded the wrong file? Not that this does much: the rest of the header is usually broken as well. Also, in raw.ts there are 5 places where I had to add exactly 56 bytes for the headers to align on the 188-byte grid.

Anyway. Back to trying to get a little more life out of this video.  ;)
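For reference, a rough sketch of what detecting breaks in that 188-byte grid can look like. The file name is assumed, and a real pass should verify several consecutive sync bytes before trusting a candidate, since 0x47 also occurs inside payloads:

```python
# Walk a damaged transport stream and report where the 188-byte sync
# grid breaks, i.e. where bytes were likely dropped or inserted.
PACKET = 188
SYNC = 0x47

with open("raw.ts", "rb") as f:
    data = f.read()

off = 0
while off < len(data) - PACKET:
    if data[off] == SYNC:
        off += PACKET  # grid holds, step to the next packet
        continue
    # Grid broken: search forward for the next sync byte candidate.
    nxt = data.find(bytes([SYNC]), off)
    if nxt == -1:
        break
    print(f"grid breaks at {off:#x}, next 0x47 at {nxt:#x} "
          f"({nxt - off} bytes of slip)")
    off = nxt
```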

Well, some bits in the header should be restorable as well.

Here is a prototype for the four header bytes.
0100 0111  (this is the G, or 0x47)
0Y0Y YYYY
YYYY YYYY
00X1 XXXX

Where the 1's and 0's are values that should be set regardless of the packet's contents. The X's contain data that may be determined by analyzing the headers before and after this packet. The Y's contain data that could possibly be determined by analyzing the data within the packet; knowing the identifiers associated with the codec, several invalid values could be excluded.

Without doing much analysis, we can make some bitwise filters for the second and fourth bytes that fix five bits.

0101 1111  to be ANDed with the second byte (this marks the packet as not to be ignored and not given special priority)
0011 1111  to be ANDed with the fourth byte (this declares the stream to be unencrypted)
0001 0000  to be ORed with the fourth byte (this sets "packet contains payload" to true, which is possibly too big an assumption)

http://en.wikipedia.org/wiki/MPEG_transport_stream#Packet
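Put together, applying those filters mechanically might look like the sketch below. The file names are made up, and it assumes the packets are already aligned on the 188-byte grid:

```python
# Apply the bitwise filters above to every 188-byte packet.
PACKET = 188

with open("raw.ts", "rb") as f:
    data = bytearray(f.read())

for off in range(0, len(data) - PACKET + 1, PACKET):
    data[off] = 0x47             # restore the sync byte (0100 0111)
    data[off + 1] &= 0b01011111  # clear error indicator and priority
    data[off + 3] &= 0b00111111  # clear scrambling control (unencrypted)
    data[off + 3] |= 0b00010000  # mark payload present (an assumption)

with open("fixed.ts", "wb") as f:
    f.write(data)
```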
« Last Edit: 05/01/2014 06:06 pm by Adaptation »

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
The problem is the noisy signal, not the codec.  The reason for the noisy signal is signal strength, pure and simple, worsened by the distance from the rocket and the bad weather.  The easiest solution is to install a more powerful transmitter.
More or less. If you use a codec that reduces the information by 50% (for example by dropping color) while keeping the frame rate at 15 fps, you can actually send every frame twice (or send each transmission packet twice). But as that would require a deep software change, the other option would be to double the number of frames to 30 fps: differences between frames would then be halved, increasing the compression ratio.

I'm no codec expert, so there are probably better ways to spend bandwidth on reducing transmission noise (data corruption). Still, a change in the codec is probably a lot easier than replacing the transmitter, and a more powerful transmitter also uses more energy.

Offline Lars_J

  • Senior Member
  • *****
  • Posts: 6160
  • California
  • Liked: 677
  • Likes Given: 195
Someone on YouTube made a decent effort at cleaning it up:

I'm not sure how accurate it is, and I don't think the legs were extended at the beginning of the clip? Nonetheless it looks neat.

« Last Edit: 05/01/2014 06:51 pm by Lars_J »

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
That guy took the rocket image from the best frame and superimposed it on all the frames... it offers a visual cue, but that's all.

Offline Adaptation

  • Full Member
  • *
  • Posts: 160
  • Liked: 40
  • Likes Given: 38
the other option would be to double the number of frames to 30 fps: differences between frames would then be halved, increasing the compression ratio.

Doubling the frame rate would do little to solve the problem with a modern codec; you would need a full-frame codec like MJPEG for that to really work. You could reduce the threshold for sending a keyframe, or have keyframes sent twice. As they are using a fixed-bit-rate stream, there may be room for some of these tricks.

The best thing they could do is know better where the rocket will come down and have adequate downlink capability there. 

Higher transmit power is nice, but it only gets you so far: with inverse-square losses, doubling the power buys you only about 40% more range, and you can only double the power so many times before the strategy gets out of hand. Using, for instance, a 28 dBi directional antenna on the receiver gives you the same result as multiplying your transmit power by roughly 600. The only problem is you have to point it very accurately: if you're off by just 5° you may only get half of that, and it drops steeply from there; a few more degrees and it's the same as sticking blinders on your receiver.
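To put numbers on that (a back-of-the-envelope sketch of my own, assuming free-space, inverse-square losses):

```python
# Antenna gain in dBi converts to a linear power multiplier, while
# range under inverse-square loss scales with the square root of power.
import math

def power_multiplier(gain_dbi):
    """Linear power multiplier equivalent to a gain in dBi."""
    return 10 ** (gain_dbi / 10)

def range_multiplier(power_mult):
    """Extra range bought by a power multiplier (inverse-square loss)."""
    return math.sqrt(power_mult)

print(power_multiplier(28))  # ~631x effective power from a 28 dBi dish
print(range_multiplier(2))   # ~1.41x range from doubling transmit power
```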

AFAIK this launch did not aggressively attempt a landing at a precise spot, because extra velocity was given to Dragon to assure the highest possible margins for mission success.
« Last Edit: 05/01/2014 07:03 pm by Adaptation »

Offline AJA

  • Full Member
  • ****
  • Posts: 889
  • Per Aspera Ad Ares, Per Aspera Ad Astra
  • India
  • Liked: 146
  • Likes Given: 212
@AJA why a red filter? Maybe a luminance filter with an IR cut, but I can't understand the reason to use a red filter.

Falcon's white. The legs, too, have a white border (scroll to find input~2's YT screenshot ITT). The ocean's blue. That means the RGB luminances of Falcon are (say) r1, g1, b1, whereas the ocean's are ~0, ~g2 (bodies of water do look greenish sometimes, don't they? Plus, plankton), b2. The difference |b2-b1| is, I would assume, the smallest of the three pairs and doesn't really help in establishing where the legs end and the water begins (in the image data). |g2-g1| would probably be bigger than the blue-channel difference, but the largest by far would be |r1-0| (I'm assuming the ocean is black in the red channel, or very close to it at least). So this allows you to differentiate.

While it may seem useless, like a really poor version of a Grasshopper video if you can't tell whether Falcon is moving up and down in response to the waves... I'm counting on the fact that as waves break, the surf is going to be white... and will be visible in the image as well.

They may still want to cut out IR, because the water might be radiating, and once the engine lights it'll probably saturate the sensor.

I don't think this would require much modification at all. Unless they're using some special space-qualified camera, with a custom chip, a custom form factor, etc., can't you get a black-and-white camera and stick a red filter in front of it? If they keep the same data payload, they can trade two channels for more dynamic range...

Offline MP99

12.5% is actually a lot. When receiving such an error-prone signal, it makes a lot of difference. It also reduces transmission power requirements (for the same frame rate), therefore more power is available for transmission, therefore a better S/N ratio.

Maybe. *If* that's the only telemetry sent. Who's to say the video wasn't multiplexed along with all the other high-rate vehicle telemetry, so that the 12.5% for video is more like noise in the total bandwidth budget?

Also, true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.

Seriously? With the codec quality settings they're using and the dirt deposited on the camera on the way down, you're worried about camera noise?

Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?
Because the way up is quite well documented. The way down is not.

Oh, I'm sure they have it quite well-documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's gold, anything else is just gravy.

It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll have eventually.
I worked on a sort of "telemetry" system; we divided the data into what's important (10%, in this case sensor data) and what's less important (90%, in this case the video feed). Both were CRC-ed, but the first got retransmitted if corrupted or lost; the second was not. Sort of TCP and UDP.

On a worsening channel, retransmissions of sensor data occupied more and more bits, until no bits were available for video.

I guess SpaceX does the same, so if we see SOME video, it means ALL the sensor data was received without gaps.

You guys are worried about how many bits the chroma component takes, when some of the analysis says they included substantial fill-in packets to bump the data rate up to a fixed-rate transmission??

If they had infinite time to work on the transmission system, it would have been nice to optimise it with lots of redundancy data instead of fill-in "0xffffffff" packets. (Though maybe those "ffff"s make it easier to re-synchronise the stream once major errors start to bite??)
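For what it's worth, if the fill-in is standard transport-stream stuffing it is easy to quantify, since null packets carry PID 0x1FFF. A minimal sketch (file name assumed, packets assumed grid-aligned):

```python
# Count MPEG-TS null/stuffing packets (PID 0x1FFF) in a capture.
PACKET = 188

with open("raw.ts", "rb") as f:
    data = f.read()

total = nulls = 0
for off in range(0, len(data) - PACKET + 1, PACKET):
    pkt = data[off:off + PACKET]
    if pkt[0] != 0x47:
        continue  # skip packets that lost their sync byte
    total += 1
    pid = ((pkt[1] & 0x1F) << 8) | pkt[2]  # 13-bit packet identifier
    if pid == 0x1FFF:
        nulls += 1

print(f"{nulls} of {total} intact packets are null stuffing")
```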

But, I suspect this is more of an off-the-shelf system that was stymied by weather conditions on the day.

Next launch / splashdown should have an easier time of it.

cheers, Martin

Offline eeergo

Someone on Youtube did a decent effort of cleaning it up:

I'm not sure how accurate it is, and I don't think the legs were extended in the beginning of the clip? Nonetheless it looks neat.



The overlay this person made is actually quite misleading: there are some misplaced pixels at 0:14-0:15 from the engine exhaust that appear as yellow artifacts on the left of the image. In the original video they were not so apparent, since there was a lot of noise, but here the cleanup takes that context away and they get quite distracting. It also makes the splashdown and subsequent tipping-over of the stage very confusing to watch, since the legs should be submerged.
-DaviD-

Offline Lars_J

  • Senior Member
  • *****
  • Posts: 6160
  • California
  • Liked: 677
  • Likes Given: 195
Yes, the overlay works better for the first part of the video.

Offline michaelni

  • Member
  • Posts: 28
  • Liked: 23
  • Likes Given: 0
The video seems to be simple-profile, level-3 MPEG-4 video in an MPEG TS.
It seems none of the error-resilience features of MPEG-4 were used when encoding it, which is a pity. Had slices been used, the decoder could resume decoding a frame at the next slice start; had data partitioning been used, the more important low-resolution information and motion vectors would have been coded first in each slice, making errors less likely to damage them; and had RVLCs (reversible variable-length codes) been used, slices could have been decoded from both the start and the end, again limiting the impact of bit errors.

I know nothing about how the video was generated or how it was transmitted, but if there was some FEC in there, then it should in principle be possible to re-run FEC decoding after manually fixing up all the MPEG-TS and MPEG-4 ES headers. As such manual fixing would decrease the errors, the FEC would have fewer errors to deal with and might, in a few rare cases, be able to fix a few more.
Also, if some kind of CRCs have been used, they too can correct bit errors, as long as the number of errors each CRC has to correct is sufficiently small; the exact number that can be corrected this way depends on the packet size and the CRC polynomial being used.
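To illustrate the CRC idea in its simplest form, single-bit errors can be found by brute force. A sketch, with CRC-32 via zlib as an illustrative choice rather than whatever polynomial the downlink actually used:

```python
# Brute-force CRC correction for the single-bit-error case: flip each
# bit in turn and keep the version whose CRC matches.
import zlib

def fix_single_bit_error(payload, expected_crc):
    if zlib.crc32(payload) == expected_crc:
        return payload                 # packet already consistent
    buf = bytearray(payload)
    for i in range(len(buf) * 8):
        buf[i // 8] ^= 1 << (i % 8)    # flip bit i
        if zlib.crc32(bytes(buf)) == expected_crc:
            return bytes(buf)          # found the corrupted bit
        buf[i // 8] ^= 1 << (i % 8)    # restore it and keep searching
    return None                        # more than one bit flipped
```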

Offline arnezami

  • Full Member
  • **
  • Posts: 285
  • Liked: 267
  • Likes Given: 378
Yeah. It's a real challenge. Pretty stuck here.

But I still got a few ideas I want to try...

Offline SVBarnard

  • Member
  • Posts: 91
  • USA
  • Liked: 17
  • Likes Given: 2
Can someone please explain to me why SpaceX still hasn't released the footage they got from their airplane? Why are they being so secretive about it? I mean, seeing is believing, so why not just release the footage and prove to everyone in the world that they really accomplished such an unprecedented feat?

I mean, they did actually record it from their airplane, right?

Offline luinil

The airplane might not have been close enough to get video.

Remember, the weather was pretty bad; NASA declined to send their plane.

Offline Lars_J

  • Senior Member
  • *****
  • Posts: 6160
  • California
  • Liked: 677
  • Likes Given: 195
Can someone please explain to me why SpaceX still hasn't released the footage they got from their airplane? Why are they being so secretive about it? I mean, seeing is believing, so why not just release the footage and prove to everyone in the world that they really accomplished such an unprecedented feat?

I mean, they did actually record it from their airplane, right?

I suspect we will see more footage when SpaceX releases their usual mission highlights video.
