Author Topic: SpaceX Falcon 9 v1.1 CRS-3 Splashdown Video Repair Task Thread  (Read 1201627 times)

Offline Lar

  • Fan boy at large
  • Global Moderator
  • Senior Member
  • *****
  • Posts: 13463
  • Saw Gemini live on TV
  • A large LEGO storage facility ... in Michigan
  • Liked: 11864
  • Likes Given: 11086
Great thread. If one of you guys can help, it will be worth your time and effort. This is an Elon call to arms.

Can you get the specs on the camera used, including the software load (and, it sounds like, especially the codec)?
"I think it would be great to be born on Earth and to die on Mars. Just hopefully not at the point of impact." -Elon Musk
"We're a little bit like the dog who caught the bus" - Musk after CRS-8 S1 successfully landed on ASDS OCISLY

Online docmordrid

  • Senior Member
  • *****
  • Posts: 6334
  • Michigan
  • Liked: 4207
  • Likes Given: 2
Yes! We need config and codec specs like GOP length and other MPEG(-4?) parameters.

And I agree: using monochromatic analog video would solve a LOT of problems where signal strength and interference are an issue. Newer isn't always better.
« Last Edit: 05/01/2014 01:41 am by docmordrid »
DM

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
The first thing I would do to improve video quality on future flights would be to encode a monochrome image. Color is useless for this type of image.

MPEG already operates on luminance/chrominance components, and most of the allocated bits go to the luminance (i.e. monochrome) component. I'm not sure what you're suggesting; it wouldn't do much to improve video quality, and color actually helps a lot when you have objects of similar brightness in the scene.

Offline neilh

  • Senior Member
  • *****
  • Posts: 2365
  • Pasadena, CA
  • Liked: 46
  • Likes Given: 149
A "SpaceX team member" (same account they used for an AMA last year) posted the following comment

http://www.reddit.com/r/spacex/comments/2bsn2/first_stage_landing_video/ch6f8io
Quote
spacexdevtty:
Hey all, SpaceX team here. Just wanted to answer some frequently asked questions we've been getting:
Q: Why is the video so bad? A: This was recorded over a very lossy RF link.
Q: Why release the video? A: We've had some success here (and with a little outside help as well, see http://aeroquartet.com/ ) manually stitching the video back together, but it's time consuming work. We don't expect the video to be 100% recoverable but we're hoping folks out there with more MPEG expertise than we have can provide assistance recovering more of the video.
Q: What codec? Settings? A: MPEG 4 Part 2, P/I 15, 15fps, NTSC, fixed bitrate
Q: How did you create repair1.ts? A: This was a joint effort between us and an outside consultant. First we took a pass on the data to align every MPEG packet on a 188 byte boundary and set the packet start byte to 0x47. Then we identified blocks within keyframes that contain bit errors, and manually flipped bits in those corrupt blocks to see if that recovers more of the image.
Q: What is a TS file? A: http://lmgtfy.com/?q=MPEG+TS
Feel free to post more questions here, we'll try and respond. We really appreciate all the help and great ideas! Thank you!!
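
For anyone who wants to play along at home, a minimal sketch of that first realignment pass might look like this in Python (raw.ts and the output name are placeholders; this is not SpaceX's actual tool):

Code:
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47   # every MPEG transport stream packet should start with this

# read the whole capture as one flat byte buffer (filename is an assumption)
with open("raw.ts", "rb") as f:
    data = bytearray(f.read())

fixed = 0
for offset in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
    if data[offset] != SYNC_BYTE:
        data[offset] = SYNC_BYTE   # force the sync byte; the rest of the
        fixed += 1                 # header may of course still be corrupt

with open("repair_guess.ts", "wb") as f:
    f.write(data)

print("packets touched:", fixed, "of", len(data) // TS_PACKET_SIZE)

For experiments, a test stream with roughly the stated settings (MPEG-4 Part 2, GOP 15, 15 fps) can be generated with something like ffmpeg -i src.mp4 -c:v mpeg4 -g 15 -r 15 -f mpegts test.ts; that's my guess at the mapping, not their exact encoder configuration.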
« Last Edit: 05/01/2014 05:04 pm by Chris Bergin »
Someone is wrong on the Internet.
http://xkcd.com/386/

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
The first thing I would do to improve video quality on future flights would be to encode a monochrome image. Color is useless for this type of image.

MPEG already operates on luminance/chrominance components and most of the bits allocated are for luminance (i.e. monochrome) component. I'm not sure what you're suggesting, it wouldn't do much for improving video quality and color actually helps a lot when you have objects with similar brightness in the scene.
I think we would all be happy with objects of similar brightness, if we could see them. I wouldn't mind the whole rocket appearing at the same brightness as long as I could see the flame, legs and ocean.

So color is useless here.

Also, slight color noise introduces a lot of extra information between frames.

Offline Adaptation

  • Full Member
  • *
  • Posts: 160
  • Liked: 40
  • Likes Given: 38
Quote
First we took a pass on the data to align every MPEG packet on a 188 byte boundary and set the packet start byte to 0x47.

I'm not seeing that. For instance, offset 0x26EC is divisible by 188 (0xBC), but the byte there is 0x4F, not 0x47.
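
Anyone can check this themselves with a few lines of Python against the posted file:

Code:
with open("repair1.ts", "rb") as f:   # the file as posted in the thread
    data = f.read()

# 0x26EC = 9964 = 53 * 188, so it should be the start of a packet
print("byte at 0x26EC:", hex(data[0x26EC]))

bad = [off for off in range(0, len(data) - 187, 188) if data[off] != 0x47]
print(len(bad), "of", len(data) // 188, "packet slots lack the 0x47 sync byte")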

Online docmordrid

  • Senior Member
  • *****
  • Posts: 6334
  • Michigan
  • Liked: 4207
  • Likes Given: 2
Color video, via chroma subsampling, also reduces apparent resolution. An example is NTSC, where each color sample is spread across 2 horizontal luminance pixels: you end up with ½ the horizontal chroma resolution and full vertical resolution.
DM

Offline arnezami

  • Full Member
  • **
  • Posts: 285
  • Liked: 267
  • Likes Given: 378
Quote
First we took a pass on the data to align every MPEG packet on a 188 byte boundary and set the packet start byte to 0x47.

I'm not seeing that. For instance, offset 0x26EC is divisible by 188 (0xBC), but the byte there is 0x4F, not 0x47.
I concur. In the repair1.ts file the sync bytes have not been "fixed" to 0x47. Maybe they uploaded the wrong file? Not that this would do much: the rest of the header is usually broken as well. Also, in raw.ts there are 5 places where I had to add exactly 56 bytes for the headers to realign on the 188-byte grid.
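
A rough way to find those drift points (a sketch of the idea, not the exact script I used) is to score how many of the next expected sync positions actually hold 0x47, and re-search within one packet length when the score collapses:

Code:
def sync_score(data, start, n=20):
    # fraction of the next n expected sync positions holding 0x47
    hits = sum(1 for k in range(n)
               if start + 188 * k < len(data) and data[start + 188 * k] == 0x47)
    return hits / n

with open("raw.ts", "rb") as f:
    data = f.read()

pos = 0
while pos + 188 * 20 < len(data):
    if sync_score(data, pos) < 0.5:   # lost the 188-byte grid here
        best = max(range(pos, pos + 188), key=lambda p: sync_score(data, p))
        print("grid drift near %s, realigned %+d bytes" % (hex(pos), best - pos))
        pos = best
    pos += 188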

Anyway. Back to trying to get a little more life out of this video.  ;)

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
I think we would all be happy with objects of similar brightness, if we could see them. I wouldn't mind the whole rocket appearing at the same brightness as long as I could see the flame, legs and ocean.

So color is useless here.

You're not getting me. When everything is the same brightness, you have no idea what you're looking at, or where one object ends and the next one begins. This is where color is important: to distinguish between different materials.

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
Color video, via chroma subsampling, also reduces apparent resolution. An example is NTSC, where each color sample is spread across 2 horizontal luminance pixels: you end up with ½ the horizontal chroma resolution and full vertical resolution.

Chroma subsampling is precisely why I said luminance takes the most bits for encoding, because the 2 chroma channels are reduced in resolution by a factor of 4 or more. Luminance is not subsampled, unless you tell the encoder to do that for some reason.

The upshot of all this is that having color as additional information in the stream really doesn't have as big an impact as some would like to believe. It's not like it's 3 full red/green/blue frames encoded and 3x the bandwidth of a single monochrome image.
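
To put rough numbers on it (my arithmetic, assuming the common 4:2:0 scheme and a 720x480 frame):

Code:
w, h = 720, 480
luma = w * h                      # luminance plane, never subsampled
mono = luma                       # monochrome: luma only
yuv420 = luma + 2 * (luma // 4)   # 4:2:0: each chroma plane is 1/4 size
rgb = 3 * luma                    # three full planes, the naive fear

print("mono  :", mono)     # 345600 samples per frame
print("4:2:0 :", yuv420)   # 518400 -> +50% over mono, not +200%
print("RGB   :", rgb)      # 1036800

And after DCT and quantization, the share of actual bits spent on chroma is typically smaller still, since chroma planes tend to be smooth.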
« Last Edit: 05/01/2014 11:29 am by ugordan »

Offline input~2

  • Moderator
  • Global Moderator
  • Senior Member
  • *****
  • Posts: 6810
  • Liked: 1540
  • Likes Given: 567
This frame at 14 sec. seems interesting

Offline AJA

  • Full Member
  • ****
  • Posts: 889
  • Per Aspera Ad Ares, Per Aspera Ad Astra
  • India
  • Liked: 146
  • Likes Given: 212
Ok... let me throw this out there: would having the telemetry bitstream, as opposed to only the video bitstream, help? I'm thinking along the lines of whether additional error-correction coding etc. would introduce errors in the TS headers, or in the error-correction code of the video stream. Somewhat like a frame-shift mutation, or an insertion/deletion mutation.


Color video, via chroma subsampling, also reduces apparent resolution. An example is NTSC, where each color sample is spread across 2 horizontal luminance pixels: you end up with ½ the horizontal chroma resolution and full vertical resolution.

Chroma subsampling is precisely why I said luminance takes the most bits for encoding, because the 2 chroma channels are reduced in resolution by a factor of 4 or more. Luminance is not subsampled, unless you tell the encoder to do that for some reason.

The upshot of all this is that having color as additional information in the stream really doesn't have as big an impact as some would like to believe. It's not like it's 3 full red/green/blue frames encoded and 3x the bandwidth of a single monochrome image.


How about a monochrome image through a red filter? Reduced data (although not a 3x reduction, as you point out), without reducing the ability to discern boundaries. After all, differential luminance in one channel is what contributes to discernibility in the composite. The ocean would be dark, while the surf kicked up (being white) would still give you a sense of "sea level". You'd probably be able to write some compression code that wouldn't bother transmitting pixels below a certain black value anyway. Plus, we know the colour of the ocean and the colour of Falcon's legs to a good degree... so as long as you took the channel with the highest variance, you'd be able to reconstruct a fairly decent "false colour" image?


I still think they should put a hundred memory cards inside small, sealed, brightly painted empty plastic boxes (only a data cable coming out), and stick them with a strong but water-soluble adhesive to the inner wall of the first stage-second stage interface. Then offer a monetary reward to the local Florida fishing community for whoever returns the most boxes :P


Or simply have a drone fly much closer to the stage than would be allowed for manned airborne assets.

Offline Sohl

  • Full Member
  • **
  • Posts: 298
  • Liked: 131
  • Likes Given: 451
There are a lot of interesting ideas presented here for getting better video data, but the next scheduled flight is probably too soon to make any changes on the vehicle side. Perhaps they can do better with aircraft or ship assets positioned for signal acquisition (especially if the weather cooperates). Later on, it should matter less, as they get close to (and hopefully fully succeed at) a land landing.

But let's hope for calm seas on this next flight!   ;)

Offline mmeijeri

  • Senior Member
  • *****
  • Posts: 7772
  • Martijn Meijering
  • NL
  • Liked: 397
  • Likes Given: 822
I still think they should put a hundred memory cards inside small, sealed, brightly painted empty plastic boxes (only a data cable coming out), and stick them with a strong but water-soluble adhesive to the inner wall of the first stage-second stage interface. Then offer a monetary reward to the local Florida fishing community for whoever returns the most boxes :P

They had what they called "Talon pods" on earlier flights where they tried to recover v1.0 first stages. I haven't heard anything about that this time.
Pro-tip: you don't have to be a jerk if someone doesn't agree with your theories

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
How about a monochrome image through a red filter? Reduced data (although not a 3x reduction, as you point out), without reducing the ability to discern boundaries.

But why bother? Assuming 4x4-subsampled chrominance channels, color video only has 12.5% more bits to encode than the same video with only luminance. Given how bad the transmission losses were, that wouldn't have made any difference whatsoever.
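
The 12.5% follows directly from that assumption (my arithmetic):

Code:
luma = 1.0
chroma = 2 * (luma / 16)   # two chroma planes, each 4x4-subsampled to 1/16 size
print("color overhead: %.1f%%" % (100 * chroma / luma))   # 12.5%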

Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown? The end goal is recovering the stage, not getting better video of a landing stage that ends up being lost anyway. A better approach is to just record the stream onboard for later playback, and even that wouldn't be needed for a land boostback, where the stage doesn't go over the horizon from primary range tracking.

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
How about a monochrome image through a red filter? Reduced data (although not a 3x reduction, as you point out), without reducing the ability to discern boundaries.

But why bother? Assuming 4x4-subsampled chrominance channels, color video only has 12.5% more bits to encode than the same video with only luminance. Given how bad the transmission losses were, that wouldn't have made any difference whatsoever.
@AJA why a red filter? Maybe a luminance filter with an IR cut, but I can't see the reason to use a red filter.

12.5% is actually a lot. When receiving such an error-prone signal, it makes a real difference. It also reduces the required bitrate (for the same frame rate), leaving more transmit energy per bit and therefore a better S/N ratio.
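
For scale (my arithmetic): at fixed transmit power, cutting the bitrate by 12.5% buys about half a dB of energy per bit:

Code:
import math
# energy-per-bit gain from sending 1/1.125 as many bits at the same TX power
print("%.2f dB" % (10 * math.log10(1.125)))   # ~0.51 dB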

Also, true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.

Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?
Because the way up is quite well documented. The way down is not.
« Last Edit: 05/01/2014 02:20 pm by IRobot »

Offline ugordan

  • Senior Member
  • *****
  • Posts: 8520
    • My mainly Cassini image gallery
  • Liked: 3543
  • Likes Given: 759
12.5% is actually a lot. When receiving such an error-prone signal, it makes a real difference. It also reduces the required bitrate (for the same frame rate), leaving more transmit energy per bit and therefore a better S/N ratio.

Maybe. *If* that's the only telemetry sent. Who's to say the video wasn't multiplexed along with all the other high-rate vehicle telemetry, so that the 12.5% for video is more like noise in the total bandwidth budget?

Also, true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.

Seriously? With the codec quality settings they're using and the dirt deposited on the camera on the way down, you're worried about sensor noise?

Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?
Because the way up is quite well documented. The way down is not.

Oh, I'm sure they have it quite well documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's the gold; anything else is just gravy.

It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll have eventually.

Offline rst

  • Full Member
  • ***
  • Posts: 347
  • Liked: 127
  • Likes Given: 0
Just a note:  for those who may not have noticed, there's a parallel discussion of the video and what may or may not be visible on particular frames on the CRS-3/SpX-3 discussion thread in the missions section:

http://forum.nasaspaceflight.com/index.php?topic=31513.1650

Offline hrissan

  • Full Member
  • ****
  • Posts: 411
  • Novosibirsk, Russia
  • Liked: 325
  • Likes Given: 2432
12.5% is actually a lot. When receiving such an error-prone signal, it makes a real difference. It also reduces the required bitrate (for the same frame rate), leaving more transmit energy per bit and therefore a better S/N ratio.

Maybe. *If* that's the only telemetry sent. Who's to say the video wasn't multiplexed along with all the other high-rate vehicle telemetry, so that the 12.5% for video is more like noise in the total bandwidth budget?

Also, true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.

Seriously? With the codec quality settings they're using and the dirt deposited on the camera on the way down, you're worried about sensor noise?

Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?
Because the way up is quite well documented. The way down is not.

Oh, I'm sure they have it quite well documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's the gold; anything else is just gravy.

It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll have eventually.
I worked on a sort of "telemetry" system where we divided the data into what's important (10%, in that case sensor data) and what's less important (90%, in that case the video feed). Both were CRC-checked, but the first got retransmitted if corrupted or lost, while the second was not. Sort of like TCP and UDP.

On a worsening channel, the retransmissions of sensor data occupied more and more of the bandwidth, until no bits were left for video.

I guess SpaceX does the same, so if we see SOME video, it means ALL sensor data was received without gaps.
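
A toy model of that scheme (my own sketch from memory, not actual flight or work code):

Code:
import random
random.seed(1)
CAPACITY = 100   # packets per reporting interval the channel can carry

def send(loss):
    # sensor packets are retransmitted until they get through,
    # video packets are fire-and-forget with whatever budget remains
    pending_sensor, sent_video, budget = 10, 0, CAPACITY
    while pending_sensor and budget:
        budget -= 1
        if random.random() > loss:   # got through and was acknowledged
            pending_sensor -= 1
    while budget:
        budget -= 1
        if random.random() > loss:
            sent_video += 1
    return pending_sensor, sent_video

for loss in (0.0, 0.5, 0.9, 0.99):
    gaps, video = send(loss)
    print("loss %2.0f%%: sensor gaps=%d, video packets through=%d"
          % (loss * 100, gaps, video))

With the priorities set up like this, receiving any video at all implies the sensor backlog was fully drained first.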

Offline IRobot

  • Full Member
  • ****
  • Posts: 1312
  • Portugal & Germany
  • Liked: 310
  • Likes Given: 272
Oh, I'm sure they have it quite well documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's the gold; anything else is just gravy.
If you want to go that way, we don't need a video. And yet we have one.

It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll have eventually.
What trouble? Switch the camera for a mono version? Or reconfigure the camera to transmit in grayscale? You make it sound like mono cameras are troublesome, but in fact they are EXACTLY the same! Manufacturers usually supply the same camera in mono and color versions.
