Great thread. If one of you guys can help, it will be worth your time and effort. This is an Elon call to arms.
The first thing I would do to improve video quality on future flights would be to encode a monochrome image. Color is useless for this type of image.
[–]spacexdevtty 8 points 2 hours ago

Hey all, SpaceX team here. Just wanted to answer some frequently asked questions we've been getting:

Q: Why is the video so bad?
A: This was recorded over a very lossy RF link.

Q: Why release the video?
A: We've had some success here (and with a little outside help as well, see http://aeroquartet.com/ ) manually stitching the video back together, but it's time-consuming work. We don't expect the video to be 100% recoverable, but we're hoping folks out there with more MPEG expertise than we have can provide assistance recovering more of the video.

Q: What codec? Settings?
A: MPEG-4 Part 2, P/I 15, 15 fps, NTSC, fixed bitrate

Q: How did you create repair1.ts?
A: This was a joint effort between us and an outside consultant. First we took a pass on the data to align every MPEG packet on a 188-byte boundary and set the packet start byte to 0x47. Then we identified blocks within keyframes that contain bit errors, and manually flipped bits in those corrupt blocks to see if that recovers more of the image.

Q: What is a TS file?
A: http://lmgtfy.com/?q=MPEG+TS

Feel free to post more questions here, we'll try and respond. We really appreciate all the help and great ideas! Thank you!!
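The realignment pass described above can be sketched in a few lines: probe each of the 188 possible offsets for recurring 0x47 sync bytes, then force the sync byte on every packet boundary. This is a minimal illustration of the idea, not SpaceX's actual tooling; the function names are my own.

```python
# Sketch of the realignment pass described above: find the offset where
# 0x47 sync bytes recur every 188 bytes, then force the sync byte on each
# packet boundary. Function names are hypothetical.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def find_alignment(data, probe_packets=20):
    """Return the offset (0..187) where sync bytes most often recur."""
    best_offset, best_hits = 0, -1
    for offset in range(TS_PACKET_SIZE):
        hits = sum(
            1
            for i in range(probe_packets)
            if offset + i * TS_PACKET_SIZE < len(data)
            and data[offset + i * TS_PACKET_SIZE] == SYNC_BYTE
        )
        if hits > best_hits:
            best_offset, best_hits = offset, hits
    return best_offset

def force_sync(data):
    """Trim to the best alignment and overwrite each packet's first byte with 0x47."""
    start = find_alignment(data)
    out = bytearray(data[start:])
    for pos in range(0, len(out) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        out[pos] = SYNC_BYTE
    return bytes(out)
```

Note this only restores framing; corrupted payload bytes inside each packet still need the manual bit-flipping work described in the post.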
Quote from: IRobot on 04/30/2014 10:37 pm
The first thing I would do to improve video quality on future flights would be to encode a monochrome image. Color is useless for this type of image.

MPEG already operates on luminance/chrominance components, and most of the allocated bits go to the luminance (i.e. monochrome) component. I'm not sure what you're suggesting; it wouldn't do much for improving video quality, and color actually helps a lot when you have objects with similar brightness in the scene.
Quote
First we took a pass on the data to align every MPEG packet on a 188 byte boundary and set the packet start byte to 0x47.

I'm not seeing that. For instance, location 0x26EC is divisible by 188 (0xBC), but its value is 0x4F, not 0x47.
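Spot checks like the one above can be automated: tally the byte value found at every 188-byte boundary of the capture, so off-sync packets like the 0x4F at 0x26EC stand out. A minimal sketch (my own helper names, not part of any released tooling):

```python
# Tally the byte found at each 188-byte boundary so off-sync packets
# (e.g. a 0x4F where a 0x47 sync byte should be) stand out.
from collections import Counter

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def sync_histogram(data):
    """Count the byte values seen at every 188-byte packet boundary."""
    return Counter(data[pos] for pos in range(0, len(data), TS_PACKET_SIZE))

def bad_boundaries(data):
    """Offsets of packets whose first byte is not the 0x47 sync byte."""
    return [pos for pos in range(0, len(data), TS_PACKET_SIZE)
            if data[pos] != SYNC_BYTE]
```

Running `bad_boundaries` over the released file would list every offset, like 0x26EC, where the sync byte is still wrong after the repair pass.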
I think we would all be happy with objects with similar brightness, if we could see them. I wouldn't mind if the whole rocket looked the same brightness as long as I could see the flame, legs, and ocean. So color is useless here.
Color video, via chroma subsampling, also reduces apparent resolution. An example is NTSC, where color is spread across 2 horizontal luminance pixels. You end up with ½ the horizontal resolution and full vertical resolution.
Quote from: docmordrid on 05/01/2014 08:13 am
Color video, via chroma subsampling, also reduces apparent resolution. An example is NTSC where colors are spread across 2 horizontal luminance pixels. You end up with ½ the horizontal resolution and full vertical resolution.

Chroma subsampling is precisely why I said luminance takes the most bits to encode: the 2 chroma channels are reduced in resolution by a factor of 4 or more, while luminance is not subsampled unless you tell the encoder to do that for some reason.

The upshot of all this is that having color as additional information in the stream really doesn't have as big an impact as some would like to believe. It's not like it's 3 full red/green/blue frames encoded at 3x the bandwidth of a single monochrome image.
I still think they should put a hundred memory cards inside small, sealed, brightly painted, empty plastic boxes (only a data cable coming out), and stick them with strong but water-soluble adhesive to the inner wall of the first-stage/second-stage interface. Then offer a monetary reward to the local Florida fishing community for whoever returns the most boxes.
How about a monochrome image through a red filter? Reduced data (although not a 3x reduction, as you point out), without reducing the ability to discern boundaries.
Quote from: AJA on 05/01/2014 12:51 pm
How about a monochrome image through a red filter? Reduced data (although not a 3x reduction, as you point out), without reducing the ability to discern boundaries.

But why bother? Assuming 4x4 subsampled chrominance channels, color video only has 12.5% more bits to encode than the same video where only luminance is encoded. Seeing as how bad the transmission losses were, that wouldn't have made any difference whatsoever.
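The 12.5% figure follows directly from sample-count arithmetic: two chroma planes, each at 1/16 the luma sample count under the 4x4 subsampling assumed above, add 2/16 of the samples. A quick check (the helper function is just for illustration):

```python
# Sample-count arithmetic behind the 12.5% figure, under the 4x4 chroma
# subsampling the post assumes (each chroma plane is 1/4 the width and
# 1/4 the height of the luma plane).
def chroma_overhead(h_factor, v_factor, planes=2):
    """Extra samples contributed by the chroma planes, relative to luma alone."""
    return planes / (h_factor * v_factor)

print(chroma_overhead(4, 4))  # 2/16 -> 0.125, i.e. 12.5% more samples
print(chroma_overhead(2, 2))  # common 4:2:0 subsampling -> 0.5, i.e. 50%
```

Note the figure depends on the subsampling factor: with the more common 4:2:0 (2x2) scheme the chroma overhead would be 50% of the samples, not 12.5%, though the actual encoded-bit share is typically lower still because chroma compresses more readily.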
Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?
12.5% is actually a lot. When receiving such an error-prone signal, it makes a big difference. Dropping color also reduces the bandwidth needed for the same frame rate, which means more transmit power per bit and therefore a better S/N ratio.
Also, true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.
Quote from: ugordan on 05/01/2014 01:42 pm
Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?

Because the way up is quite well documented. The way down is not.
Quote from: IRobot on 05/01/2014 02:18 pm
12.5% is actually a lot. When receiving such error-prone signal, it makes a lot of difference. It also reduces transmission power requirements (for the same frame rate), therefore more power available for transmission, therefore better S/N ratio.

Maybe. *If* that's the only telemetry sent. Who's to say the video wasn't multiplexed along with all the other high-rate vehicle telemetry, so the 12.5% for video is noise in the total bandwidth budget?

Quote from: IRobot on 05/01/2014 02:18 pm
Also true monochrome cameras are up to 3x more sensitive, meaning less (camera) noise to start with.

Seriously? With the codec quality settings they're using and the dirt deposited on the camera on the way down, you're worried about sensor noise?

Quote from: IRobot on 05/01/2014 02:18 pm
Quote from: ugordan on 05/01/2014 01:42 pm
Again, why go around making up solutions that would only be relevant for a couple of seconds before splashdown?

Because the way up is quite well documented. The way down is not.

Oh, I'm sure they have it quite well documented. Just not in a format your typical rocket pr0n enthusiast likes. It's in the form of vehicle telemetry. That's gold, anything else is just gravy. It still doesn't change my argument that any such solutions are just too much trouble for the amount of use they'll eventually get.