Oh I do like the idea behind this: https://www.youtube.com/channel/UCyZDgyJBYz3OXD3JbDJzNww That is a great one-stop page to see the progress over time.
Revelation about the p frames in part 10 (the ones with the big gray areas that shouldn't be there): -3 completely ignores a block. It also clears chroma and luma, like -2, so subsequent blocks don't inherit it. This isn't terribly noticeable in most p frames because there aren't a lot of non-movement blocks. Except in part 10, where the camera is being overexposed by that big blast of steam and the rest of the frame gets dark, changing the whole frame on a regular basis. We stopped using -1 in p frames because it made gray blocks. It only does that when the blocks being blanked with -1 have nothing to inherit from, i.e. movement blocks. Test frame 185 -- take a look at it in the editor and pick it apart:

X:19164:80,5:7:-1,23:7:-3,24:7:-1,27:7:-3,29:7:-1,3:8:21583:5:0:0:0:0:0,5:8:21693:0:-5:0:0:0:0,6:8:21763:-10:0:0:0:0:0,8:8:21872:0:5:0:0:0:0,17:11:-3,35:11:37109,28:12:39566,35:12:40350,8:13:-1:0:0:-10:0:0:0,9:13:-1,16:13:-3,28:13:43161,32:13:43824,15:14:45251:20:0:0:0:0:0,6:16:52992,0:19:-3,39:19:70393,38:20:-3,42:20:75770,38:22:-3,17:23:84384,10:28:-3

compared to

X:19164:80,5:7:-3,3:8:21583,17:11:-3,35:11:37109,28:12:39566,35:12:40350,8:13:-3,28:13:43161,32:13:43824,0:19:-3,39:19:70393,38:20:-3,42:20:75770,38:22:-3,17:23:84384,10:28:-3

I need to sleep now.
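Since the -1/-2/-3 behavior above took some reverse-engineering, here is a minimal Python sketch of the semantics as I understand them from the post: -3 skips a block and clears the inheritable state, while -1 blanks a block and only looks right when there is previous block data to inherit (otherwise you get the gray-block artifact). All names and the data structure here are illustrative, not the actual decoder.

```python
# Hypothetical model of MMB block commands; "GRAY" stands for a block
# with nothing to display and nothing for the next block to inherit.
GRAY = {"luma": None, "chroma": None}

def apply_command(cmd, inherited):
    """Return (rendered_block, state_passed_to_next_block)."""
    if cmd == -3:
        # ignore the block entirely AND clear chroma/luma (like -2),
        # so subsequent blocks don't inherit it
        return None, dict(GRAY)
    if cmd == -2:
        # clear the inheritable chroma/luma state (sketch; exact
        # rendering of -2 itself is an assumption)
        return None, dict(GRAY)
    if cmd == -1:
        # blank the block: gray if there is nothing to inherit from
        if inherited["luma"] is None:
            return dict(GRAY), inherited   # the gray-block artifact
        return dict(inherited), inherited  # inherits cleanly
    # a positive value stands in for real block data here
    block = {"luma": cmd, "chroma": 0}
    return block, block

# -1 right after -3 has nothing to inherit -> gray; -1 after real data is fine
state = dict(GRAY)
out = []
for c in [-3, -1, 100, -1]:
    rendered, state = apply_command(c, state)
    out.append(rendered)
```

This reproduces the observation in the post: the gray blocks only appear when a -1 lands where the preceding state was cleared.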
Quote from: Chris Bergin on 06/03/2014 03:50 pm
Oh I do like the idea behind this: https://www.youtube.com/channel/UCyZDgyJBYz3OXD3JbDJzNww That is a great one-stop page to see the progress over time.

I agree it is awesome. Of course I will beg for more :-) ... It would be sweet to have a slow-motion (3 fps?) copy running immediately after the 15 fps video.
Quote from: Req on 06/03/2014 06:12 pm
I'll host any files this project needs, or even any daemon, as long as I don't have to jump through flaming hoops to get it running on a CentOS 5 system (or systems). Primary circuit is a 90th percentile 10G line that usually sees less than 75Mbps of its 1Gbps monthly commit, so LOTS of free bandwidth (something like 250TB/mo) is sitting around right now. Hit me with a PM anytime.

Thanks Req, I've sent you a PM.

All the spreadsheets should now be pointing to my image proxy script. It seems to be running all right, but I notice some strange caching issues that could be related to the script timing out (my current host doesn't let me adjust the timeout). Changes to MMBs may not show up in the images immediately -- give it a little time, refresh your browser, etc. After I finish cleaning the script up, I'll upload it somewhere. Maybe the git for SpaceXVideoApp2 is the best place for it?
I'll host any files this project needs, or even any daemon, as long as I don't have to jump through flaming hoops to get it running on a CentOS 5 system (or systems). Primary circuit is a 90th percentile 10G line that usually sees less than 75Mbps of its 1Gbps monthly commit, so LOTS of free bandwidth (something like 250TB/mo) is sitting around right now. Hit me with a PM anytime.
Keep in mind, there are THOUSANDS of bits in each frame, so a 'long' MMB (in this case 129 commands) is still a very SMALL portion of the data contained therein.
Quote from: mhenderson on 06/03/2014 04:48 pm
Quote from: Chris Bergin on 06/03/2014 03:50 pm
Oh I do like the idea behind this: https://www.youtube.com/channel/UCyZDgyJBYz3OXD3JbDJzNww That is a great one-stop page to see the progress over time.

I agree it is awesome. Of course I will beg for more :-) ... It would be sweet to have a slow-motion (3 fps?) copy running immediately after the 15 fps video.

It doesn't have all the latest corrections (parts 3, 4, 5, 13, 14 still come from the previous version), but here it is at nominal framerate and at 3 frames per second. It's getting really nice!

Edit: it was the wrong quote... changed to the right one.
Hi,

Nice videos, SwissCheese! I would love to see the latest corrections in them.

As for the automated videos: they look nice, but a video compiled/released by hand (usually by SwissCheese) tends to look quite a lot better. Looking at it a little more closely, I think there are still a few issues:

1) It doesn't remove frames (as in 0:0:-3) that have not been touched yet. For example frames 42-60: they are still unrepaired and should not be put into the video yet.
2) It doesn't seem to make a distinction between "no changes needed" and "let's forget this one". I think.
3) In the video from 19:35:08 through 19:35:10 there seems to be no movement, even though frames 121-140 have been repaired.
4) We have no feedback when we haven't filled in an MMB correctly on the wiki (for example, added a comment that effectively disables the frame). This would be very useful, since you could then debug the problem.
5) Is the automated video still based on the wiki, or on the spreadsheet now?

This list might not be complete, but maybe somebody can take a look at it.

Regards,

arnezami

PS. I think it would be a good idea to add a "comments" column to the wiki. The mmb column would strictly be used for making the (automated) video (and might contain 0:0:-3), while the comments column might contain comments and/or MMBs that aren't good enough yet to put into the video.
Quote from: Chris Bergin on 06/03/2014 03:50 pm
Oh I do like the idea behind this: https://www.youtube.com/channel/UCyZDgyJBYz3OXD3JbDJzNww That is a great one-stop page to see the progress over time.

One thing I've noticed is that the channel is putting out 'updated' videos at a really high rate... it tends to over-saturate the 'cool' effect of watching it develop. So while a few builds a day is cool up front, perhaps it is worthwhile to delete or archive all but one video per day every couple of days. That way someone who goes there can truly appreciate the evolution of the video from initial to final form without being inundated by small deltas.
Quote from: Req on 06/03/2014 06:12 pm
Quote from: JohnKiel on 06/03/2014 03:04 pm
Quote from: Quialiss on 06/03/2014 02:14 pm
Quote from: JohnKiel on 06/03/2014 12:11 pm
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.

A bit ironic considering the editor had to be modified to deal with the long MMBs. It still works for all the individual p frames though, which is great!

I'm tinkering with a work-around: basically a proxy that allows a much shorter URL to request the image. Currently sorting out how to cache and respond with 304s to avoid quickly using up the paltry 10GB/month I get on my chosen host. (If anyone with more bandwidth is willing to host the simple PHP image proxy script once I'm done, let me know.)

I'll host any files this project needs, or even any daemon, as long as I don't have to jump through flaming hoops to get it running on a CentOS 5 system (or systems). Primary circuit is a 90th percentile 10G line that usually sees less than 75Mbps of its 1Gbps monthly commit, so LOTS of free bandwidth (something like 250TB/mo) is sitting around right now. Hit me with a PM anytime.

Thanks Req, I've sent you a PM.

All the spreadsheets should now be pointing to my image proxy script. It seems to be running all right, but I notice some strange caching issues that could be related to the script timing out (my current host doesn't let me adjust the timeout). Changes to MMBs may not show up in the images immediately -- give it a little time, refresh your browser, etc. After I finish cleaning the script up, I'll upload it somewhere. Maybe the git for SpaceXVideoApp2 is the best place for it?
Quote from: JohnKiel on 06/03/2014 03:04 pm
Quote from: Quialiss on 06/03/2014 02:14 pm
Quote from: JohnKiel on 06/03/2014 12:11 pm
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.

A bit ironic considering the editor had to be modified to deal with the long MMBs. It still works for all the individual p frames though, which is great!

I'm tinkering with a work-around: basically a proxy that allows a much shorter URL to request the image. Currently sorting out how to cache and respond with 304s to avoid quickly using up the paltry 10GB/month I get on my chosen host. (If anyone with more bandwidth is willing to host the simple PHP image proxy script once I'm done, let me know.)

I'll host any files this project needs, or even any daemon, as long as I don't have to jump through flaming hoops to get it running on a CentOS 5 system (or systems). Primary circuit is a 90th percentile 10G line that usually sees less than 75Mbps of its 1Gbps monthly commit, so LOTS of free bandwidth (something like 250TB/mo) is sitting around right now. Hit me with a PM anytime.
Quote from: Quialiss on 06/03/2014 02:14 pm
Quote from: JohnKiel on 06/03/2014 12:11 pm
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.

A bit ironic considering the editor had to be modified to deal with the long MMBs. It still works for all the individual p frames though, which is great!

I'm tinkering with a work-around: basically a proxy that allows a much shorter URL to request the image. Currently sorting out how to cache and respond with 304s to avoid quickly using up the paltry 10GB/month I get on my chosen host. (If anyone with more bandwidth is willing to host the simple PHP image proxy script once I'm done, let me know.)
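For what it's worth, the cache-and-304 idea amounts to standard HTTP conditional GET: hash the rendered image into an ETag, and when the client sends the same ETag back in If-None-Match, answer 304 with no body. A sketch in Python (the real script is PHP; `proxy_response` and its tuple shape are made-up illustration, not the actual script):

```python
import hashlib
from typing import Optional

def proxy_response(image_bytes: bytes, if_none_match: Optional[str]):
    """Return (status, headers, body) for one proxied image request.

    The ETag is derived from the image content itself, so a changed MMB
    produces a changed ETag and forces a fresh 200 response.
    """
    etag = '"' + hashlib.sha1(image_bytes).hexdigest() + '"'
    if if_none_match == etag:
        # client already has this exact image: no body, bandwidth saved
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag, "Content-Type": "image/png"}, image_bytes

# first request: full response; second request with the ETag: 304
status, headers, body = proxy_response(b"fake-png-bytes", None)
status2, _, body2 = proxy_response(b"fake-png-bytes", headers["ETag"])
```

Whether 304s actually help here depends on Google's fetch proxy honoring conditional requests, which is an open question given the caching behavior discussed later in the thread.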
Quote from: JohnKiel on 06/03/2014 12:11 pm
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.

A bit ironic considering the editor had to be modified to deal with the long MMBs. It still works for all the individual p frames though, which is great!
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.
Quote from: JohnKiel on 06/03/2014 08:49 pm
Quote from: Req on 06/03/2014 06:12 pm
Quote from: JohnKiel on 06/03/2014 03:04 pm
Quote from: Quialiss on 06/03/2014 02:14 pm
Quote from: JohnKiel on 06/03/2014 12:11 pm
Unfortunately, it looks like the MMBs can create an image request URL that's larger than the ~2KB that Google Sheets' IMAGE function can manage.

A bit ironic considering the editor had to be modified to deal with the long MMBs. It still works for all the individual p frames though, which is great!

I'm tinkering with a work-around: basically a proxy that allows a much shorter URL to request the image. Currently sorting out how to cache and respond with 304s to avoid quickly using up the paltry 10GB/month I get on my chosen host. (If anyone with more bandwidth is willing to host the simple PHP image proxy script once I'm done, let me know.)

I'll host any files this project needs, or even any daemon, as long as I don't have to jump through flaming hoops to get it running on a CentOS 5 system (or systems). Primary circuit is a 90th percentile 10G line that usually sees less than 75Mbps of its 1Gbps monthly commit, so LOTS of free bandwidth (something like 250TB/mo) is sitting around right now. Hit me with a PM anytime.

Thanks Req, I've sent you a PM.

All the spreadsheets should now be pointing to my image proxy script. It seems to be running all right, but I notice some strange caching issues that could be related to the script timing out (my current host doesn't let me adjust the timeout). Changes to MMBs may not show up in the images immediately -- give it a little time, refresh your browser, etc. After I finish cleaning the script up, I'll upload it somewhere. Maybe the git for SpaceXVideoApp2 is the best place for it?

Google Spreadsheets does its best to limit the Slashdot DOS effect by extensively caching data. For instance, the spreadsheet is published at regular intervals, but those intervals can be up to 5 minutes; i.e. the JSON feed from the spreadsheet is often late, so you cannot immediately get the changes you make in the spreadsheet via the JSON feed.

Another "fix" Google employs is caching any external data: when you pull an image with the IMAGE function, the request most probably passes through a proxy, and unless the URL parameter to the IMAGE function changes every time, you'll be getting a cached image that Google fetched some time ago. This lets Google limit its DOS effect on resources linked in the spreadsheet (i.e. it doesn't matter how many visitors the spreadsheet gets).

Hope this explains the delays in image update frequency when images are proxied via an external system. The cumulative delay here can be up to 10 minutes, which is not too bad, considering that the image and all subsequent frames are updated automatically without any extra manual work (frame generation, upload to the wiki, updating the wiki page referencing it, etc.).
Google Spreadsheets does its best to limit the Slashdot DOS effect by extensively caching data. For instance, the spreadsheet is published at regular intervals, but those intervals can be up to 5 minutes; i.e. the JSON feed from the spreadsheet is often late, so you cannot immediately get the changes you make in the spreadsheet via the JSON feed.

Another "fix" Google employs is caching any external data: when you pull an image with the IMAGE function, the request most probably passes through a proxy, and unless the URL parameter to the IMAGE function changes every time, you'll be getting a cached image that Google fetched some time ago. This lets Google limit its DOS effect on resources linked in the spreadsheet (i.e. it doesn't matter how many visitors the spreadsheet gets).

Hope this explains the delays in image update frequency when images are proxied via an external system. The cumulative delay here can be up to 10 minutes, which is not too bad, considering that the image and all subsequent frames are updated automatically without any extra manual work (frame generation, upload to the wiki, updating the wiki page referencing it, etc.).
A solution to try to bypass this (if it's really needed) is to automatically add a ?unique-id-here parameter to the URL. I use this myself for some L2 websites I run, to make sure the user doesn't get an outdated cached page when I update something.

Just my 2c.
Quote from: Jester on 06/04/2014 01:09 pm
A solution to try to bypass this (if it's really needed) is to automatically add a ?unique-id-here parameter to the URL. I use this myself for some L2 websites I run, to make sure the user doesn't get an outdated cached page when I update something. Just my 2c.

Yes, this would force an image refresh for every sheet load, and could be accomplished by appending NOW() to the URL, but I'd like to avoid it if possible.
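The NOW() trick boils down to appending a throwaway query parameter so the cached URL never matches. A sketch of the same idea in Python (`bust_cache` is an illustrative name; in the sheet itself you'd concatenate NOW() into the IMAGE formula's URL instead):

```python
import time
import urllib.parse

def bust_cache(url: str) -> str:
    """Append a unique 'v' parameter so a caching proxy can't serve a
    stale copy. Uses the current timestamp, like NOW() in the sheet."""
    sep = "&" if urllib.parse.urlparse(url).query else "?"
    return url + sep + "v=" + str(int(time.time()))

u1 = bust_cache("http://example.com/img")
u2 = bust_cache("http://example.com/img?mmb=5:7:-3")
```

The downside, as noted above, is that every sheet load then bypasses the cache entirely, which is exactly the bandwidth cost the proxy was meant to avoid.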
You could add a hash of the MMB to the URL.
Quote from: wronkiew on 06/04/2014 02:51 pm
You could add a hash of the MMB to the URL.

The problem is that the MMBs used to create the hash in the URL may not match the MMBs the proxy script (spxi) sees when it pulls JSON for the sheet. If a user changes an MMB, the sheet will request a new image immediately, but the sheet doesn't immediately save the changed MMB to Google's servers, so spxi will return images for the wrong MMB.

I suppose spxi could calculate a hash the same way as the spreadsheet (there are no simple native functions in Google Sheets for hashing, so we'd have to get creative), and if the hashes don't match, loop for a bit, pulling fresh JSON for the sheet and trying its best to get updated data in the allotted amount of time. But there's no guarantee it would ever return the correct image; it could be some time before spxi actually sees the changed MMB.

That said, it's probably better than nothing.
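The hash-then-poll idea described above can be sketched like this. Everything here is hypothetical: `serve_image`, `fetch_mmb`, and `render` are placeholders for spxi's real feed-pull and frame-render steps, and the choice of MD5 is arbitrary (the sheet and spxi just need to compute the identical hash).

```python
import hashlib
import time

def mmb_hash(mmb: str) -> str:
    # must match whatever hash the spreadsheet formula computes
    return hashlib.md5(mmb.encode()).hexdigest()[:8]

def serve_image(url_hash, fetch_mmb, render, retries=5, delay=2.0):
    """Poll the sheet's JSON until the MMB we see hashes to what the
    URL asked for, then render it; give up after a few retries and
    serve the (possibly stale) MMB we have."""
    mmb = fetch_mmb()
    for _ in range(retries):
        if mmb_hash(mmb) == url_hash:
            return render(mmb)       # feed caught up with the sheet
        time.sleep(delay)            # sheet save / JSON feed lag
        mmb = fetch_mmb()
    return render(mmb)               # best effort: stale image

# Simulated feed that returns a stale MMB once before catching up:
seen = iter(["old-mmb", "new-mmb", "new-mmb"])
img = serve_image(mmb_hash("new-mmb"), lambda: next(seen),
                  lambda m: "img(" + m + ")", retries=3, delay=0)
```

This captures both halves of the trade-off in the post: it usually converges once the JSON feed catches up, but nothing guarantees it does so within the allotted retries.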