We currently have several different space situational awareness (SSA) datasets, of varying quality. There are public datasets, like those provided by USSPACECOM and ESA's SSA Programme, along with private entities like LeoLabs. The public systems are generally accessible, but the curation leaves something to be desired if you want to operate your constellation based on their data. The private guys make money off of subscriptions (LeoLabs seems to be charging about $2500/month/satellite) and provide better-curated data, with better precision and quality.
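A quick pointer on where the public data actually lives: Space-Track (the USSPACECOM catalog) requires a free login, but Celestrak redistributes the GP element sets openly. A minimal fetch sketch using Celestrak's documented gp.php endpoint; the group name is just one example:

```python
# Fetch the "active" group of element sets from Celestrak as classic TLEs.
import requests

URL = "https://celestrak.org/NORAD/elements/gp.php?GROUP=active&FORMAT=tle"
lines = requests.get(URL, timeout=30).text.splitlines()

# TLE format: one name line followed by two element lines per object.
objects = [(lines[i].strip(), lines[i + 1], lines[i + 2])
           for i in range(0, len(lines) - 2, 3)]
print(f"{len(objects)} objects in the 'active' group")
```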
Database: It contains all the known objects, their assigned IDs, their owners (the curator would own objects that were rogue, unclaimed, of unknown origin, or launched under cover of some kind of military secrecy), and their current curated state vectors. If I had my druthers, this information would be public and free, but that kinda depends on how efficient the market is.
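For concreteness, one record in such a database might look like the sketch below; the field names and types are my guesses, not a proposed standard:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CatalogEntry:
    object_id: int                  # ID assigned by the curator
    intl_designator: str            # COSPAR-style, e.g. "1998-067A"
    owner: str                      # operator, or the curator for rogue/unknown objects
    epoch: datetime                 # when the curated state vector is valid (UTC)
    position_km: tuple[float, float, float]    # inertial-frame position
    velocity_km_s: tuple[float, float, float]  # inertial-frame velocity
```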
Interesting thought, and I'd like to know more. Since you've obviously put some thought into this, and the quality of the data would determine most of the success of such an effort, can you elaborate a bit on what is currently available? To start with, related to the two quotes about one aspect, data:

What is available? You mention "There are public datasets". That is a great start, but could you share pointers to the data? Also, what insights do you have on the quality and content of the public data? What are those datasets doing right or wrong? Are they 50%, 90%, or 99% complete for objects since Sputnik? Are the orbits and state vectors stale or current? In other words, orbits in the sense of "I measured the parameters once last year and assume they will be unchanged", or "based on a 10-year track of object X with N measurements we can make predictions a year ahead"? Is the current database just N objects with M orbital parameters, or is there an orbital model with drag etc. involved?
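For reference on the drag part of that question: public TLEs do carry a B* drag term and are meant to be propagated with SGP4, so there is a (simplified) force model involved. A minimal sketch using the sgp4 package; the element lines below are illustrative placeholders, not a real object:

```python
from sgp4.api import Satrec, jday

line1 = "1 25544U 98067A   24001.50000000  .00016717  00000+0  10270-3 0  9991"
line2 = "2 25544  51.6400 208.9163 0006317  69.9862  25.2906 15.49560532    15"

sat = Satrec.twoline2rv(line1, line2)
jd, fr = jday(2024, 1, 2, 12, 0, 0)   # target epoch (UTC)
err, r, v = sat.sgp4(jd, fr)          # TEME position (km), velocity (km/s)
if err == 0:
    print("position (km):", r)
    print("velocity (km/s):", v)
```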
What quality would be necessary? Based on the perceived use of the database, how complete and accurate does the data have to be? As an example, to predict the collision probability of two satellites a week, a month, or a year in advance, you need a certain quality of the orbits. How is that quality measured, and is it achievable from public data?
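To make "a certain quality of the orbits" concrete, here is a brute-force screening sketch that scans two propagated objects for their closest approach. The function name, step size, and window are illustrative; real screening uses smarter filters and refinement:

```python
import math
from sgp4.api import Satrec

def min_miss_distance(sat_a: Satrec, sat_b: Satrec, jd0, fr0,
                      hours=24, step_s=10):
    """Smallest separation (km) over the window, by naive time sampling."""
    best = float("inf")
    for i in range(int(hours * 3600 / step_s)):
        fr = fr0 + i * step_s / 86400.0
        ea, ra, _ = sat_a.sgp4(jd0, fr)
        eb, rb, _ = sat_b.sgp4(jd0, fr)
        if ea or eb:
            continue  # propagation error; skip this sample
        best = min(best, math.dist(ra, rb))
    return best
```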
Does anyone know how to quantify the quality difference between the private operators and the public data? Could the public data ever reach a quality that rivals the private operators, or does that take in-house, private measurements which the public will never see?
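One cheap way to put a number on the public-data side of that question, without any private measurements: take an older public element set, propagate it to the epoch of a fresher one for the same object, and measure the disagreement. A hypothetical helper, not a standard metric:

```python
import math
from sgp4.api import Satrec

def staleness_error_km(old_sat: Satrec, new_sat: Satrec) -> float:
    """Propagate both element sets to the newer epoch; compare positions."""
    jd, fr = new_sat.jdsatepoch, new_sat.jdsatepochF
    e1, r_old, _ = old_sat.sgp4(jd, fr)
    e2, r_new, _ = new_sat.sgp4(jd, fr)
    if e1 or e2:
        raise RuntimeError("propagation failed")
    return math.dist(r_old, r_new)
```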
Maneuverable vs static objects?
Starlink and some military satellites can move around, and without continuous data acquisition a public dataset would always be "stale".
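A crude flag for the maneuver problem, assuming you have consecutive public element sets for the same object: a jump in mean motion or inclination beyond natural drift suggests a burn. The threshold values here are illustrative guesses, not validated numbers:

```python
from sgp4.api import Satrec

def looks_like_maneuver(prev_sat: Satrec, next_sat: Satrec,
                        dn_thresh=1e-4, di_thresh=1e-4) -> bool:
    """Compare consecutive element sets; large jumps hint at a maneuver."""
    dn = abs(next_sat.no_kozai - prev_sat.no_kozai)  # mean motion (rad/min)
    di = abs(next_sat.inclo - prev_sat.inclo)        # inclination (rad)
    return dn > dn_thresh or di > di_thresh
```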
Finally, I wondered about parallels in other industries, such as financial data, where private providers (Bloomberg, Reuters) make money while the public data I've tried was never much use for predictions. Even the private data needed curation at times. And in the AI/ML market, academic datasets are available, but the giants have datasets several orders of magnitude larger, which makes it hard to rival them.
And I forgot: what use cases do we know for such a dataset? Setting aside military applications, as they have their own, I see an astronomical use case, since such data could be used to predict and remove satellite tracks from images and signals. Maybe disaster relief: "which imaging satellite is closest for an image right now?" Or weather-related: "I need an N-hour track of approaching storm images; provider X cannot give me that, but maybe providers Y plus Z together can do the trick?"
Definitely, the astronomy thing is a knock-on use case. I'd think that most Earth observation companies would have better ways of posting their most recent stuff than their state vectors. But if you're going to have a public facility, its first and foremost goal should be to prevent collisions. Anything above and beyond that is just gravy.
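For the astronomy use case, pass prediction against public elements is already straightforward. A minimal sketch with skyfield; the site coordinates and element lines are placeholders:

```python
from skyfield.api import EarthSatellite, load, wgs84

line1 = "1 25544U 98067A   24001.50000000  .00016717  00000+0  10270-3 0  9991"
line2 = "2 25544  51.6400 208.9163 0006317  69.9862  25.2906 15.49560532    15"

ts = load.timescale()
sat = EarthSatellite(line1, line2, "DEMO-SAT", ts)
site = wgs84.latlon(-30.2446, -70.7494)  # roughly Cerro Tololo

t0, t1 = ts.utc(2024, 1, 2), ts.utc(2024, 1, 3)
times, events = sat.find_events(site, t0, t1, altitude_degrees=30.0)
for t, e in zip(times, events):
    print(t.utc_strftime("%Y-%m-%d %H:%M:%S"), ("rise", "culminate", "set")[e])
```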
Another way to add value is to let go of pure data repositories like [3-8] and start adding code in the OSS spirit. There was one Python tool [13] that actually fetched data from [3] to calculate things, and [10-12] do something like that for visualization. You could combine perturbation models, or even other models and calculations, to add value. A newer C++ or Python code repo in arbitrary precision might be useful as well.
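As a toy illustration of the arbitrary-precision point: solving Kepler's equation M = E - e*sin(E) with mpmath at 50 digits. A real repo would wrap full perturbation models; this only shows the precision machinery:

```python
from mpmath import mp, mpf, sin, cos

mp.dps = 50  # work at 50 decimal digits

def solve_kepler(M, e, tol=mpf("1e-45")):
    """Newton iteration for the eccentric anomaly E."""
    E = mpf(M)
    while True:
        dE = (E - e * sin(E) - M) / (1 - e * cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

print(solve_kepler(mpf("1.0"), mpf("0.1")))
```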
The IAU wanting to spin up a public SSA for megaconstellations is sorta related... https://forum.nasaspaceflight.com/index.php?topic=48302.msg2301227#msg2301227
Quote from: TheRadicalModerate on 09/23/2021 07:16 pm
5) What sorts of pathologies might mess up the market?

The orbital data in the early period just after a launch is not well defined in the publicly available datasets. I've witnessed a lag in orbit data on the order of weeks before the objects actually appear; non-US objects are worse. Tasks like this are similar to staring at glaciers, except glaciers are more fun.
Quote from: TheRadicalModerate on 09/23/2021 07:16 pm
5) What sorts of pathologies might mess up the market?
Follow-up with optical telescopes is a numbers game: if you had a sufficient number of telescopes tasked to monitor the more hazardous 1-10 cm range of objects, you could pull it off, but you would need an enormous number of them.
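A back-of-envelope for "enormous number", with every parameter below an assumption for illustration, not a sourced figure:

```python
import math

fov_deg = 2.0             # assumed field of view per telescope (degrees)
sky_sq_deg = 41253        # total celestial sphere, square degrees
visible_fraction = 0.5    # roughly half the sky is above the horizon at a site
fov_area = math.pi * (fov_deg / 2) ** 2
per_site = visible_fraction * sky_sq_deg / fov_area
print(f"~{per_site:.0f} telescopes for a full-sky stare at one site")
```

With those assumed numbers it comes out to thousands of telescopes per site just to stare at the whole visible sky, before you even get to sensitivity and tracking rates for centimeter-class objects.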