Author Topic: Orbital Data Centers connecting directly to Starlink via laser  (Read 24231 times)

Offline ZachF

  • Full Member
  • ****
  • Posts: 1649
  • Immensely complex & high risk
  • NH, USA, Earth
  • Liked: 2679
  • Likes Given: 537
The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?
artist, so take opinions expressed above with a well-rendered grain of salt...
https://www.instagram.com/artzf/

Offline Mark K

  • Full Member
  • *
  • Posts: 139
  • Wisconsin
  • Liked: 79
  • Likes Given: 30
The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies.

Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).
« Last Edit: 01/27/2021 04:45 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline dlapine

  • Full Member
  • ***
  • Posts: 356
  • University of Illinois
  • Liked: 209
  • Likes Given: 326
The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies.

Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).

Figure that 1 rack of 80 standard servers and networking weighs in at about 2MT (~4400 lbs), takes 50 kW of power, and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT.

Next gen starlink could easily support that with one or more links.

Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread.

Offline vsatman

Quote
SpaceX says it plans to increase Starlink's download speeds from ~100 Mbps currently to 10 Gbps in the future:

Do I understand correctly that these 10 Gbps will come from satellites in the V band, 37.5-42.5 GHz?

Or optical.
Optical between the UT and a Starlink sat??
An optical channel has large losses in the atmosphere, and then you would have to use large-diameter optical receivers on the ground.

Offline rsdavis9

The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies.

Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).

Figure that 1 rack of 80 standard servers and networking weighs in at about 2MT (~4400 lbs), takes 50 kW of power, and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT.

Next gen starlink could easily support that with one or more links.

Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread.
Does it make sense to put these datacenters at a higher altitude?
1. Can connect to more satellites directly.
2. Easier cooling and better solar.
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
This thread is to shunt some off-topic discussion away from the Starlink thread.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Good question, and we can talk about it here: https://forum.nasaspaceflight.com/index.php?topic=52902.0
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline rsdavis9


The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies.

Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).

Figure that 1 rack of 80 standard servers and networking weighs in at about 2MT (~4400 lbs), takes 50 kW of power, and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT.

Next gen starlink could easily support that with one or more links.

Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread.
Does it make sense to put these datacenters at a higher altitude?
1. Can connect to more satellites directly.
2. Easier cooling and better solar.
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline RonM

  • Senior Member
  • *****
  • Posts: 3340
  • Atlanta, Georgia USA
  • Liked: 2233
  • Likes Given: 1584
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals.

What will really be interesting is when you’ll have orbital data centers communicating to Starlink directly with lasers.

Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?

As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit.
The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink, so it just makes the case even better for nice data centers on Earth, close to maintenance and utilities.

I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies.

Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).

Figure that 1 rack of 80 standard servers and networking weighs in at about 2MT (~4400 lbs), takes 50 kW of power, and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT.

Next gen starlink could easily support that with one or more links.

Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread.
Does it make sense to put these datacenters at a higher altitude?
1. Can connect to more satellites directly.
2. Easier cooling and better solar.

Yes, if latency isn’t essential, then putting the orbital data centers in MEO or beyond is probably a good idea. Less crowded and more energy and easier to dump heat. I could imagine Bitcoin mining could be done in orbit if the price is low enough. (Can keep the solar, thermal, and comms equipment but do servicing every few years to upgrade the silicon as ASICs are improved.... GSO or even the Lagrange Points should be no problem for Bitcoin latency. I think you could get electricity costs on orbit down to 1-2¢/kWh, plenty cheap for scamcurrency mining... And possibly for doing things like computer simulations or neural network training.)

Some things you’ll want as low of latency as possible. Those could go in LEO or even cohosted on Starlink satellites.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Coastal Ron

  • Senior Member
  • *****
  • Posts: 8971
  • I live... along the coast
  • Liked: 10336
  • Likes Given: 12060
I actually knew of someone that was working on orbital data centers about 5 years ago, but I just checked and they have changed their business completely away from that.

They were focused on securely storing information, and using the physical location barrier of space.
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) direct to Starlink and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.
« Last Edit: 01/27/2021 07:54 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
An interesting thing about satellites beyond LEO is you don’t need multiple hops any more to talk to each other. Just about any satellite in MEO or GSO or a Lagrange point or whatever can just communicate directly with any other (if both are equipped with appropriate transceivers and apertures), at least the vast majority of the time (you can still occasionally get eclipsed by the Earth or Moon or possibly have the Sun interfere). It’s less of a net and more of a direct connection any-to-any topology. I’m not sure what the consequences of that are, but it’s interesting to me. With lasers, then, no company can serve as a monopolizing middleman requiring payment for transit. Censoring also becomes less feasible (hello, Space Force!).

National barriers and other geographic constraints would be much less effective at controlling the flow of information by default.

In fact, this might be one possible advantage of optical ground-space comms... deniability and lack of regulation. A satellite blinking in Morse Code (doesn’t even need to be laser light, could be just reflected sunlight) can transmit information directly to people on the ground looking up. (And same for the other direction... people on the ground can use a bright flashlight to transmit Morse Code to any persistently observing satellite in view, especially at night. Or during the day with a large mirror.)
« Last Edit: 01/27/2021 08:15 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline RonM

  • Senior Member
  • *****
  • Posts: 3340
  • Atlanta, Georgia USA
  • Liked: 2233
  • Likes Given: 1584
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s, say, a second or less.
« Last Edit: 01/27/2021 08:16 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline rsdavis9

What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s a second or less.

Video streaming.
For a 20,000 km orbit the latency is 66 ms one way.
Could be a good place for the Mars laser interconnect.
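For what it's worth, a quick back-of-envelope check of that figure in Python, assuming straight-line propagation at the vacuum speed of light to a user directly below the satellite (a slant path would add a bit more):

```python
# Rough one-way latency to a satellite at a given altitude, assuming
# line-of-sight propagation at c and the user directly beneath it (best case).
C = 299_792_458  # speed of light, m/s

def one_way_latency_ms(altitude_km: float) -> float:
    return altitude_km * 1_000 / C * 1_000  # milliseconds

print(f"LEO    550 km: {one_way_latency_ms(550):6.1f} ms")     # ~1.8 ms
print(f"MEO  20000 km: {one_way_latency_ms(20_000):6.1f} ms")  # ~66.7 ms
print(f"GEO  35786 km: {one_way_latency_ms(35_786):6.1f} ms")  # ~119.4 ms
```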
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s a second or less.

Video streaming.
For a 20000km orbit the latency is 66ms one way.
Could be a good place for the mars laser interconnect.
Yeah, it is a good place to put video streaming caches, potentially. Video streams usually buffer much longer than that. And you could actually have a mix, like the first few seconds when you go to a new spot on a video stream are served by whatever storage is closest (giving the user a super smooth experience) while the buffer is filled by something with maybe 100ms of latency but is cheaper to operate. And as you hinted, there's no reason to put that exactly in GSO, which has a limited number of slots which can cost money. A bit higher or lower would be fine if you were planning on serving the whole globe anyway. But this way, you wouldn't need a full cache on every Starlink satellite. Just a few such servers in high orbit would be able to serve the whole world.

However, storage uses VERY little energy and can be extremely compact. Might want to just cohost that on a Starlink satellite or something.

Or you might have huge GSO satellites over different regions to provide local content, but they just use lasers to talk to Starlink and the Starlink network does the last-1000km delivery. Or, perhaps, the Starlink terminals (which are phased arrays and could talk to GSO or wherever they want) could just talk directly, but that potentially runs into the limited radio bandwidth problem (could be mitigated by using a non-stationary but still geosynchronous orbit with the satellites going far out of the geostationary line of satellites, giving you ability to do more spatial multiplexing).
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline RonM

  • Senior Member
  • *****
  • Posts: 3340
  • Atlanta, Georgia USA
  • Liked: 2233
  • Likes Given: 1584
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s, say, a second or less.

Ok, then what are the benefits of putting a high-performance processing system in space? How is it cheaper or better than a data center on Earth? Power, cooling, maintenance, and physical upgrades, to name a few issues, are all easier to handle on Earth.

As Coastal Ron mentioned, physical security is a possible answer. Maybe video streaming. What else?

Offline Tomness

  • Full Member
  • ****
  • Posts: 675
  • Into the abyss will I run
  • Liked: 299
  • Likes Given: 744
I always felt Google's investment in Starlink was to provide backbone capacity and new end-to-end encrypted direct connections. Say I have Starlink and I'm using Google Stadia, playing with another Stadia member in my area or elsewhere: it's my terminal to the data center and back to the other member. Or data center to data center via direct connections to transceivers, bypassing different ISPs and backbone overhauls.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Quote
SpaceX says it plans to increase Starlink's download speeds from ~100 Mbps currently to 10 Gbps in the future:

I understand correctly that these 10 Gbps will be satellites in the V band 37.5..42.5 GHz??

Or optical.
It'd be interesting to use optical for direct comms to high altitude aircraft which fly above the vast majority of weather anyway. Certainly the military would be interested in that. Very stealthy. But potentially could be used for commercial airliners (but that's probably way overkill... the market isn't big enough to need like terabit transmission rates to long haul aircraft, and just a phased array would be a lot simpler and more robust for commercial airliners).

Actually, that gives me an idea. Over on the East Coast, there are basically always several high-altitude jets overhead at any one time. You could put optical transceivers on commercial aircraft already doing regular high-altitude flights over densely populated areas, and those aircraft could act as local Starlink repeaters to enable higher-density coverage. The user terminals shouldn't need to change. The jetliners get extremely fast in-flight Internet in return. I think there might've been some startup that had a similar idea (but without the optical connection to Starlink).
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline dlapine

  • Full Member
  • ***
  • Posts: 356
  • University of Illinois
  • Liked: 209
  • Likes Given: 326
I'd note that we had a presentation last year from a 3rd-party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down.

Looking at costs, that one 2MT cabinet would cost you over $100K for launch costs alone.

For reference, one cabinet in an average center would cost you about $270 in monthly floor space costs (#1), plus $5 + $5 an hour for power and cooling at the 50 kW usage. Assuming that solar and radiators provided all of that benefit at no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, just for the launch costs.

Might need other benefits to do this in orbit.

#1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month."

Edited to do simple math.
« Last Edit: 01/27/2021 08:45 pm by dlapine »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s, say, a second or less.

Ok, then what are the benefits of putting a high-performance processing system in space? How is it cheaper or better than a data center on Earth? Power, cooling, maintenance, and physical upgrades, to name a few issues, are all easier to handle on Earth.

As Coastal Ron mentioned, physical security is a possible answer. Maybe video streaming. What else?
Maintenance and physical upgrades would simply not be done except very rarely (maybe 5 year intervals).

Power may actually be cheaper. The biggest problem with space based solar power isn't the cost of the solar panels (if you use silicon) or even launch (with Starship) but actually transmitting the energy, which requires a huge cost of transmitters (plus a huge receiving array on the ground only half an order of magnitude smaller than the equivalent production solar farm) and a big efficiency loss in the whole process, plus the regulatory headache of transmitting that much power through the atmosphere. If you just use the power in orbit, then it could be MUCH cheaper than space based solar power and potentially cheaper than terrestrial electricity, since you have a near 100% capacity factor and no significant intermittency (so the same solar array produces 5-10x as much electricity, you don't need an expensive battery, and no land is needed, either).

You still have to radiate a lot of heat, but radiators could be made fairly inexpensively (and wouldn't need refrigeration heat pump equipment like you would at a terrestrial data center; they'd just radiate passively using coolant loops or heat pipes).

Electricity in orbit could be 1-2 cents per kWh for consistent, 24/7 electricity (depending on orbit, you could have occasional eclipses) if Starship works. Nothing approaches that on Earth except maybe remote, orphaned electricity like flare gas or maybe stranded hydro (already paid for).
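For what it's worth, here is one way to get numbers in that 1-2 cents/kWh range. The hardware cost and specific power below are illustrative assumptions, not quotes, and financing, degradation, and radiator mass are all ignored:

```python
# Sketch of orbital electricity cost per kWh under loudly-labeled assumptions.
array_cost_per_w   = 1.00   # $/W for space-rated silicon panels + structure (assumed)
specific_power_wkg = 100    # W/kg for the whole power system (assumed)
launch_cost_kg     = 50     # $/kg (aspirational Starship figure from the thread)
lifetime_years     = 10
capacity_factor    = 0.99   # near-continuous sunlight in a high orbit

cost_per_w    = array_cost_per_w + launch_cost_kg / specific_power_wkg  # $ per installed W
kwh_per_w     = capacity_factor * 24 * 365 * lifetime_years / 1_000     # kWh per installed W
cents_per_kwh = cost_per_w / kwh_per_w * 100

print(f"Capital cost: ${cost_per_w:.2f}/W  ->  ~{cents_per_kwh:.1f} cents/kWh")  # ~1.7
```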
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
I'd note that we had a presentation last year from a 3rd-party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down.

Looking at costs, that one 2MT cabinet would cost you over $100K for launch costs alone.

For reference, one cabinet in an average center would cost you about $270 in monthly floor space costs (#1), plus $5 + $5 an hour for power and cooling at the 50 kW usage. Assuming that solar and radiators provided all of that benefit at no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, just for the launch costs.

Might need other benefits to do this in orbit.

#1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month."

Edited to do simple math.
16 months?? Consider that power plants can take 1-2 decades or even more to pay off. So that's not a bad timeline at all, especially as Moore's Law slows down (which it already has... more like 4 years now except in some niches). Back when computing tech was falling in price-per-performance by half every 18 months, 16 months seemed like a long time. Now we're at more like 48 months, so 16 months is doable.

Also, launch costs could get even lower than $50/kg-to-high-orbit (although that's not a bad working figure for fully reusable Starship).
« Last Edit: 01/27/2021 08:52 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline dlapine

  • Full Member
  • ***
  • Posts: 356
  • University of Illinois
  • Liked: 209
  • Likes Given: 326
And given our forced remote operations due to covid in our data centers over the last 11 months, I'd feel a lot better about estimating how often we'd need to send up someone to physically do something or the amount of "hot spare hardware" needed.

With optimization for server weight and some kind of standard in-orbit cargo unit, this would not be completely out of the question on costs, and with fast on-orbit connectivity, quite reasonable.

Just waiting for that $50/kg now. Or lower.  :D

Edit: typo
« Last Edit: 01/27/2021 08:53 pm by dlapine »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).
« Last Edit: 01/27/2021 09:30 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
The following would be easy to do, would be useful, and would not take huge storage (unlike, say, a YouTube cache on orbit). They would have the benefit of not constantly needing bandwidth from Earth for these high-use items, while being much less storage-intensive than media:

Wikipedia was designed from day 1 to be clone-able and able to re-sync on a regular basis. An on-orbit Wikipedia clone would be a major benefit for Starlink and would not be a copyright hell.

Also, all those open-source GitHub codebases designed to synchronize copies. Again, copyright-neutral and useful for SpaceX besides other users...

Update codebases for Windows, Linux, Apple, Android, cellphones, and smart TVs... even Tesla cars.

Game servers and codebases for various Xbox- and PlayStation-like products.

Portals for stock trading firms or even the markets - say a Dow Jones interface right in space.

Realtime tracking databases for everything.

Interfaces for services like FaceTime and Zoom

Any satellites that do imaging of Earth or space could feed their customers directly through Starlink and not even bother having high-bandwidth connections and equipment to communicate with Earth. This would make the FCC part of satellite approval a lot easier. Also consider the micro-satellites here in terms of power requirements: an IR laser and IR receiver need much less power than broadcasting to and receiving from Earth, and with no FCC approval needed they could have very fast approval periods for launch...

Offline RonM

  • Senior Member
  • *****
  • Posts: 3340
  • Atlanta, Georgia USA
  • Liked: 2233
  • Likes Given: 1584
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)

When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago and newer drives are probably more reliable.

Hot swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.

The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2911
  • Liked: 1127
  • Likes Given: 33
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).

The increasingly congested RF environment due to megaconstellations makes it harder to receive at full speed, as your receivers have to contend with signals other than the one you want (the fundamental drawback of shared spectrum), whereas optical links don't spread much beyond the target sat and so are effectively private/exclusive?

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
The small satellites especially could be designed much, much simpler.

Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that gets turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate their connections to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats, along with FCC-like regulations, and removes any potential interference with Starlink while providing direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user...

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)

When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago and newer drives are probably more reliable.

Hot swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.

The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline OTV Booster

  • Senior Member
  • *****
  • Posts: 5246
  • Terra is my nation; currently Kansas
  • Liked: 3640
  • Likes Given: 6204
The difference in cost of placing a disconnected groundstation within reach of every potential US customer to getting backbone access within reach of everyone is massive.
With a modest range of, say, 500 miles there is no technical need for disconnected ground stations. Even the remote parts of western North Dakota can be served by ground stations in Billings or Fargo, or Denver or Winnipeg, where there are backbones.

Starlink may want to handle the backhaul themselves so they can negotiate more favorable peering agreements, but they don't have to unless there is massive collusion between many different geographically diverse internet companies.
There is every incentive for SpaceX to have its own backbone.
1 - End users geographically between ground stations will likely flip between ground stations. This can't be handled by BGP routing. Heck, this isn't convenient to handle with routing at all. How do you allocate IPs in blocks between stations?
When you have your own continental backbone you can handle all the special characteristics of SL for routing/switching traffic.
2 - It's so much cheaper to lease dark fiber or 100G ptp ethernet links between stations than to purchase transit.
3 - You need a global network to peer with the big boys.
4 - It's much cheaper to purchase dozens of 100G worldwide transit links than to purchase 100s of 10G transit links. And the price of those might be cheaper in some primary locations even if the transit provider has its own leased fiber going through SL ground stations.

I agree that it would be beneficial, but I would propose that they are more likely to partner with one of the big tech companies, of which I could see 3 being viable partners: Amazon, Microsoft, and Google. It's unlikely to be Amazon, because of the direct competition (their competing sat network), and Google doesn't have the geographic diversity in data centers that Microsoft has.

SpaceX and Microsoft are already partnering on some offerings with Microsoft's new 'Azure Space' product line.

I could certainly see it being beneficial to all parties if Microsoft allowed SpaceX to put ground stations on the roofs of all their data centres: SpaceX gets easy access to high-speed connectivity to the wider world, and much better physical security for their ground stations, and Microsoft gets an edge with their Azure Space product, with lower response times to their own data centre offerings compared to competitors because the ground stations are on premises.



Here's an image of Microsoft's current and near-future data centers. While there are certainly gaps in coverage, if all of these locations became ground stations it would give them a credible start and ease ground station roll-out significantly: no need to worry about site security, power redundancy, internet backhaul redundancy, having someone on call who can turn the control system off and on again if needed, etc.
I wonder if Microsoft's underwater data center concept fits in anywhere.


https://arstechnica.com/information-technology/2020/09/microsoft-declares-its-underwater-data-center-test-was-a-success/


We've had discussions on how to do ground stations on big water. With a gyro stabilized buoy to carry an array...


Just thinking.
We are on the cusp of revolutionary access to space. One hallmark of a revolution is that there is a disjuncture through which projections do not work. The thread must be picked up anew and the tapestry of history woven with a fresh pattern.

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2911
  • Liked: 1127
  • Likes Given: 33
The small satellites especially could be designed much, much simpler.

Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that gets turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate their connections to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats, along with FCC-like regulations, and removes any potential interference with Starlink while providing direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user...

The small problem with that is each laser link receiver is exclusive during use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to service more than one customer at a time, you would need multiple receivers, separated sufficiently to not have beams potentially overlap.

Though that would be a great time to use an Archinaut or similar to build a truss to distance out the optical terminals on the relay sat. Better is a space-corral-like aggregate persistent platform, where you gradually add more truss and optical terminals. Though that does tend to favor putting the relay in GEO, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput and not latency. By having the timeshare negotiation RF link only pointing out to GEO, you probably can reduce the RF licensing needs considerably.

But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you were to use something else for backhaul, which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?).

Online tbellman

  • Full Member
  • ****
  • Posts: 662
  • Sweden
  • Liked: 977
  • Likes Given: 1
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.

And then you happen to get a generation of hard drives that fail a lot. Or a server model that drains its CR2032 batteries, so it loses its BIOS settings after a year or two and then refuses to boot until you can replace the battery. Or someone produces bad capacitor electrolyte again. Or an unnoticed bug in SSD firmware causes them to wear out or die prematurely. Or the server factory forgot to put in the rubber grommets on the fans, and the vibrations from the fans then kill the hard drives. (None of these are hypothetical, by the way.)

Then suddenly you have an entire datacentre failing on you, with no ability to get someone from the manufacturer to replace the substandard/broken components.

The normal, random component failures (disks, CPUs, DIMMs, PSUs, etc., failing now and then) you can plan for and live with. But there is a definite risk that you will be hit by systematic failures that can take out more or less your entire DC.

Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced. If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy (dissimilar redundancy if possible), or spend a lot of time and money to make sure that the stuff you buy is really high quality and dependable.

Online tbellman

  • Full Member
  • ****
  • Posts: 662
  • Sweden
  • Liked: 977
  • Likes Given: 1
Figure that 1 rack of 80 standard servers and networking weigh in at about 2MT (~4400LBs), takes 50KW of power and would like to have at least a 100 Gbs network link for HPC operations or at least a 10Gbs link for general IT.

(I think you mean "t", tonne, not "MT", which means mega-Tesla...)

50 kilowatts per rack is quite dense.  For 80 HPC servers without GPUs, I expect more like 25-30 kW.  Servers with GPUs use more power, but you will rarely be able to fit 80 GPU servers in a single rack; certainly not if you have more than one GPU per server.  Managing 50 kW per rack requires careful design of your cooling.

Network: For HPC, and assuming your HPC applications are not constrained within single racks, then you will want at least twenty 100 Gbit/s links per such rack, and importantly sub-microsecond latency to any other node within the HPC cluster.  So don't expect to build an HPC cluster with one rack per satellite; the distance between the satellites, and thus the latency, will kill you.
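
A minimal sketch of why inter-satellite latency rules out spreading an HPC cluster across satellites, assuming a ~1,000 km spacing between neighbours (the exact spacing is an assumption):

c = 299_792_458.0            # m/s
sat_spacing_km = 1_000.0     # assumed spacing between adjacent satellites
one_way_us = sat_spacing_km * 1e3 / c * 1e6
print(f"one-way light time between neighbours: ~{one_way_us:,.0f} microseconds")
# ~3,300 us, versus the sub-microsecond interconnect latency HPC codes expect.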

For general IT, then it depends a lot on what kind of general IT you are doing.  Quite a lot of places use dual 100 Gbit/s uplinks per rack these days (dual mostly for redundancy).

As for mass, remember that a datacenter contains more than the servers and network equipment.  Cooling and power distribution are big and heavy.

All normal computer equipment is designed to operate in an atmosphere to get cooling.  If your orbital DC is vented to vacuum, then you need some alternate way of cooling the components: liquid cooling loops connecting to all the major hot components (CPUs, GPUs, memory chips, flash chips, power electronics, network chips, etc.), and to the motherboards themselves for cooling all the components that don't have a direct connection to the cooling loops.  Alternatively, you can put the entire servers in a bath of mineral oil, pumping that oil as coolant.

(Oh, and you can't use hard disk drives in vacuum.  Solid state storage only.)

All of this will drive up mass, and costs.

The other alternative is to have your orbital DC pressurized, and have "normal" HVAC units and ventilation fans.  That will also drive up mass and costs.
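
For a feel of the vacuum-cooling case, a minimal Stefan-Boltzmann sketch for one 50 kW rack rejecting heat to deep space; the radiator temperature and emissivity are assumptions, and sun/Earth heat input is ignored:

SIGMA = 5.670e-8     # W/(m^2 K^4), Stefan-Boltzmann constant
Q = 50_000.0         # W, one rack's heat load (the 50 kW figure above)
T_rad = 320.0        # K, assumed radiator surface temperature
eps = 0.9            # assumed coating emissivity

flux = eps * SIGMA * T_rad**4          # W/m^2 radiated per face
area = Q / flux                        # m^2 of radiating face
print(f"~{flux:.0f} W/m^2 per face -> ~{area:.0f} m^2 one-sided, ~{area/2:.0f} m^2 of double-sided panel")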

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
The small satellites especially could be designed much, much simpler.

Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission.  So make those communication systems short-range Wi-Fi, turned off as the small sats move away from the carrier.  Since it's not communication over any real distance, or to/from Earth, it shouldn't require FCC approval, and it needs no power and sees no use after deployment.  So: a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't interfere with Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate their traffic onto Starlink sats via lasers. This removes all need for Earth communication equipment (and FCC-like licensing) from the small sats, removes any potential interference with Starlink, and still provides direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device plus a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then to the end user...

The small problem with that is that each laser link receiver is exclusive while in use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to serve more than one customer at a time you would need multiple receivers, separated enough that their beams can't overlap.

Though that would be a great time to use an Archinaut or similar to build a truss to space out the optical terminals on the relay sat. Better still is a space-corral-style aggregated persistent platform, where you gradually add more truss and optical terminals. That does tend to favor putting the relay in GEO, though, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput rather than latency. By having the timeshare-negotiation RF link point only out to GEO, you can probably reduce the RF licensing needs considerably.

But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you use something else for backhaul, which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?).

There are interesting ways around the RF-request problem.  A good starting point is to look at the undersea laser base-station and end-user units patented in the mid 1980s... and yes, that gets one into DARPA territory... but spreading to a wide beam, along with color (wavelength) options for the control/use requests, is pretty useful.  That implies separate laser receivers/transmitters for control/use requests, plus beam spreaders/collimation devices, so these might reside on a separate arm of the laser servers.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.

And then you happen to get a generation of hard drives that fail a lot.  Or a server model that drains their CR2032 batteries, so they lose their BIOS settings after a year or two and then refuse to boot until you can replace the battery.  Or someone produces bad capacitor electrolyte again.  Or an unnoticed bug in SSD firmware causes them to wear out or die prematurely.  Or the server factory forgot to put in the rubber grommets on the fans, and the vibrations from the fans then kill the hard drives.  (None of these are hypothetical, by the way.)

Then suddenly you have an entire datacentre failing on you, with no ability to get someone from the manufacturer to replace the substandard/broken components.

The normal, random component failures (disks, CPUs, DIMMs, PSUs, etc. failing now and then) you can plan for and live with.  But there is a definite risk that you will be hit by systematic failures that take out more or less your entire DC.

Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced.  If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy (dissimilar redundancy if possible), or spend a lot of time and money making sure that the stuff you buy really is high quality and dependable.

Wait, WHO said “no chance to replace or fix”? Not me! ;) In fact, I explicitly spoke about the ability to service with astronauts if you need to.

If SpaceX is making human spaceflight super cheap, I see no reason you couldn’t have contingency service missions.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline archae86

  • Member
  • Posts: 52
  • Albuquerque, NM, USA
  • Liked: 66
  • Likes Given: 103
(Oh, and you can't use hard disk drives in vacuum.  Solid state storage only.)
Commodity drives have traditionally vented to ambient, and rely on the air pressure to fly the heads.  So they have a minimum air pressure limit, generally expressed as an altitude equivalent to roughly 10,000 feet.  There were lots of hard drive failures up at Everest Base Camp in the old days as a result.

But not all hard drives are like that.  I have an HGST helium drive I bought used for a nice low price, presumably after it was retired from about 3.5 years service in a data center somewhere.  I kinda think it must be sealed, as helium would leave in a jiffy otherwise.

Offline rsdavis9

BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).

It may or may not be cheaper, but the lack of FCC spectrum allocation sure does help.

I would think satellites in MEO could communicate with the 550 km fleet by just talking to satellites that sit on the rim of the Earth as seen from the MEO satellite's viewpoint (rough geometry sketched below).
1. It allows the laser links on the 550 km satellites to only look tangentially to the Earth, i.e. not up or down, something they will already do to communicate with their in-plane and adjacent-plane neighbors.
2. Less interference from the earth or to the earth.
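
A rough geometry sketch of that limb-tangent link; the 8,000 km MEO altitude is an assumption:

import math

Re = 6371.0        # km, mean Earth radius
h_leo = 550.0      # km
h_meo = 8000.0     # km, assumed MEO altitude

# Link tangent to the LEO shell: right triangle between the two radius vectors.
d = math.sqrt((Re + h_meo)**2 - (Re + h_leo)**2)   # km slant range
print(f"slant range ~{d:,.0f} km, one-way light time ~{d / 299_792.458 * 1e3:.0f} ms")
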
« Last Edit: 01/28/2021 02:27 pm by rsdavis9 »
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).

It may or may not be cheaper, but the lack of FCC spectrum allocation sure does help.

I would think satellites in MEO could communicate with the 550 km fleet by just talking to satellites that sit on the rim of the Earth as seen from the MEO satellite's viewpoint.
1. It allows the laser links on the 550 km satellites to only look tangentially to the Earth, i.e. not up or down, something they will already do to communicate with their in-plane and adjacent-plane neighbors.
2. Less interference from the earth or to the earth.
You're not the only one who has said this, so let me clarify:

FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.
« Last Edit: 01/28/2021 03:32 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline OTV Booster

  • Senior Member
  • *****
  • Posts: 5246
  • Terra is my nation; currently Kansas
  • Liked: 3640
  • Likes Given: 6204
We've had discussions on how to do ground stations on big water. With a gyro stabilized buoy to carry an array...

Just thinking.

Why gyro-stabilize, when you could have a quad antenna pyramid on the buoy with an IMU and steer the beam electronically?
I'm not a phased-array or buoy wallah, but I do know there are limits to how far a beam can swing. Even with a quad array it might swing too far. The North Atlantic can get pretty rough.


If it can be reliable without a gyro, so be it.
We are on the cusp of revolutionary access to space. One hallmark of a revolution is that there is a disjuncture through which projections do not work. The thread must be picked up anew and the tapestry of history woven with a fresh pattern.

Offline Mark K

  • Full Member
  • *
  • Posts: 139
  • Wisconsin
  • Liked: 79
  • Likes Given: 30
I am still not seeing the use case for this.

Everything about isolated container data centers still makes more sense to me on Earth than in orbit, especially if you are talking high orbit. If we have a "semi-trailer" data center, it is a heck of a lot cheaper than $13K per month to dump it next to a warehouse in, say, upper Michigan with a connection to cold Lake Superior water for cooling.

Latency? This would be quicker than MEO for sure. Power? Yes, we will pay for power, but that is cheap relative to the capital cost of the solar cells and especially the cooling, which I feel people are writing off too easily. For big sustained power that is going to be an issue, since you only get radiative heat loss, so you will need to create shaded structure. If you are in low Earth orbit you will need a lot of batteries to power you through the night periods, unless you have some kind of beamed power (more capital cost).
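
A minimal sketch of the eclipse-battery point; the load, pack energy density, and depth of discharge are all assumed numbers:

power_kw = 200.0       # assumed data-center load
eclipse_min = 35.0     # roughly the longest eclipse at ~550 km
wh_per_kg = 200.0      # assumed pack-level energy density
dod = 0.8              # assumed usable depth of discharge
orbit_min = 95.0       # approximate orbital period at ~550 km

energy_kwh = power_kw * eclipse_min / 60.0
battery_kg = energy_kwh * 1000.0 / (wh_per_kg * dod)
cycles_per_year = 365 * 24 * 60 / orbit_min   # upper bound; not every orbit is eclipsed
print(f"~{energy_kwh:.0f} kWh per eclipse -> ~{battery_kg:,.0f} kg of battery, cycled up to ~{cycles_per_year:,.0f} times/year")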

Starlink's big connections make leaving the processing on Earth even easier, with the good connectivity it gives to out-of-the-way places.



Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
I am still not seeing the use case for this.

Everything about isolated container data centers still makes more sense to me on Earth than in orbit, especially if you are talking high orbit. If we have a "semi-trailer" data center, it is a heck of a lot cheaper than $13K per month to dump it next to a warehouse in, say, upper Michigan with a connection to cold Lake Superior water for cooling.

Latency? This would be quicker than MEO for sure. Power? Yes, we will pay for power, but that is cheap relative to the capital cost of the solar cells and especially the cooling, which I feel people are writing off too easily. For big sustained power that is going to be an issue, since you only get radiative heat loss, so you will need to create shaded structure. If you are in low Earth orbit you will need a lot of batteries to power you through the night periods, unless you have some kind of beamed power (more capital cost).

Starlink's big connections make leaving the processing on Earth even easier, with the good connectivity it gives to out-of-the-way places.

Solar cells can be extremely cheap. 6 cents per watt. Whole panels as cheap as 16 cents per watt. How much does a terrestrial 24/7 power station cost if you include fuel? Maybe $10/Watt? Way cheaper power in orbit is possible.
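
A minimal sketch of what that could mean per kWh delivered in orbit, using the per-watt panel figure above plus an assumed launch cost, array specific power, and array life:

panel_cost_per_w = 0.16    # $/W, the figure quoted above
w_per_kg = 100.0           # assumed array specific power
launch_per_kg = 50.0       # $/kg, the thread's Starship target
years = 10.0               # assumed array life, nearly always-sunlit orbit

launch_cost_per_w = launch_per_kg / w_per_kg
kwh_per_w = 24 * 365 * years / 1000.0
print(f"~${(panel_cost_per_w + launch_cost_per_w) / kwh_per_w:.3f}/kWh before structure, power electronics, and storage")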

Cooling is already going to liquid for a lot of stuff (for instance, Google’s AI training chips and supercomputers), and you just need to pump that fluid to exterior radiator loops where you can passively radiate the heat to deep space for “free.” Yes, radiators can be heavy, but they don’t have to be expensive at all. And so if you solve launch costs with Starlink, thermal is also taken care of.

Energy intensive computing which isn’t very latency sensitive (AI training, scam mining, big data processing/analysis jobs, rendering, basically all supercomputer simulations) can be far from LEO where heat rejection and power production are both much better, potentially better and cheaper than on Earth.
« Last Edit: 01/28/2021 06:09 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
The data center need not be in vacuum. It’d be easier to put it in somewhat pressurized nitrogen, air, or helium... or possibly a tank of nonconductive fluid for optimal heat transfer, like is done in some supercomputers. The nonconductive liquid would double as radiation shielding. It would enable greater density than typical servers.



Just need pumps to ensure continual flow:
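
A minimal sketch of the pump duty for one 50 kW rack-equivalent; the fluid properties and temperature rise are assumptions:

Q = 50_000.0     # W of heat to carry out to the radiators
cp = 1900.0      # J/(kg*K), assumed mineral-oil-like dielectric fluid
dT = 10.0        # K rise across the servers (assumed)
rho = 850.0      # kg/m^3 (assumed)

mdot = Q / (cp * dT)    # kg/s
print(f"~{mdot:.1f} kg/s, i.e. ~{mdot / rho * 1000 * 60:.0f} L/min of coolant flow")
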
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline indaco1

  • Full Member
  • **
  • Posts: 289
  • Liked: 64
  • Likes Given: 38
..

The small problem with that is that each laser link receiver is exclusive while in use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to serve more than one customer at a time you would need multiple receivers, separated enough that their beams can't overlap.
...

The data center satellites just have to be connected to a few ordinary Starlink satellites, leveraging existing hardware and frequencies, and they will be online.

Even better, they could simply be upgraded, more massive, double-duty Starlink satellites sharing the same orbits, permissions, frequencies, laser links, etc.

I can't really see any advantage for high orbit except continuous solar power. Maybe EM knows who could help with batteries :-D
Non-native English speaker and non-expert, be patient.

Offline vsatman

FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.

As far as I know, the FCC regulates spectrum from 9 kHz to 300 GHz, so 20+ THz lasers are not subject to FCC regulation
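
A quick sanity check on where optical links sit relative to that 300 GHz ceiling (the 1550 nm wavelength is just a typical telecom choice, not anything specific to Starlink):

c = 299_792_458.0       # m/s
wavelength_nm = 1550.0  # a typical telecom laser wavelength (assumed)
freq_thz = c / (wavelength_nm * 1e-9) / 1e12
print(f"1550 nm is ~{freq_thz:.0f} THz, far above the 300 GHz upper edge of FCC allocation")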

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.

As far as I know, the FCC regulates spectrum from 9 kHz to 300 GHz, so 20+ THz lasers are not subject to FCC regulation
That’s my point.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline DigitalMan

  • Full Member
  • ****
  • Posts: 1702
  • Liked: 1201
  • Likes Given: 76
FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.

As far as I know, the FCC regulates spectrum from 9 kHz to 300 GHz, so 20+ THz lasers are not subject to FCC regulation
That’s my point.

I found it interesting that in the video link posted in the SDA LEO thread yesterday, there was a comment regarding SAT -> SAT laser comms and additionally, SAT -> GROUND comms.

Laser comms could wind up being an interesting possibility for ground.
« Last Edit: 01/30/2021 05:47 pm by DigitalMan »

Offline Lar

  • Fan boy at large
  • Global Moderator
  • Senior Member
  • *****
  • Posts: 13469
  • Saw Gemini live on TV
  • A large LEGO storage facility ... in Michigan
  • Liked: 11869
  • Likes Given: 11116
FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.

As far as I know, the FCC regulates spectrum from 9 kHz to 300 GHz, so 20+ THz lasers are not subject to FCC regulation
That’s my point.
That AND they're more efficient (in use of 3d space, not necessarily power) since they can be so tight.
« Last Edit: 01/30/2021 08:52 pm by Lar »
"I think it would be great to be born on Earth and to die on Mars. Just hopefully not at the point of impact." -Elon Musk
"We're a little bit like the dog who caught the bus" - Musk after CRS-8 S1 successfully landed on ASDS OCISLY

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.

As far as I know, the FCC regulates spectrum from 9 kHz to 300 GHz, so 20+ THz lasers are not subject to FCC regulation
That’s my point.
That AND they're more efficient (in use of 3d space, not necessarily power) since they can be so tight.
They're more efficient per bit sent (because you can easily get much higher gain). Depending on the laser source, similar or slightly lower efficiency than ~30GHz radio per watt of photons.
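
A minimal sketch of the gain difference for the same aperture size, using the ideal circular-aperture formula; the 10 cm aperture is an assumption:

import math

def aperture_gain_db(diameter_m, wavelength_m):
    # Ideal circular-aperture gain: G = (pi * D / lambda)^2
    return 10 * math.log10((math.pi * diameter_m / wavelength_m) ** 2)

D = 0.10   # m, assumed aperture diameter for both links
print(f"optical, 1550 nm: {aperture_gain_db(D, 1550e-9):.0f} dBi")
print(f"Ka-band, 30 GHz : {aperture_gain_db(D, 299_792_458.0 / 30e9):.0f} dBi")
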
« Last Edit: 01/31/2021 02:22 am by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline mlindner

  • Software Engineer
  • Senior Member
  • *****
  • Posts: 2928
  • Space Capitalist
  • Silicon Valley, CA
  • Liked: 2240
  • Likes Given: 829
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.

In space you can't convect or conduct heat away, which are the two most efficient forms of heat dissipation. You can only radiate heat away which becomes very difficult with so much energy consumption going on. The size of the radiators are going to be massive, likely much larger than the solar panels used to collect the energy in the first place.
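
A rough sketch of the array-area side of that claim; the end-to-end W/m^2 figure is an assumption, and the comparison uses the ISS's roughly 2,500 m^2 of arrays:

racks = 5
kw_per_rack = 40.0
w_per_m2 = 150.0    # assumed average delivered electrical W/m^2 incl. pointing, packing, degradation

area_m2 = racks * kw_per_rack * 1000.0 / w_per_m2
print(f"~{area_m2:,.0f} m^2 of array for {racks} racks (ISS carries roughly 2,500 m^2)")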

Secondly you have radiation issues. Single bit upsets become huge issues and the hardware to handle that isn't cheap. Data corruption would become a massive issue given the density of the high performance compute. (As a percentage of volume, satellites have very little space dedicated to transistors.) The transistors would also be much smaller than is commonly used on spacecraft right now. DRAM would need to be some form of enhanced ECC memory more resilient than even normal ECC.

Data centers are already really hard to build properly. (For example the recent case of one burning down in Europe.)

If you're looking for exotic places to build data centers, the new fad is to build them under water and underground, because the water/earth insulates them from radiation effects, and in water it becomes much easier to dissipate all that heat rather than having to pay for the rather extreme cooling systems modern data centers require. It can just use pumps to pull in external water, circulate it and then expel it. Putting them in space is the complete opposite of that in terms of ease of access to electrical power, ease of cooling, and radiation environment. Data centers are being built next to hydroelectric dams so they can use the water for cooling and the cheap power from the dam. Data centers are also being built in places like Iceland, with its low temperatures for air cooling and its cheap geothermal power.

It's NEVER going to be the case that we'll put data centers in space until we can build the data centers there in the first place and there's a massive in-space human presence already.
« Last Edit: 04/17/2021 12:25 am by mlindner »
LEO is the ocean, not an island (let alone a continent). We create cruise liners to ride the oceans, not artificial islands in the middle of them. We need a physical place, which has physical resources, to make our future out there.

Offline Jarnis

  • Full Member
  • ****
  • Posts: 1314
  • Liked: 832
  • Likes Given: 204
This is very true. I could see some point in building some on the Moon (underground) for the ultimate offsite backup and to support operations there without the ping/lag to Earth-based servers, but even then this would be an extremely expensive undertaking and makes no sense until there is a serious permanent human presence.

On orbit I don't see anything fancier than a "router" existing anytime soon. So only whatever "brains" are required for efficient packet routing, and even there you want to absolutely minimize what you do, because power consumption and heat are big issues even for terrestrial routers as speeds and packet volumes go up.

Offline matthewkantar

  • Senior Member
  • *****
  • Posts: 2191
  • Liked: 2647
  • Likes Given: 2314
I dunno. If Starship can someday put a 150 ton data center in orbit for a couple of million dollars, the cooling and power can be worked out. Free land, free power, virtually unlimited connectivity, and impregnable security may make orbital server farms appealing.

Offline Keldor

  • Full Member
  • ****
  • Posts: 725
  • Colorado
  • Liked: 903
  • Likes Given: 127
I dunno. If Starship can someday put a 150 ton data center in orbit for a couple of million dollars, the cooling and power can be worked out. Free land, free power, virtually unlimited connectivity, and impregnable security may make orbital server farms appealing.

For a couple million dollars, I could buy a huge parcel of land in the Sahara and build a nice building to hold my data center.  Plenty of sunlight for the solar panels, so power is "free".  Sure, cooling will be annoying, but the Sahara is a nice cool environment with easy ways to dissipate heat compared to low Earth orbit.  Starlink can communicate with my data center in the Sahara just as easily as with a data center in orbit.  My latency is nice and low for my customers in Europe, and I can build another one in, say, some salt flat in Nevada to serve North America.  At least they don't spend 40% of their time orbiting out over the Pacific far away from anyone like they would in orbit.

Out in the middle of the Sahara, it's super easy to get technicians and replacement parts too.  Just fly them in from across the world or something.  Much cheaper than trying to get them into orbit!  And they can work in the peace and comfort of an environment with free gravity and virtually zero risk of explosive decompression followed by asphyxiation.

Of course, no one is going to build a data center in a remote part of the Sahara for what I should think are obvious reasons.  Every single one of those reasons is true for orbit as well, only many times worse.  I suppose there are worse places, like the bottom of the Marianas Trench, or in orbit around Jupiter, though.

Seriously, though.  Orbit is a terrible place to put anything that doesn't have a very good reason for requiring to be there.

Offline mlindner

  • Software Engineer
  • Senior Member
  • *****
  • Posts: 2928
  • Space Capitalist
  • Silicon Valley, CA
  • Liked: 2240
  • Likes Given: 829
I dunno. If Starship can someday put a 150 ton data center in orbit for a couple of million dollars, the cooling and power can be worked out. Free land, free power, virtually unlimited connectivity, and impregnable security may make orbital server farms appealing.

Sure, but what does that buy you? Land is highly available in the US already; it's just that most of it is so far from civilization that people consider it worthless for most purposes. Also, power isn't free in space, as I just mentioned: you need to build and unfurl those solar panels. Even a Starship isn't big enough to launch them without some sort of auto-assembly process, and the same goes for the radiators.

Also, there's one more aspect that's key: data centers require constant maintenance. Hard drives fail regularly because there are so many of them, as do processors. Even a very low failure rate results in a failure every few days or weeks that needs the sled pulled and the part replaced.
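
A minimal sketch of what that failure math looks like; the fleet sizes and annualized failure rates are assumed for illustration:

drives, drive_afr = 5_000, 0.015     # assumed drive count and annualized failure rate
servers, server_afr = 500, 0.03      # assumed server count and annualized failure rate

failures_per_year = drives * drive_afr + servers * server_afr
print(f"~{failures_per_year:.0f} failures/year, i.e. one every ~{365 / failures_per_year:.1f} days with nobody there to swap sleds")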

Regarding security, it's only the physical security that's good here, simply because it's difficult to get to. But if Starship exists to launch these data centers, it's suddenly also cheap to launch a satellite to fly up to the rack and hook into the data lines.
LEO is the ocean, not an island (let alone a continent). We create cruise liners to ride the oceans, not artificial islands in the middle of them. We need a physical place, which has physical resources, to make our future out there.

Offline matthewkantar

  • Senior Member
  • *****
  • Posts: 2191
  • Liked: 2647
  • Likes Given: 2314
So it will be cheap enough that someone can go and break in, but not cheap enough that someone can go and pull/push a few racks once a month?

Compared to other projects they are working on, orbital servers seem like an easy one, even if they scope in robotic server swaps. If launch were cheap enough it would be attractive. Travel from Boca Chica to LEO would take an hour or two. How long would it take to get to the Sahara?

Originally I was wondering why SpaceX would want to go into a low margin business like server farms, but Amazon has shown that if done well it can be a money volcano.

Offline Ludus

  • Full Member
  • ****
  • Posts: 1744
  • Liked: 1255
  • Likes Given: 1019
There would be greater efficiency of orbital data centers if the data input (as well as power) was coming from orbit already. That might be true if there is a constellation with laser links to Starlink that’s dedicated to 24/7 observation of the whole surface of the earth generating vast amounts of data.
« Last Edit: 04/17/2021 01:26 am by Ludus »

Offline rsdavis9

There would be greater efficiency of orbital data centers if the data input (as well as power) was coming from orbit already. That might be true if there is a constellation with laser links to Starlink that’s dedicated to 24/7 observation of the whole surface of the earth generating vast amounts of data.

Not to mention the 24 channels of 4k video from the reality shows on mars
or all the space telescopes
or ...
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline AC in NC

  • Senior Member
  • *****
  • Posts: 2484
  • Raleigh NC
  • Liked: 3630
  • Likes Given: 1950
Post #51 should have placed a knife in the heart of this thread.  Don't believe it's at all feasible after reading that.

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Hundreds of racks?   That's still a small datacenter.    Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling.  I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.

Offline matthewkantar

  • Senior Member
  • *****
  • Posts: 2191
  • Liked: 2647
  • Likes Given: 2314
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Hundreds of racks?   That's still a small datacenter.    Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling.  I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.

LEO puts you close 24/7/365 if you have a global mesh of laser linked satellites.

Offline cdebuhr

  • Full Member
  • ****
  • Posts: 845
  • Calgary, AB
  • Liked: 1436
  • Likes Given: 592
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Hundreds of racks?   That's still a small datacenter.    Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling.  I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.

LEO puts you close 24/7/365 if you have a global mesh of laser linked satellites.
If you've got a global mesh of laser-linked satellites, then your massive, insanely power-hungry, but very well-cooled terrestrial data center is just one short hop away from the Starlink network anyway.  I confess I found this idea intriguing at first, but the longer the discussion continues, the less sense it makes to me.  YMMV.

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Hundreds of racks?   That's still a small datacenter.    Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling.  I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.

LEO puts you close 24/7/365 if you have a global mesh of laser linked satellites.
Close as in ~20-40ms round trip time (or better).

Small deployments -- cache/compute/front-end, etc. -- on the ground stay close enough to their users (and would be a satellite hop away via a LEO network).

Similar racks in orbit will be too far away most of the time.   Migrating the data/state/... from compute satellite to compute satellite to keep it over the users would be insanely expensive in inter-satellite bandwidth and/or excess capacity.   If you're serving users on the ground it would be much cheaper to put a few racks in a small building with a couple of Starlink antennas.
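
A minimal sketch of what "keeping the data over the users" would cost in handover bandwidth; the footprint size and the amount of pinned state are assumptions:

state_tb = 10.0         # TB of hot state pinned to one region (assumed)
footprint_km = 1000.0   # diameter of a useful service footprint (assumed)
v_km_s = 7.6            # ~orbital speed at 550 km

pass_s = footprint_km / v_km_s
gbit_s = state_tb * 1e12 * 8 / pass_s / 1e9
print(f"footprint crossed in ~{pass_s:.0f} s -> ~{gbit_s:,.0f} Gbit/s of continuous handover traffic")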

Offline Ludus

  • Full Member
  • ****
  • Posts: 1744
  • Liked: 1255
  • Likes Given: 1019
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Hundreds of racks?   That's still a small datacenter.    Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling.  I don't think many people really understand the industrial scale of large datacenters these days.

Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.

It’s definitely not competitive for most purposes. That doesn’t mean there aren’t niches where it would make some sense. One might be related to legal requirements for where data is stored; not being on the territory of any country might be a selling point.

Offline LiamS

  • Member
  • Posts: 24
  • UK
  • Liked: 18
  • Likes Given: 73
There was an application I saw mentioned a while ago, somewhere I can't recall, that I thought might have some merit. It was a proposal to place a satellite 'data center' into orbit around the Moon or Mars, for instance, to act as a processing/data-storage offload for robotic missions in the vicinity.

So you could have a rover on Mars, as an example, that live-streams a camera feed up to the satellite, where you could use more energy-intensive processing techniques (GPU-based machine learning/AI, for instance) to handle path planning and higher-level goal determination more rapidly than the rover's onboard computers could. This would potentially allow much faster-traversing surface rovers without significantly more onboard power, while maintaining the fast vision-based path planning that makes operating a fast rover (say, on the order of 1 m/s) practical remotely. This sort of thing is obviously much more beneficial for locations a long way from Earth, with a high round-trip communication time.

At the risk of reductio ad absurdum, you could likely get away with an off-the-shelf RC car and a webcam (with some insulation for the electronics/battery, of course), while still being able to leverage the multi-kW processing capability of the orbiting data center.
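
A minimal latency sketch of why the offload helps so much at Mars; the Earth-Mars distance and orbiter altitude are assumed, typical values:

c_km_s = 299_792.458
au_km = 149_597_870.7
mars_dist_au = 1.5        # typical Earth-Mars distance; varies roughly 0.5-2.5 AU
orbiter_alt_km = 400.0    # assumed low Mars orbit

print(f"Earth round trip  : ~{2 * mars_dist_au * au_km / c_km_s / 60:.0f} minutes")
print(f"orbiter round trip: ~{2 * orbiter_alt_km / c_km_s * 1000:.1f} ms")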


Offline jboone

  • Member
  • Posts: 6
  • Portland, OR
  • Liked: 23
  • Likes Given: 7
An engineer I've been acquainted with for a few decades now has some interesting ideas on this subject: http://server-sky.com/

Offline jak Kennedy

  • Full Member
  • **
  • Posts: 265
  • Liked: 137
  • Likes Given: 763
Perhaps someone could clarify the difference in energy used by data centres as opposed to the energy needed by a satellite to receive, probably temporarily store, and then transmit the data? It seems that even small amounts of storage on a satellite for high-demand repeat data would be beneficial, e.g. the data for just-released movies on Netflix.

Edit: Do data centres only use so much power because they deal with 100,000s of users/requests? If so, then having data stored on Starlinks, where each satellite deals with a smaller number of users, should need less power, and comparing Starlink data centres to ground-based centres isn't valid.
« Last Edit: 04/18/2021 09:24 am by jak Kennedy »
... the way that we will ratchet up our species, is to take the best and to spread it around everybody, so that everybody grows up with better things. - Steve Jobs

Offline Nilof

  • Full Member
  • ****
  • Posts: 1173
  • Liked: 593
  • Likes Given: 707
Imho, Earth based data centers have the advantage of much easier cooling. But LEO servers may happen for hyperspecialized applications like fast content delivery caches for Starlink users, once Starlink is big enough. Otherwise it's extremely impractical for most applications and only really makes sense if the customer is in space as well (obviously internet connectivity on Mars is going to go through Martian servers).


Imho, if the only requirement is being able to communicate with Starlink via laser even when the sky is overcast, there's probably a better option: a conventional data center with a high-altitude balloon tethered to it, with laser receivers on the balloon and optical fiber along the tether.
« Last Edit: 04/19/2021 01:11 pm by Nilof »
For a variable Isp spacecraft running at constant power and constant acceleration, the mass ratio is linear in delta-v.   Δv = ve0(MR-1). Or equivalently: Δv = vef PMF. Also, this is energy-optimal for a fixed delta-v and mass ratio.

Offline volker2020

  • Full Member
  • ***
  • Posts: 319
  • Frankfurt, Germany
  • Liked: 326
  • Likes Given: 857
Being in orbit has a number of disadvantages.

- Cooling
- Energy
- Radiation
- No access for repair or upgrades

The only applications I could think of that are worth all the trouble might be criminal enterprises outside the law, or espionage. Looking at the current costs of launching, that might become a reality.

Online VaBlue

  • Full Member
  • ***
  • Posts: 321
  • Spotsylvania, VA
  • Liked: 507
  • Likes Given: 187
Perhaps someone could clarify the difference in energy used by data centres as opposed to the energy needed by a satellite to receive, probably temporarily store, and then transmit the data? It seems that even small amounts of storage on a satellite for high-demand repeat data would be beneficial, e.g. the data for just-released movies on Netflix.

Edit: Do data centres only use so much power because they deal with 100,000s of users/requests? If so, then having data stored on Starlinks, where each satellite deals with a smaller number of users, should need less power, and comparing Starlink data centres to ground-based centres isn't valid.

Power draw has little to do with the number of users, and lots to do with the HW that's drawing said power.  Each hard drive, each fan, each chip, each compute process itself draws power.  Onboard power supplies have to deliver enough power (by drawing power) to handle surges, normal ops, cooling, etc.  Yes, a large number of users will create more work for the HW (more processing), but power requirements per rack are designed to handle the maximum power draw every component in that rack can pull - so user load is already accounted for.

There is virtually no difference in power needs between a data center and a satellite, apart from whatever cooling (or heating) you might need.  More HW = more power draw.  Satellites can only generate so much energy, so it's not very likely that we'll see big data-center-type satellites anytime soon.

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
Perhaps someone could clarify the difference in energy used by data centres as opposed to the energy needed by a satellite to receive, probably temporarily store, and then transmit the data? It seems that even small amounts of storage on a satellite for high-demand repeat data would be beneficial, e.g. the data for just-released movies on Netflix.
Content caches on starlink are not orbital datacenters connecting directly to starlink via laser.
Quote
Edit: Do data centres only use so much power because they deal with 100,000s of users/requests? If so, then having data stored on Starlinks, where each satellite deals with a smaller number of users, should need less power, and comparing Starlink data centres to ground-based centres isn't valid.
A significant fraction of the energy used by a computer does not depend on how busy it is.   DRAM refresh, for instance, has to keep going or else memory contents are lost.   That fraction has been getting smaller as system designers increasingly focus on power management -- but most of that work is centered on single-user portable devices because that's where the biggest payback is in terms of number of units sold, and very often turning on power management increases latency (as it takes time for powered-off functional units to wake up and stabilize after the power comes back on).   So often power management features in server hardware have to be disabled or adjusted to avoid disrupting time-sensitive workloads.

So the path to high power efficiency is high utilization, and one path to high utilization is to have batchy background load that can be preempted on short notice when capacity is needed for more time-sensitive work. That's easier to do with a small number of big datacenters with 10+ MW of capacity than with hundreds of 100 kW data centers scattered across a continent.
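
A minimal sketch of that utilization argument; the idle power fraction and throughput numbers are assumed:

def joules_per_unit(util, p_max_w=400.0, idle_frac=0.5, units_per_s_at_full=100.0):
    # Power drawn rises linearly from the idle floor to full power with utilization.
    p = p_max_w * (idle_frac + (1.0 - idle_frac) * util)
    return p / (units_per_s_at_full * util)

for u in (0.1, 0.3, 0.7, 0.9):
    print(f"utilization {u:.0%}: {joules_per_unit(u):.1f} J per unit of work")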


Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2911
  • Liked: 1127
  • Likes Given: 33
There was an application I saw mentioned a while ago, somewhere I can't recall, that I thought might have some merit. It was a proposal to place a satellite 'data center' into orbit around the Moon or Mars, for instance, to act as a processing/data-storage offload for robotic missions in the vicinity.

So you could have a rover on Mars, as an example, that live-streams a camera feed up to the satellite, where you could use more energy-intensive processing techniques (GPU-based machine learning/AI, for instance) to handle path planning and higher-level goal determination more rapidly than the rover's onboard computers could. This would potentially allow much faster-traversing surface rovers without significantly more onboard power, while maintaining the fast vision-based path planning that makes operating a fast rover (say, on the order of 1 m/s) practical remotely. This sort of thing is obviously much more beneficial for locations a long way from Earth, with a high round-trip communication time.

At the risk of reductio ad absurdum, you could likely get away with an off-the-shelf RC car and a webcam (with some insulation for the electronics/battery, of course), while still being able to leverage the multi-kW processing capability of the orbiting data center.

There was a NIAC 2021 proposal for a pony express style system of data hauling cyclers (functionally semi-mobile satellite datacenters) to shuttle data from far probes.

Taking the station wagon full of tapes to the stars...

But the NIAC proposal sounds like they are still doing onload/offload via laser, rather than physically picking up the equivalent of an Amazon Snowball from a probe.

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
There was a NIAC 2021 proposal for a pony express style system of data hauling cyclers (functionally semi-mobile satellite datacenters) to shuttle data from far probes.

Taking the station wagon full of tapes to the stars...

But the NIAC proposal sounds like they are still doing onload/offload via laser, rather than physically picking up the equivalent of an Amazon Snowball from a probe.
TESS is doing something like this for itself; it's in a highly elliptical orbit.   Spends most of the orbit doing observations and storing the data, and then does high-rate data dumps around perigee while close to earth.


Online meekGee

  • Senior Member
  • *****
  • Posts: 14680
  • N. California
  • Liked: 14693
  • Likes Given: 1421
I'm not sure anyone in this thread actually knows anything about data centers based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power per rack. In a data center like Amazon's or Google's where it's been custom designed they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.

In space you can't convect or conduct heat away, which are the two most efficient forms of heat dissipation. You can only radiate heat away which becomes very difficult with so much energy consumption going on. The size of the radiators are going to be massive, likely much larger than the solar panels used to collect the energy in the first place.

Secondly you have radiation issues. Single bit upsets become huge issues and the hardware to handle that isn't cheap. Data corruption would become a massive issue given the density of the high performance compute. (As a percentage of volume, satellites have very little space dedicated to transistors.) The transistors would also be much smaller than is commonly used on spacecraft right now. DRAM would need to be some form of enhanced ECC memory more resilient than even normal ECC.

Data centers are already really hard to build properly. (For example the recent case of one burning down in Europe.)

If you're looking for exotic places to build data centers, the new fad is to build them under water and underground, because the water/earth insulates them from radiation effects, and in water it becomes much easier to dissipate all that heat rather than having to pay for the rather extreme cooling systems modern data centers require. It can just use pumps to pull in external water, circulate it and then expel it. Putting them in space is the complete opposite of that in terms of ease of access to electrical power, ease of cooling, and radiation environment. Data centers are being built next to hydroelectric dams so they can use the water for cooling and the cheap power from the dam. Data centers are also being built in places like Iceland, with its low temperatures for air cooling and its cheap geothermal power.

It's NEVER going to be the case that we'll put data centers in space until we can build the data centers there in the first place and there's a massive in-space human presence already.
Explain this 100 times, it won't make a bit of difference...  :)
ABCD - Always Be Counting Down

Offline su27k

  • Senior Member
  • *****
  • Posts: 6414
  • Liked: 9104
  • Likes Given: 885
This is not directly related to Starlink but it looks like Space Force is interested in data centers in orbit: Space Force chief technologist hints at future plans to build a digital infrastructure

Quote from: SpaceNews
Lisa Costa, chief technology and innovation officer of the U.S. Space Force, said the service is eyeing investments in edge computing, data centers in space and other technologies needed to build a digital infrastructure. 

“Clearly, the imperative for data driven, threat informed decisions is number one, and that means that we need computational and storage power in space, and high-speed resilient communications on orbit,” Costa said Jan. 13 at a virtual event hosted by GovConWire, a government contracting news site.

<snip>

A key goal of the Space Force is to be agile and “outpace our adversaries,” said Costa. Timely and relevant data is imperative, and that will require investments in government-owned and in commercial infrastructure in space, she added. “Things like cloud storage, elastic computing, critical computation for machine learning, infrastructure in and across orbits.”

Offline su27k

  • Senior Member
  • *****
  • Posts: 6414
  • Liked: 9104
  • Likes Given: 885
More indirectly related news:

Living on the edge: Satellites adopt powerful computers

Quote from: SpaceNews
The latest Apple Watch has 16 times the memory of the central processor on NASA’s Mars 2020 rover. For the new iPhone, 64 times the car-size rover’s memory comes standard.

For decades, people dismissed comparisons of terrestrial and space-based processors by pointing out the harsh radiation and temperature extremes facing space-based electronics. Only components custom built for spaceflight and proven to function well after many years in orbit were considered resilient enough for multibillion-dollar space agency missions.

While that may still be the best bet for high-profile deep space missions, spacecraft operating closer to Earth are adopting state-of-the-art onboard processors. Upcoming missions will require even greater computing capability.



Hewlett Packard Enterprise’s space station computer is in demand

Quote from: SpaceNews
Since traveling in February 2020 to the International Space Station, Spaceborne Computer-2 has completed 20 experiments focused on health care, communications, Earth observation and life sciences. Still, the queue for access to the off-the-shelf commercial computer linked to Microsoft’s Azure cloud keeps growing.

Mark Fernandez, principal investigator for Spaceborne Computer-2, sees a promising future for space-based computing. He expects increasingly capable computers to be installed on satellites and housed in orbiting data centers in the coming years. Edge processors will crunch data on the moon, and NASA’s lunar Gateway will host advanced computing resources, Fernandez told SpaceNews.

Fernandez, who holds a doctorate in scientific computing from the University of Southern Mississippi, served as software payload developer for HPE’s original Spaceborne Computer, a supercomputer that reached ISS in August 2017 and returned to Earth a year and a half later in a SpaceX Dragon cargo capsule.

Offline su27k

  • Senior Member
  • *****
  • Posts: 6414
  • Liked: 9104
  • Likes Given: 885
Space Development Agency experiment demonstrates on-orbit data processing

Quote from: SpaceNews
A data processor launched to orbit by the Space Development Agency has performed an early demonstration of autonomous data fusion in space, said one of the companies supporting the experiment.

Scientific Systems Company Inc. (SSCI) developed an artificial intelligence-enabled edge computer for the experiment known as POET, short for prototype on-orbit experimental testbed.

The POET payload rode to orbit on a Loft Orbital satellite that launched June 30 on the SpaceX Transporter-2 rideshare mission.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
I honestly think this does make sense, long-term.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
When I hear "data centers" I think "megawatts".

I honestly think this does make sense, long-term.
Long term, yes.   When there are "millions of people living and working in space" or a "city on mars" there will be need for megawatts of compute near them, whether on Mars, the Moon, or in orbital habitats.

Before that, when there is need for computing power near sensors to reduce communications volume it could make sense to put tens of kilowatts or more of compute near the sensors.   

But I haven't seen a concrete proposal involving Starlink in LEO that beats leaving most of the compute power on the ground.   Especially anything with latency-sensitive constraints.
 


Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
If your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online DanClemmensen

  • Senior Member
  • *****
  • Posts: 6045
  • Earth (currently)
  • Liked: 4765
  • Likes Given: 2021
If your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.
Sorry but that does not work. The satellites move, so your data would be orbiting the earth unless it was being continuously moved from satellite to satellite. Such movement of any appreciable amount of data is infeasible as it would require too much ISL bandwidth and power.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
If your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.
Sorry but that does not work. The satellites move, so your data would be orbiting the earth unless it was being continuously moved from satellite to satellite. Such movement of any appreciable amount of data is infeasible as it would require too much ISL bandwidth and power.
It works just fine for super common data you can just put on each Starlink satellite. Like Netflix cache, for instance.

“Infeasible” is a function of capability. Just handwaving without analysis doesn’t help anything.

The Netflix catalogue isn’t that big (can easily fit on small solid state devices) but it uses a large portion of Starlink bandwidth. Similar for other streaming video catalogues.
We explored this quantitatively I believe on this thread.

Might not make sense for current Starlink satellites, but once they become multiple tons, it may be worth doing.

Edge caching like this (where commonly used data is copied dozens of times so it doesn’t have to traverse the whole internet) is super common for terrestrial ISPs, and I don’t see why it won’t eventually happen for a satellite ISP like Starlink.

Since many satellites will be in view at any one time, it may even be possible to interleave the satellites: Starlink sat A has the Netflix catalogue, Starlink sat B has the Disney+ catalogue, etc.
« Last Edit: 02/10/2022 11:23 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online DanClemmensen

  • Senior Member
  • *****
  • Posts: 6045
  • Earth (currently)
  • Liked: 4765
  • Likes Given: 2021
If your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.
Sorry but that does not work. The satellites move, so your data would be orbiting the earth unless it was being continuously moved from satellite to satellite. Such movement of any appreciable amount of data is infeasible as it would require too much ISL bandwidth and power.
It works just fine for super common data you can just put on each Starlink satellite. Like Netflix cache, for instance.

“Infeasible” is a function of capability. Just handwaving without analysis doesn’t help anything.

The Netflix catalogue isn’t that big (can easily fit on small solid state devices) but it uses a large portion of Starlink bandwidth. Similar for other streaming video catalogues.
We explored this quantitatively I believe on this thread.

Might not make sense for current Starlink satellites, but once they become multiple tons, it may be worth doing.
Sorry, but the term "data center" usually implies something like Amazon AWS, not just a global cache of static data, at least to me. When we looked at a Netflix-type application, the best place for it was at the user terminal on the ground: latency is zero, and updates come via a continuous low-priority data stream in every beam. It's basically easier to store on millions of UTs than on tens of thousands of satellites.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
“You’re not allowed to pick the low hanging fruit of data center functionality!” is not a very persuasive argument to me.

And while the Netflix catalogue may be relatively small, it’s still way too big to put on all the user terminals:

(I already showed this picture, but you ignored it?)
« Last Edit: 02/11/2022 12:04 am by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
“You’re not allowed to pick the low hanging fruit of data center functionality!” is not a very persuasive argument to me.

And while the Netflix catalogue may be relatively small, it’s still way too big to put on all the user terminals:

(I already showed this picture, but you ignored it?)
I've looked over https://openconnect.netflix.com/en/appliances/

The entire Netflix catalog (~3000 TB in 2013 according to one source) cannot fit on one of these devices (360TB raw capacity on the largest one through a mix of traditional rotating media and solid state storage, but useful capacity will be lower).   It's a cache.   If it serves users in one particular locality it would likely have a higher hit rate than if it had to serve users all over the globe.

A couple of TB of cache in each ground station would likely be more cost-effective for Netflix-like workloads than 300TB in each satellite.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
300TB is only like $30,000 worth of enterprise SSD. It’s not that much. If Starlink satellites get larger as some have speculated, that’ll be pretty doable (with some basic shielding!).
« Last Edit: 02/11/2022 12:57 am by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online abaddon

  • Senior Member
  • *****
  • Posts: 3176
  • Liked: 4167
  • Likes Given: 5624
Netflix is waaaaay bigger now with far more content in 4k and far more content in general.  I’d be shocked if it’s not at least an order of magnitude bigger now.  The idea of caching the entire catalogue in a satellite is a non-starter.  A smart cache that keeps the top N% of shows, where N is a small number?  Sure.
« Last Edit: 02/11/2022 01:05 am by abaddon »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Netflix is waaaaay bigger now with far more content in 4k and far more content in general.  I’d be shocked if it’s not at least an order of magnitude bigger now.  The idea of caching the entire catalogue in a satellite is a non-starter.  A smart cache that keeps the top N% of shows, where N is a small number?  Sure.
Give me a number for its size. Also, I bet that 90% of watch time is from 10% or less of their catalogue.

EDIT: (see my post below… this became a lot more than just an edit LOL)
« Last Edit: 02/11/2022 06:10 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
Netflix is waaaaay bigger now with far more content in 4k and far more content in general.  I’d be shocked if it’s not at least an order of magnitude bigger now.  The idea of caching the entire catalogue in a satellite is a non-starter.  A smart cache that keeps the top N% of shows, where N is a small number?  Sure.
The cache size vs cache hit rate curves for each service would be interesting to see (and likely both highly proprietary, and very different for netflix vs youtube vs ...).

Given how existing caches are operated and the inability to visit satellites to swap out cache appliances, the system architecture of on-board caches would likely have to resemble a virtual-machine hosting model, which would have lower efficiency than the bare-iron appliance model currently used in caches.

On thinking about it some more, I suspect you're better off with larger caches in or near ground stations, each provided and operated by its streaming service and placed into colocation racks in the equipment huts. With power, cooling, and mass much cheaper on the ground, you could provision much more capacity, and caches that stay near their clients would benefit from any geographic effects on cache locality.
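
To make that concrete, here is a minimal sketch of how cache size might trade against hit rate, assuming title popularity follows a Zipf distribution; the exponent, per-title size, and title count are rough placeholder assumptions, not measured Netflix data.
Code:
# Rough cache hit-rate estimate under a Zipf popularity assumption.
# All inputs (title count, per-title size, exponent) are illustrative
# placeholders, not actual Netflix figures.

def zipf_hit_rate(num_titles: int, cached_titles: int, exponent: float = 1.0) -> float:
    """Fraction of requests served from a cache holding the most popular
    `cached_titles` items, with request probability ~ 1/rank**exponent."""
    weights = [1.0 / (rank ** exponent) for rank in range(1, num_titles + 1)]
    return sum(weights[:cached_titles]) / sum(weights)

num_titles = 5_000        # assumed titles visible in one country
title_size_gb = 25        # ~2.5 hours per title at ~10 GB/hour (4K + HD copies)
for cache_tb in (2, 20, 100, 360):
    cached = min(num_titles, int(cache_tb * 1000 / title_size_gb))
    print(f"{cache_tb:4d} TB cache -> ~{zipf_hit_rate(num_titles, cached):.0%} hit rate")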

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Ah, but maybe those things wouldn’t be much cheaper indefinitely. If you start with that as a given, it will force a certain result.

SpaceX uses pretty cheap solar panels, the cells of which are (I believe) manufactured for terrestrial applications. And they get a much better capacity factor than on the ground.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
......

clearly none of you guys have ever run a datacenter
Carl C.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
......

clearly none of you guys have ever run a datacenter
I did, in a previous life.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Nomadd

  • Senior Member
  • *****
  • Posts: 8895
  • Lower 48
  • Liked: 60678
  • Likes Given: 1334
......
clearly none of you guys have ever run a datacenter
I did, in a previous life.
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.
Those who danced were thought to be quite insane by those who couldn't hear the music.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Netflix’s catalogue varies in size (and may actually have gotten smaller in terms of hours as usage rights vary over time). The most recent articles from 2020-2021 say that Netflix has about 13,000-15,000 titles (but at most around 5000 available per country), translating to about 36,000 hours.

Netflix streams “4K” at just 7GB/hour, HD at 3GB/hour, so you could fit their entire worldwide catalogue in both 4K and HD in just 360TB.
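
A quick sanity check of that arithmetic, using only the figures quoted above:
Code:
# Back-of-envelope catalogue sizing from the figures quoted above.
catalogue_hours = 36_000     # ~13,000-15,000 titles worldwide
gb_per_hour_4k = 7           # Netflix "4K" stream
gb_per_hour_hd = 3           # Netflix HD stream

total_tb = catalogue_hours * (gb_per_hour_4k + gb_per_hour_hd) / 1000
print(f"Whole catalogue, stored in both 4K and HD: ~{total_tb:.0f} TB")   # ~360 TB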

The largest enterprise solid-state drives in the 3.5” form factor are 100TB, so you could easily fit a few of those in an EXISTING Starlink satellite.

(Although you wouldn’t use that exact form factor and you probably want to wait until the Starlink satellites grow significantly.)

And it’s not even that much data to serve. A Starlink satellite serves, what, 20Gbps?

Netflix’s OpenConnect appliances were serving 100Gbps of 4K content back in 2017.

I don’t think this will happen right away. Starlink satellites probably need to be about 10 times their current size so the overhead of hosting CDN boxes (plus a few inches of shielding) isn’t a problem. They also need millions of customers. The terminals probably need to talk to multiple satellites simultaneously so you don’t have to put the same content on every box.

But it’s not nearly as absurd as what you all are saying.

Again, in 2017: 100Gbps from a box that size containing the whole Netflix catalogue, and the box is much lighter than an existing Starlink satellite, let alone whatever size they eventually grow to. About 30 liters volume, 30kg mass, and 650W peak power. You could probably knock the volume and mass down by a factor of 4 or better, and possibly the power as well. But for a larger Starlink satellite, that's not even required.

Probably only SpaceX has the experience and the data to know if or when this would be worth it. Only they have experience with using lots and lots of COTS electronics in orbit. But I think this is worth considering eventually.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Starlink has about 6500W worth of solar power peak. One of these appliances uses about a tenth of that to serve 5 times the throughput. So from an energy perspective, it doesn’t look absurd.

Renting space in a small data center near each gateway would cost about $17,000 annually for a 650W appliance like that (large data centers would be much cheaper, but you can't always find a data center over 5,000 sq ft nearby). A Starlink satellite costs ~$250k to build and lasts 4-10 years, so we're in the right ballpark here for something using a tenth of its capacity.
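
As a sketch, here is that power and cost comparison spelled out; every input is one of the round numbers quoted in this post (appliance power, satellite solar power, colo rent, satellite cost), with a seven-year life picked arbitrarily from the quoted 4-10 year range.
Code:
# Ballpark power and cost comparison using the round numbers quoted above.
appliance_power_w = 650        # OpenConnect-class appliance, peak
starlink_solar_w = 6_500       # quoted peak solar power per satellite

colo_rent_per_year = 17_000    # small data center near a gateway (quoted)
satellite_cost = 250_000       # quoted build cost per satellite
satellite_life_years = 7       # arbitrary midpoint of the quoted 4-10 years

print(f"Appliance power as a share of bus power: {appliance_power_w / starlink_solar_w:.0%}")
amortized = satellite_cost / satellite_life_years
print(f"Satellite cost amortized per year:       ${amortized:,.0f}")
print(f"A tenth of that (rough CDN share):       ${amortized / 10:,.0f} vs ${colo_rent_per_year:,} ground colo rent")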
« Last Edit: 02/11/2022 06:25 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Netflix is waaaaay bigger now with far more content in 4k and far more content in general.  I’d be shocked if it’s not at least an order of magnitude bigger now.  The idea of caching the entire catalogue in a satellite is a non-starter.  A smart cache that keeps the top N% of shows, where N is a small number?  Sure.
oh really? Non-starter? Did you even try to calculate it?

As I proved above, this is just false. Netflix has 36,000 hours, equivalent to 360TB (if stored at both 4K & HD streaming rates). You can get 100TB SSDs that weigh 538 grams each, so 4 of those would weigh just 2kg (and 360TB worth of microSD cards weighs just 100 grams… although you’d want to double that to get any kind of useful life). (A full server may weigh 10 times that, but still reasonable… and realistically, SpaceX would probably just make their own CDN server-on-a-board with a bunch of NAND chips soldered to it that sits flush with the rest of the Starlink electronics with thermal interface and behind some shielding.)
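
The same mass arithmetic, spelled out (the per-device masses are the ones quoted above and are not independently verified):
Code:
# Storage mass for a 360TB catalogue, using the per-device figures quoted above.
catalogue_tb = 360
ssd_capacity_tb, ssd_mass_kg = 100, 0.538     # largest 3.5" enterprise SSD cited
microsd_tb, microsd_mass_kg = 1, 0.00025      # ~0.25 g per 1TB microSD card (rough)

n_ssd = -(-catalogue_tb // ssd_capacity_tb)   # ceiling division
print(f"{n_ssd} x {ssd_capacity_tb}TB SSDs: ~{n_ssd * ssd_mass_kg:.1f} kg")
print(f"microSD route: ~{catalogue_tb / microsd_tb * microsd_mass_kg * 1000:.0f} g "
      "(before doubling for wear headroom)")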

It’s just not that much data anymore.

But I get it, what you’re saying SOUNDS true, whereas the idea you can cache all of Netflix in orbit SOUNDS false… But the numbers just don’t look absurd at all.
« Last Edit: 02/11/2022 06:53 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline DreamyPickle

  • Full Member
  • ****
  • Posts: 955
  • Home
  • Liked: 921
  • Likes Given: 205
This is absolutely ridiculous.

What does this get you compared to placing Starlink ground stations next to existing data centers? This is a sensible thing that SpaceX is already doing.

You get maybe 20ms less latency towards Starlink customers at the expense of having to spread computing resources equally around the globe (because they move).

Online DanClemmensen

  • Senior Member
  • *****
  • Posts: 6045
  • Earth (currently)
  • Liked: 4765
  • Likes Given: 2021
This is absolutely ridiculous.

What does this get you compared to placing Starlink ground stations next to existing data centers? This is a sensible thing that SpaceX is already doing.

You get maybe 20ms less latency towards Starlink customers at the expense of having to spread computing resources equally around the globe (because they move).
Also, for the only use case being proposed (Netflix), latency is not important.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
This is absolutely ridiculous.

What does this get you compared to placing Starlink ground stations next to existing data centers? This is a sensible thing that SpaceX is already doing.

You get maybe 20ms less latency towards Starlink customers at the expense of having to spread computing resources equally around the globe (because they move).
It saves you Gateway bandwidth.

For every Gbps served from the on-board CDN, that’s one less Gbps that has to come up through a gateway. Since streaming video traffic is most of today’s bandwidth usage, that’s not a small effect.

And this makes an even bigger difference once you add ISLs because it means people far from a gateway don’t need to wait. And you’re not using up all that ISL bandwidth just for streaming video.
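
A rough illustration of how much gateway and ISL capacity an on-board cache could free up; the 50% video share and 80% cache hit rate are placeholder assumptions, not measured Starlink numbers.
Code:
# Gateway/ISL bandwidth freed by an on-board cache, under placeholder assumptions.
sat_downlink_gbps = 20     # per-satellite user throughput used in this thread
video_share = 0.5          # assumed fraction of traffic that is streaming video
cache_hit_rate = 0.8       # assumed fraction of that video served from the cache

saved_gbps = sat_downlink_gbps * video_share * cache_hit_rate
print(f"Gateway/ISL traffic avoided: ~{saved_gbps:.0f} of {sat_downlink_gbps} Gbps "
      f"({saved_gbps / sat_downlink_gbps:.0%})")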

Also, renting a small data center near a gateway isn’t free, either.
« Last Edit: 02/11/2022 07:10 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.

ok I take it back =D

I'm not sure what putting a datacentre up in space will help is all.  Power may be abundant, but
1) processing creates a lot of heat, how are you going to shed it all?  radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......
2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work
3) if you're doing SSDs, well unless you also have processing with it, what's the point?  the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth.  If you fly higher to reduce drag, then you're increasing comms time and latency again......

It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but......what's the point now?
Carl C.

Offline DreamyPickle

  • Full Member
  • ****
  • Posts: 955
  • Home
  • Liked: 921
  • Likes Given: 205
Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I heard is how they're limited on how many user terminals they can support.

Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?

Quote
Also, renting a small data center near a gateway isn’t free, either
SpaceX is building ground stations next to data centers that already exist.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.

ok I take it back =D

I'm not sure what putting a datacentre up in space will help is all.  Power may be abundant, but
1) processing creates a lot of heat, how are you going to shed it all?  radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......
2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work
3) if you're doing SSDs, well unless you also have processing with it, what's the point?  the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth.  If you fly higher to reduce drag, then you're increasing comms time and latency again......

It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but......what's the point now?
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlinks are much larger, it really does make sense.
« Last Edit: 02/11/2022 07:30 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I heard is how they're limited on how many user terminals they can support.

Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?

Quote
Also, renting a small data center near a gateway isn’t free, either
SpaceX is building ground stations next to data centers that already exist.
Lasers are cheaper than radio, or they would just use radio to connect the satellites. But also, I think it makes the most sense to just cohost the CDN server directly on the Starlink bus.

…once Starlinks get bigger.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.

ok I take it back =D

I'm not sure what putting a datacentre up in space will help is all.  Power may be abundant, but
1) processing creates a lot of heat, how are you going to shed it all?  radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......
2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work
3) if you're doing SSDs, well unless you also have processing with it, what's the point?  the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth.  If you fly higher to reduce drag, then you're increasing comms time and latency again......

It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but......what's the point now?
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlink’s are much larger, it really actually does make sense.

yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience?  like what would you host up there that makes more sense than hosting down on earth??  what exactly is the use case?  it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?
« Last Edit: 02/11/2022 07:59 pm by Naito »
Carl C.

Offline rsdavis9

I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.

ok I take it back =D

I'm not sure what putting a datacentre up in space will help is all.  Power may be abundant, but
1) processing creates a lot of heat, how are you going to shed it all?  radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......
2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work
3) if you're doing SSDs, well unless you also have processing with it, what's the point?  the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth.  If you fly higher to reduce drag, then you're increasing comms time and latency again......

It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but......what's the point now?
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlink’s are much larger, it really actually does make sense.

yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience?  like what would you host up there that makes more sense than hosting down on earth??  what exactly is the use case?  it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?

The closest place to a remote user is the satellite.
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline Lars-J

  • Senior Member
  • *****
  • Posts: 6809
  • California
  • Liked: 8487
  • Likes Given: 5385
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlink’s are much larger, it really actually does make sense.

yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience?  like what would you host up there that makes more sense than hosting down on earth??  what exactly is the use case?  it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?

Because a person came up with an impractical idea, gets defensive about it when confronted about its practicality and usefulness, and then refuses to back down. It happens frequently on this forum, usually by newbies but even relative old-timers are not immune to it, it seems.

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. 
 I miss being able to tell data speed by listening to it.

ok I take it back =D

I'm not sure what putting a datacentre up in space will help is all.  Power may be abundant, but
1) processing creates a lot of heat, how are you going to shed it all?  radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......
2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work
3) if you're doing SSDs, well unless you also have processing with it, what's the point?  the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth.  If you fly higher to reduce drag, then you're increasing comms time and latency again......

It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but......what's the point now?
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlink’s are much larger, it really actually does make sense.

yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience?  like what would you host up there that makes more sense than hosting down on earth??  what exactly is the use case?  it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?

The closest place to a remote user is the satellite.

Sure….that makes sense for starlink itself in order to provide the connection and why it’s been successful.  But moving the datacentre too provides zero additional benefit.
Carl C.

Online envy887

  • Senior Member
  • *****
  • Posts: 8166
  • Liked: 6836
  • Likes Given: 2972
Sure….that makes sense for starlink itself in order to provide the connection and why it’s been successful.  But moving the datacentre too provides zero additional benefit.

It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?

Online shark0302

  • Member
  • Posts: 20
  • Liked: 11
  • Likes Given: 0
Sure….that makes sense for starlink itself in order to provide the connection and why it’s been successful.  But moving the datacentre too provides zero additional benefit.

It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?
I think if you take into account laser links and assume each satellite holds 1 PB of the most-requested data, like a normal ISP PoP does, that cuts down on your round-trip time. The best example would actually be PlayStation and Xbox game downloads. That would literally free up more bandwidth for latency-sensitive things.

I'm not thinking of this as a full data center. I'm looking at it as a PoP or backbone interchange, which makes sense for the heaviest file transfers. As an example, your average Xbox game is 50 to 100 gigabytes. Suppose you are connected to a single Starlink satellite that has that game file stored locally. Assuming the satellite sends to the user terminal at 200 Mb/s, a 50 GB download would take roughly 2,000 seconds (a bit over half an hour); at 200 MB/s it would be closer to four minutes.
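
The download time hinges on whether that link rate is read as megabits or megabytes per second; a quick calculation of both readings (the 200 figure is this post's assumption):
Code:
# Download time for a 50 GB game at an assumed per-terminal link rate.
game_gb = 50
for label, rate_gbit_s in (("200 Mb/s", 0.2), ("200 MB/s", 1.6)):
    seconds = game_gb * 8 / rate_gbit_s
    print(f"{label}: ~{seconds:,.0f} s (~{seconds / 60:.0f} min)")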

Sent from my SM-T860 using Tapatalk
« Last Edit: 02/12/2022 12:58 am by shark0302 »

Offline dplot123

  • Member
  • Posts: 6
  • Liked: 3
  • Likes Given: 17
Starlink pays tier 1 ISPs backhaul costs for its data which would be avoided if the content is hosted directly on the satellite. Hard to find any numbers on this, but some sources say about $2 per month for 1 Mbps of backhaul from a tier 1 ISP. If 12% of traffic is Netflix and a satellite carries 20 Gbps, that's 2.4 Gbps of Netflix traffic per satellite, or $4,800/month and $57,600/year in backhaul for that one satellite. No idea how much Netflix traffic the entire constellation actually carries per month, since the satellites aren't being used at max capacity 24/7, but the savings add up depending on CDN costs. They are already building ground stations next to data centers, but I assume there are substantial expenses with that and it can't be done everywhere. https://blog.telegeography.com/wan-pricing-mythbusters-do-tier-1-carriers-charge-more-for-dia
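
The arithmetic behind that estimate, using the post's own inputs (the $2 per Mbps-month transit price and the 12% Netflix share are assumptions from the cited sources, not confirmed Starlink figures):
Code:
# Backhaul cost estimate using the assumptions stated above.
transit_usd_per_mbps_month = 2.0   # quoted tier-1 transit price (assumption)
netflix_share = 0.12               # quoted share of total traffic (assumption)
sat_throughput_gbps = 20           # per-satellite throughput used above

netflix_mbps = sat_throughput_gbps * 1000 * netflix_share          # 2,400 Mbps
monthly = netflix_mbps * transit_usd_per_mbps_month
print(f"Netflix traffic per satellite: {netflix_mbps:,.0f} Mbps")
print(f"Transit cost avoided: ${monthly:,.0f}/month, ${monthly * 12:,.0f}/year")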
« Last Edit: 02/12/2022 09:47 am by dplot123 »

Offline vsatman

Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I heard is how they're limited on how many user terminals they can support.

Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?

Quote
Also, renting a small data center near a gateway isn’t free, either
SpaceX is building ground stations next to data centers that already exist.
Lasers cheaper than radio or they would just use radio to connect the satellites. But also, I think it makes the most sense to just cohost the CDN server directly on the Starlink bus.
…once Starlinks get bigger.

One of the arguments against placing a server on a satellite is that the satellite moves around the planet, so whatever content is stored on board has to be in demand all over the planet. The Super Bowl is certainly a hit in the US, but who in Europe or Asia is interested in it?

And one more note from satellite design practice: getting a kilowatt on board a satellite in Earth orbit is not a problem. The real problem is how to get rid of it, because 95% of the transmitter's energy turns into heat, and in orbit you can only reject heat by thermal radiation.
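
To put the heat-rejection point in numbers, a minimal radiator-sizing sketch using the Stefan-Boltzmann law; the 300 K panel temperature, 0.9 emissivity, and the decision to ignore absorbed sunlight and Earth IR are simplifying assumptions, so real panels would need to be larger.
Code:
# Minimal radiator sizing: area needed to reject a heat load by thermal
# radiation alone (ignores absorbed sunlight and Earth IR, so this is an
# optimistic lower bound).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(heat_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to radiate heat_w watts from `sides` faces."""
    return heat_w / (sides * emissivity * SIGMA * temp_k ** 4)

for load_w in (650, 6500):
    print(f"{load_w:5d} W -> ~{radiator_area_m2(load_w):.1f} m^2 (two-sided panel at 300 K)")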

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
Starlink pays tier 1 ISPs backhaul costs for its data which would be avoided if the content is hosted directly on the satellite.
Backhaul costs could also be avoided if they provision space & power for third-party CDN appliances near their ground-based networking gear -- either at the ground station site or at other locations where it makes sense within their ground-based network.  If there's enough traffic to matter, Netflix and friends will show up with the gear.

Online Barley

  • Full Member
  • ****
  • Posts: 1075
  • Liked: 739
  • Likes Given: 409
Starlink pays tier 1 ISPs backhaul costs for its data which would be avoided if the content is hosted directly on the satellite.
This was discussed several years ago.  I was told that transit costs are for sourcing data and that there was no cost for sinking data so a consumer level ISP that sinks more data than it sources would not be paying for transit.  I would appreciate a pointer to information on what is actually metered and charged for in interactions between different tiers.

If the transmitter pays, then this is not Starlink's problem. Netflix can do what it wants, which would probably be to provide caches at or close to some, but not all, Starlink gateways.

Offline AC in NC

  • Senior Member
  • *****
  • Posts: 2484
  • Raleigh NC
  • Liked: 3630
  • Likes Given: 1950
Sure….that makes sense for starlink itself in order to provide the connection and why it’s been successful.  But moving the datacentre too provides zero additional benefit.

It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?

40%

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
This was discussed several years ago.  I was told that transit costs are for sourcing data and that there was no cost for sinking data so a consumer level ISP that sinks more data than it sources would not be paying for transit.  I would appreciate a pointer to information on what is actually metered and charged for in interactions between different tiers.
It's more complicated than that. Much is covered by confidential bilateral agreements between providers, but the results of those agreements can be observed from the paths that packets follow.

When two entities have equipment in the same facility, it generally costs little to string a short cable or two between them to exchange traffic but it may cost a lot to haul the traffic to and from that point. 

Each provider has its own policies.  Some are picky about who they'll interconnect with; others are more open.  Generally, if traffic flow is approximately balanced, no money changes hands, but some entities don't even care about balanced traffic. 

A random blog post that's more or less consistent with what I've heard over the years:

https://blog.telegeography.com/settlement-free-paid-peering-definition

https://peeringdb.com/ claims to track about 1/3 of the inter-ISP connection points; Starlink's entry is here: https://www.peeringdb.com/net/18747
