Author Topic: Orbital Data Centers connecting directly to Starlink via laser  (Read 24227 times)

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
Quote
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month.
I miss being able to tell data speed by listening to it.

ok, I take it back =D

I'm not sure what putting a datacentre up in space will help, is all. Power may be abundant, but:
1) Processing creates a lot of heat; how are you going to shed it all? Radiators can only get so big. Starlink is in low orbit and pointing downwards, so unless you fly lower than the Starlink satellites (which means more drag), you can't have such giant radiators...
2) Storage is physically very heavy, and if you're going for capacity, most spinning rust still expects an atmosphere for the drive heads to work.
3) If you're doing SSDs, well, unless you also have processing with it, what's the point? The performance provided by SSDs is negated by the distance, which increases lag and reduces bandwidth. And if you fly higher to reduce drag, you're increasing comms time and latency again...

It all seems like a much harder, more expensive way to do something that really doesn't need to be done in space, a "cuz we can" thought exercise.

I'm sure eventually we'll have big computing in space, but... what's the point now?
Carl C.

Offline DreamyPickle

  • Full Member
  • ****
  • Posts: 955
  • Home
  • Liked: 921
  • Likes Given: 205
Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I've heard is that they're limited in how many user terminals they can support.

Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?

Quote
Also, renting a small data center near a gateway isn’t free, either
SpaceX is building ground stations next to data centers that already exist.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Quote
I'm not sure what putting a datacentre up in space will help, is all. Power may be abundant, but:
1) Processing creates a lot of heat; how are you going to shed it all? Radiators can only get so big. Starlink is in low orbit and pointing downwards, so unless you fly lower than the Starlink satellites (which means more drag), you can't have such giant radiators...
2) Storage is physically very heavy, and if you're going for capacity, most spinning rust still expects an atmosphere for the drive heads to work.
3) If you're doing SSDs, well, unless you also have processing with it, what's the point? The performance provided by SSDs is negated by the distance, which increases lag and reduces bandwidth. And if you fly higher to reduce drag, you're increasing comms time and latency again...

I'm sure eventually we'll have big computing in space, but... what's the point now?
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6,500W of peak solar, and a lot of that already ends up as heat), weighs ~30kg, and that's not weight-optimized at all. And you likely wouldn't use an appliance like that as-is, but instead something like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlinks are much larger, it really does make sense.
« Last Edit: 02/11/2022 07:30 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Quote
Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I've heard is that they're limited in how many user terminals they can support.

Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?

Lasers are cheaper than radio, or they would just use radio to connect the satellites. But I also think it makes the most sense to co-host the CDN server directly on the Starlink bus...

...once Starlinks get bigger.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
Quote
I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6,500W of peak solar, and a lot of that already ends up as heat), weighs ~30kg, and that's not weight-optimized at all. And you likely wouldn't use an appliance like that as-is, but instead something like a server-on-a-PCB that plugs directly into the Starlink bus.

Once Starlinks are much larger, it really does make sense.

Yeah, but... why? Why would you put a Netflix appliance up in space when you want it as local as possible to the audience you're serving? What would you host up there that makes more sense than hosting it down on Earth? What exactly is the use case? It's not cheaper, it's not more reliable, and it's definitely more complex and exposed to additional dangers like radiation and space debris... why?
« Last Edit: 02/11/2022 07:59 pm by Naito »
Carl C.

Offline rsdavis9

Quote
Yeah, but... why? Why would you put a Netflix appliance up in space when you want it as local as possible to the audience you're serving? What would you host up there that makes more sense than hosting it down on Earth? What exactly is the use case? It's not cheaper, it's not more reliable, and it's definitely more complex and exposed to additional dangers like radiation and space debris... why?

The closest place to a remote user is the satellite.
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.

Offline Lars-J

  • Senior Member
  • *****
  • Posts: 6809
  • California
  • Liked: 8487
  • Likes Given: 5385
Quote
Yeah, but... why? Why would you put a Netflix appliance up in space when you want it as local as possible to the audience you're serving? What would you host up there that makes more sense than hosting it down on Earth? What exactly is the use case? It's not cheaper, it's not more reliable, and it's definitely more complex and exposed to additional dangers like radiation and space debris... why?

Because a person came up with an impractical idea, gets defensive when confronted about its practicality and usefulness, and then refuses to back down. It happens frequently on this forum, usually with newbies, but it seems even relative old-timers are not immune.

Online Naito

  • Full Member
  • **
  • Posts: 206
  • Toronto, Canada
  • Liked: 72
  • Likes Given: 54
Quote
The closest place to a remote user is the satellite.

Sure... that makes sense for Starlink itself, in order to provide the connection, and it's why Starlink has been successful. But moving the datacentre up too provides zero additional benefit.
Carl C.

Online envy887

  • Senior Member
  • *****
  • Posts: 8166
  • Liked: 6836
  • Likes Given: 2972
Quote
Sure... that makes sense for Starlink itself, in order to provide the connection, and it's why Starlink has been successful. But moving the datacentre up too provides zero additional benefit.

It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?

Offline shark0302

  • Member
  • Posts: 20
  • Liked: 11
  • Likes Given: 0
Quote
It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?
I think if you take laser links into account and assume each satellite holds 1 PB of the most-requested data, like a normal ISP PoP does, that cuts down on your round-trip time. The best example would actually be PlayStation and Xbox game downloads. That would free up more bandwidth for latency-sensitive things.

I'm not thinking of this as a full data center. I'm looking at it as a PoP or backbone interchange, which makes sense for the heaviest file transfers. As an example, your average Xbox game is 50 to 100 gigabytes. If you are connected to a single Starlink satellite that has the game file stored locally, then at 200 Mbit/s to the user terminal, a 50 GB download would take roughly 2,000 seconds (about 33 minutes) from start to finish.
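The download-time figure is worth a quick check. A minimal sketch, assuming "200mb/s" means a 200 Mbit/s link to the user terminal and decimal gigabytes:

```python
# Download-time check: 50 GB game file over a 200 Mbit/s link.
file_bits = 50 * 8e9     # 50 GB expressed in bits (decimal GB)
rate_bps = 200e6         # 200 Mbit/s to the user terminal
seconds = file_bits / rate_bps
minutes = seconds / 60
print(f"50 GB at 200 Mbit/s: {seconds:.0f} s (~{minutes:.0f} min)")
# prints "50 GB at 200 Mbit/s: 2000 s (~33 min)"
```

So a locally cached 50 GB game takes about half an hour at that rate, not two minutes; the caching win is in avoided backhaul, not raw transfer time.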

Sent from my SM-T860 using Tapatalk
« Last Edit: 02/12/2022 12:58 am by shark0302 »

Offline dplot123

  • Member
  • Posts: 6
  • Liked: 3
  • Likes Given: 17
Starlink pays tier-1 ISPs backhaul costs for its data, which would be avoided if the content were hosted directly on the satellite. It's hard to find numbers on this, but some sources quote about $2 per Mbps per month for backhaul from a tier-1 ISP. If 12% of traffic is Netflix and a satellite carries 20 Gbps, that's 2.4 Gbps of Netflix traffic, or $4,800/month ($57,600/year) for one satellite. No idea how much Netflix traffic the entire constellation actually carries, since the satellites aren't at max capacity 24/7, but the savings add up depending on CDN costs. They are already building ground stations next to data centers, but I assume there are substantial expenses with that and it can't be done everywhere. https://blog.telegeography.com/wan-pricing-mythbusters-do-tier-1-carriers-charge-more-for-dia
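The cost arithmetic in that estimate can be reproduced directly. A sketch using the post's assumed figures ($2 per Mbps per month, 20 Gbps per satellite, 12% Netflix share; all of these are the post's assumptions, not published SpaceX numbers):

```python
# Rough per-satellite backhaul-cost estimate from the figures above.
price_per_mbps_month = 2.0      # assumed $/Mbps/month from a tier-1 ISP
sat_capacity_mbps = 20_000      # assumed 20 Gbps per satellite
netflix_share = 0.12            # assumed 12% of traffic is Netflix

netflix_mbps = sat_capacity_mbps * netflix_share   # 2400 Mbps
monthly = netflix_mbps * price_per_mbps_month      # $/month for one satellite
yearly = monthly * 12
print(f"{netflix_mbps:.0f} Mbps -> ${monthly:,.0f}/month, ${yearly:,.0f}/year")
# prints "2400 Mbps -> $4,800/month, $57,600/year"
```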
« Last Edit: 02/12/2022 09:47 am by dplot123 »

Offline vsatman

Quote
Lasers are cheaper than radio, or they would just use radio to connect the satellites. But I also think it makes the most sense to co-host the CDN server directly on the Starlink bus... once Starlinks get bigger.

One of the arguments against placing a server on a satellite is that the satellite is moving around the planet. So what content do you store on the server that is in demand all over the planet? The Super Bowl is certainly a hit in the US, but who in Europe or Asia is interested in it?

And one more note from the practice of designing satellites: getting a kilowatt on board a satellite in Earth orbit is not a problem. The real problem is how to get rid of it, because ~95% of the transmitter's energy turns into heat, and you can get rid of heat only by thermal radiation.
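To put a number on that last point, the Stefan-Boltzmann law sets the radiator area needed. A rough, illustrative sketch (assumed values: 1 kW rejected, emissivity 0.9, radiator panel at 300 K, one side radiating to deep space, solar and Earth heat loads ignored):

```python
# Stefan-Boltzmann estimate of radiator area needed to reject 1 kW.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
power_w = 1000.0        # heat to reject
emissivity = 0.9        # typical for radiator coatings (assumed)
temp_k = 300.0          # radiator surface temperature (assumed)

# One-sided panel: radiated power = emissivity * sigma * T^4 * area
area_m2 = power_w / (emissivity * SIGMA * temp_k**4)
print(f"~{area_m2:.1f} m^2 of one-sided radiator at {temp_k:.0f} K")
```

This gives roughly 2.4 m² per kilowatt under these assumptions, which is why heat rejection, not power generation, dominates the sizing argument.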

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
Quote
Starlink pays tier-1 ISPs backhaul costs for its data, which would be avoided if the content is hosted directly on the satellite.

Backhaul costs could also be avoided if they provision space and power for third-party CDN appliances near their ground-based networking gear, either at the ground station site or at other locations in their terrestrial network where it makes sense. If there's enough traffic to matter, Netflix and friends will show up with the gear.

Online Barley

  • Full Member
  • ****
  • Posts: 1075
  • Liked: 739
  • Likes Given: 409
Quote
Starlink pays tier-1 ISPs backhaul costs for its data, which would be avoided if the content is hosted directly on the satellite.

This was discussed several years ago. I was told that transit costs are for sourcing data, and that there was no cost for sinking data, so a consumer-level ISP that sinks more data than it sources would not be paying for transit. I would appreciate a pointer to information on what is actually metered and charged for in interactions between the different tiers.

If the transmitter pays, then this is not Starlink's problem. Netflix can do what it wants, which would probably be to provide caches at or close to some, but not all, Starlink gateways.

Offline AC in NC

  • Senior Member
  • *****
  • Posts: 2484
  • Raleigh NC
  • Liked: 3630
  • Likes Given: 1950
Quote
What fraction of peak bandwidth does Netflix account for?

40%

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 766
  • Liked: 730
  • Likes Given: 996
Quote
This was discussed several years ago. I was told that transit costs are for sourcing data, and that there was no cost for sinking data, so a consumer-level ISP that sinks more data than it sources would not be paying for transit. I would appreciate a pointer to information on what is actually metered and charged for in interactions between the different tiers.

It's... more complicated than that. Much is covered by confidential bilateral agreements between providers, but the results of those agreements can be observed from the paths that packets follow.

When two entities have equipment in the same facility, it generally costs little to string a short cable or two between them to exchange traffic but it may cost a lot to haul the traffic to and from that point. 

Each provider has its own policies.  Some are picky about who they'll interconnect with; others are more open.  Generally, if traffic flow is approximately balanced, no money changes hands, but some entities don't even care about balanced traffic. 

A random blog post that's more or less consistent with what I've heard over the years:

https://blog.telegeography.com/settlement-free-paid-peering-definition

https://peeringdb.com/ claims to track about 1/3 of the inter-ISP connection points; Starlink's entry is here: https://www.peeringdb.com/net/18747
