Author Topic: Orbital Data Centers connecting directly to Starlink via laser  (Read 24228 times)

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Quote
SpaceX says it plans to increase Starlink's download speeds from ~100 Mbps currently to 10 Gbps in the future:

Do I understand correctly that these 10 Gbps speeds would come from satellites using the V band (37.5-42.5 GHz)?

Or optical.
It'd be interesting to use optical for direct comms to high-altitude aircraft, which fly above the vast majority of weather anyway. Certainly the military would be interested in that; very stealthy. It could potentially also be used for commercial airliners, but that's probably way overkill: the market isn't big enough to need terabit-class transmission rates to long-haul aircraft, and a phased array would be a lot simpler and more robust for commercial airliners.

Actually, that gives me an idea. Over on the East Coast, there are basically always several high-altitude jets overhead at any one time. You could put optical transceivers on commercial aircraft already flying regular high-altitude routes over densely populated areas, and those aircraft could act as local Starlink repeaters to enable higher-density coverage. The user terminals shouldn't need to change, and the jetliners get extremely fast in-flight Internet in return. I think there might've been some startup that had a similar idea (but without the optical connection to Starlink).
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline dlapine

  • Full Member
  • ***
  • Posts: 356
  • University of Illinois
  • Liked: 209
  • Likes Given: 326
I'd note that we had a presentation last year from a third-party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down.

Looking at costs, that one 2MT cabinet would cost you over $100K for launch costs alone.

For reference, one cabinet in an average center would cost you about $270 in monthly floor space (#1), plus $5 + $5 an hour for power and cooling at 50 kW usage. Assuming the solar arrays and radiators were all benefit and no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, just on the launch costs.

Might need other benefits to do this in orbit.

#1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month."

Edited to do simple math.
« Last Edit: 01/27/2021 08:45 pm by dlapine »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
You only have a latency advantage if you’re in LEO as well.

But you might have a bandwidth and energy-cost advantage due to the ability to use unrestricted laser comms (all the way to UV), and potentially cheaper energy due to more consistently available sunlight.

For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (e.g. for movies), there might be a cost advantage for beyond-LEO orbital data centers.

If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.

When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.

https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.

Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s, say, a second or less.

Ok, then what are the benefits of putting a high performance processing system in space? How is it cheaper or better than a data center on Earth? Power, cooling, maintenance, and physical upgrades, to name a few issues, are all easier to do on Earth.

As Coastal Ron mentioned, physical security is a possible answer. Maybe video streaming. What else?
Maintenance and physical upgrades would simply not be done except very rarely (maybe 5 year intervals).

Power may actually be cheaper. The biggest problem with space-based solar power isn't the cost of the solar panels (if you use silicon) or even launch (with Starship), but actually transmitting the energy: the transmitters are a huge cost (plus a huge receiving array on the ground, only half an order of magnitude smaller than the equivalent production solar farm), there's a big efficiency loss in the whole process, and there's the regulatory headache of transmitting that much power through the atmosphere. If you just use the power in orbit, it could be MUCH cheaper than space-based solar power and potentially cheaper than terrestrial electricity, since you have a near-100% capacity factor and no significant intermittency (the same solar array produces 5-10x as much electricity, you don't need an expensive battery, and no land is needed, either).

You still have to radiate a lot of heat, but radiators could be made fairly inexpensively (and you wouldn't need refrigeration/heat-pump equipment like you would at a terrestrial data center; just radiate passively using coolant loops or heat pipes).
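For scale, a rough radiator sizing sketch (my assumptions, not established figures: 50 kW rejected, a ~330 K panel surface, emissivity 0.9, panels radiating from both faces, absorbed sunlight and Earth IR ignored):

# Stefan-Boltzmann estimate of passive radiator area for one 50 kW cabinet.
SIGMA = 5.670e-8            # W/m^2/K^4
heat_w     = 50_000         # waste heat to reject (assumed)
temp_k     = 330            # radiator surface temperature, ~57 C coolant (assumed)
emissivity = 0.9            # assumed panel coating
sides      = 2              # a flat panel radiates from both faces

flux_w_per_m2 = emissivity * SIGMA * temp_k**4 * sides
area_m2 = heat_w / flux_w_per_m2
print(f"Rejected flux: {flux_w_per_m2:.0f} W per m^2 of panel")
print(f"Panel area needed: {area_m2:.0f} m^2")

That works out to roughly 40 m^2 of panel per 50 kW rack under these assumptions; real panels would need margin for sunlight and Earth IR falling on them, but it stays far from needing active refrigeration.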

Electricity in orbit could be 1-2 cents per kWh for consistent, 24/7 electricity (depending on orbit, you could have occasional eclipses) if Starship works. Nothing approaches that on Earth except maybe remote, orphaned electricity like flare gas or maybe stranded hydro (already paid for).
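A hedged sanity check on that figure (every number below is my own assumption, not a published one: ~100 W/kg array specific power, $50/kg launch, $1/W array hardware, a 10-year life, and a ~99% capacity factor):

# Rough levelized cost of orbital solar electricity under the assumptions above.
specific_power_w_per_kg = 100       # array plus structure (assumed)
launch_usd_per_kg       = 50        # working Starship figure from this thread
hardware_usd_per_w      = 1.00      # assumed panel + power-management hardware
life_years              = 10
capacity_factor         = 0.99      # near-continuous sunlight, occasional eclipses

launch_usd_per_w = launch_usd_per_kg / specific_power_w_per_kg    # $0.50/W
capex_usd_per_w  = hardware_usd_per_w + launch_usd_per_w
kwh_per_w        = 0.001 * 8766 * life_years * capacity_factor    # ~87 kWh per watt

print(f"Levelized cost: {100 * capex_usd_per_w / kwh_per_w:.1f} cents/kWh")

That comes out around 1.7 cents/kWh, i.e. in the 1-2 cent range, and it moves roughly linearly with whichever assumption you change.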
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
I'd note that we had a presentation last year from a third-party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down.

Looking at costs, that one 2MT cabinet would cost you over $100K for launch costs alone.

For reference, one cabinet in an average center would cost you about $270 in monthly floor space (#1), plus $5 + $5 an hour for power and cooling at 50 kW usage. Assuming the solar arrays and radiators were all benefit and no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, just on the launch costs.

Might need other benefits to do this in orbit.

#1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month."

Edited to do simple math.
16 months?? Consider that power plants can take 1-2 decades or even more to pay off. So that's not a bad timeline at all, especially as Moore's Law slows down (which it already has... more like 4 years now except in some niches). Back when computing tech was falling in price-per-performance by half every 18 months, 16 months seemed like a long time. Now we're at more like 48 months, so 16 months is doable.

Also, launch costs could get even lower than $50/kg-to-high-orbit (although that's not a bad working figure for fully reusable Starship).
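To put that in numbers (a quick sketch; the halving times are the ones mentioned above, not measured data):

# Fraction of a server's price-performance lead remaining after a 16-month
# break-even period, for different price-performance halving times.
break_even_months = 16
for halving_months in (18, 48):
    remaining = 0.5 ** (break_even_months / halving_months)
    print(f"Halving every {halving_months} months -> {remaining:.0%} of relative value left at break-even")

Under the old 18-month cadence you'd have burned nearly half the hardware's relative value just reaching break-even; at a 48-month cadence roughly four-fifths of it is still there.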
« Last Edit: 01/27/2021 08:52 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline dlapine

  • Full Member
  • ***
  • Posts: 356
  • University of Illinois
  • Liked: 209
  • Likes Given: 326
And given our forced remote operation of our data centers due to COVID over the last 11 months, I'd feel a lot better now about estimating how often we'd need to send someone up to physically do something, and how much "hot spare" hardware would be needed.

With optimization for server weight, and some kind of standard in-orbit cargo unit, this would not be completely out of the question on cost, and with fast on-orbit connectivity it could be quite reasonable.

Just waiting for that $50/kg now. Or lower.  :D

Edit: typo
« Last Edit: 01/27/2021 08:53 pm by dlapine »

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).
« Last Edit: 01/27/2021 09:30 pm by Robotbeat »
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
The following would be easy to do, would be useful, and wouldn't take huge storage (unlike, say, a YouTube cache on orbit). They'd have the benefit of not needing constant bandwidth from Earth for these high-use items, while being much less storage-intensive than media:

Wikipedia was designed from day one to be clone-able and to re-sync on a regular basis. An on-orbit Wikipedia clone would be a major benefit for Starlink and wouldn't be a copyright hell.

Also, all those open-source GitHub codebases designed to synchronize copies. Again, copyright-neutral and useful for SpaceX as well as other users...

Update codebases for Windows, Linux, Apple, Android, cellphones, and smart TVs... even Tesla cars.

Game servers and codebases for various Xbox- and PlayStation-like products.

Portals for stock trading firms or even the markets themselves, say a Dow Jones interface right in space.

Realtime tracking databases for everything.

Interfaces for services like FaceTime and Zoom.

Any satellites that do imaging of Earth or space could feed their customers directly through Starlink and not even bother carrying high-bandwidth connections and equipment to communicate with Earth. This would make the FCC part of satellite approval a lot easier. Also consider micro-satellites in terms of power requirements: an IR laser and IR receiver need much less power than broadcasting to and receiving from Earth, and with no FCC approval needed they could have very fast approval periods for launch...
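A sketch of what that kind of on-orbit cache could look like operationally (the mirror endpoints below are placeholders, not real URLs; the point is just a periodic one-way sync of content that is already designed to be mirrored):

import datetime
import subprocess

# Illustrative sync job for an on-orbit content cache. Real deployments would
# point at Wikimedia dump mirrors, package archives, git mirrors, and so on.
MIRRORS = {
    "wikipedia":      "rsync://ground-mirror.example/enwiki-latest/",    # placeholder
    "linux-packages": "rsync://ground-mirror.example/debian/",           # placeholder
    "firmware":       "rsync://ground-mirror.example/vehicle-firmware/", # placeholder
}

def sync_all(dest_root: str = "/srv/orbital-cache") -> None:
    for name, src in MIRRORS.items():
        dest = f"{dest_root}/{name}/"
        # --partial lets an interrupted pass (e.g. a dropped link) resume;
        # --delete keeps the cache from accumulating stale content.
        subprocess.run(["rsync", "-a", "--partial", "--delete", src, dest], check=True)
        print(datetime.datetime.utcnow().isoformat(), "synced", name)

if __name__ == "__main__":
    sync_all()

Run nightly (or whenever link capacity is spare), this keeps the orbital copy consistent without any copyright or freshness headaches beyond what the ground mirror already handles.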

Offline RonM

  • Senior Member
  • *****
  • Posts: 3340
  • Atlanta, Georgia USA
  • Liked: 2233
  • Likes Given: 1584
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)

When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago and newer drives are probably more reliable.

Hot swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.

The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.
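That service cadence is consistent with a simple expectation calculation (illustrative numbers: a rack of ~200 drives and a 2% annualized failure rate, roughly in line with published fleet statistics):

# Expected drive replacements per year for an unattended storage rack.
drives           = 200
annual_fail_rate = 0.02      # assumed AFR
spares_on_hand   = 12        # hot/cold spares launched with the rack (assumed)

expected_failures_per_year = drives * annual_fail_rate
years_of_autonomy = spares_on_hand / expected_failures_per_year
print(f"Expected failures: {expected_failures_per_year:.1f} per year")
print(f"{spares_on_hand} spares cover roughly {years_of_autonomy:.0f} years between service visits")

Four failures a year matches the "3 or 4 failing per year" experience above, and a modest spare pool stretches the visit interval to the couple-of-years cadence suggested.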

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2911
  • Liked: 1127
  • Likes Given: 33
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).

The increasingly crowded RF environment created by megaconstellations makes it harder to receive at full speed: your receivers have to contend with signals other than the one you want, because spectrum is fundamentally shared. Optical links, by contrast, don't spread much beyond the target satellite, so they're effectively private/exclusive?
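The "effectively private" part follows from the beam geometry. A sketch (the 1550 nm wavelength and 10 cm aperture are my assumptions, not Starlink's published terminal specs):

# Diffraction-limited footprint of an optical inter-satellite link.
wavelength_m = 1550e-9      # assumed telecom-band laser
aperture_m   = 0.10         # assumed transmit aperture diameter
range_m      = 1000e3       # 1,000 km crosslink

half_angle_rad  = 1.22 * wavelength_m / aperture_m     # ~19 microradians
spot_diameter_m = 2 * half_angle_rad * range_m
print(f"Beam divergence: {half_angle_rad * 1e6:.1f} microradians (half angle)")
print(f"Spot diameter at {range_m / 1e3:.0f} km: {spot_diameter_m:.0f} m")

A few tens of meters of footprint at 1,000 km, versus an RF beam that illuminates on the order of tens of kilometers at the same range, so a third satellite essentially never sees your signal.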

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
For the small satellites especially, they could be designed much, much simpler.

Radio is still needed to talk back and forth with the deployment carrier on flights like the SpaceX Transporter-1 mission. So make those communication systems short-range WiFi that gets turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval, and it needs no power or use after deployment. So: a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth-communication equipment from the small sats (and the FCC-like regulations that go with it), removes any potential interference with Starlink, and still provides direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then to the end user...

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!).

Gotta give those future astronauts some work to do. ;)

When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago and newer drives are probably more reliable.

Hot swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.

The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline OTV Booster

  • Senior Member
  • *****
  • Posts: 5246
  • Terra is my nation; currently Kansas
  • Liked: 3640
  • Likes Given: 6204
The difference in cost between placing a disconnected ground station within reach of every potential US customer and getting backbone access within reach of everyone is massive.
With a modest range of, say, 500 miles there is no technical need for disconnected ground stations.  Even the remote parts of western North Dakota can be served by ground stations in Billings or Fargo, or Denver or Winnipeg, where there are backbones.

Starlink may want to handle the backhaul themselves so they can negotiate more favorable peering agreements, but they don't have to unless there is massive collusion between many different geographically diverse internet companies.
There is every incentive for SpaceX to have its own backbone.
1 - End users geographically between ground stations will likely flip between ground stations. This can't be handled by BGP routing; heck, this isn't convenient to handle with routing at all. How do you allocate IP blocks between stations?
When you have your own continental backbone you can handle all the special characteristics of SL for routing/switching traffic.
2 - It's so much cheaper to lease dark fiber or 100G point-to-point Ethernet links between stations than to purchase transit.
3 - You need a global network to peer with the big boys.
4 - It's much cheaper to purchase dozens of 100G worldwide transit links than to purchase hundreds of 10G transit links. And the price of those might be cheaper in some primary locations even if the transit provider has its own leased fiber going through SL ground stations.

I agree that it would be beneficial. I would propose that they are more likely to partner with one of the big tech companies, of which I could see three being viable partners: Amazon, Microsoft, and Google. It's unlikely to be Amazon because of the direct competition (their competing sat network), and Google doesn't have the geographic diversity in data centers that Microsoft has.

SpaceX and Microsoft are already partnering on some offerings with Microsoft's new 'Azure Space' product line.

I could certainly see it being beneficial to all parties if Microsoft allowed SpaceX to put ground stations on the roofs of all their data centres: SpaceX gets easy access to high-speed connectivity to the wider world and much better physical security for its ground stations, and Microsoft gets an edge with its Azure Space product, with lower response times to its own data centre offerings compared to competitors because the ground stations are on premises.



Here's an image of Microsoft's current and near-future data centers. While there are certainly gaps in coverage if all of these locations became ground stations, it certainly gives them a credible start and eases ground station roll-out significantly: no need to worry about site security, power redundancy, internet backhaul redundancy, having someone on call who can turn the control system off and on again if needed, etc.
I wonder if Microsoft's underwater data center concept fits in anywhere.


https://arstechnica.com/information-technology/2020/09/microsoft-declares-its-underwater-data-center-test-was-a-success/


We've had discussions on how to do ground stations on big water, with a gyro-stabilized buoy to carry an array...


Just thinking.
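On the "500 miles" figure quoted above, the single-satellite geometry gives even more margin than that. A sketch (assuming 550 km altitude and a 25-degree minimum elevation angle, roughly what Starlink has operated with):

import math

# Maximum ground distance between a user and a gateway that can share one satellite.
R_EARTH_KM = 6371.0
ALT_KM     = 550.0
MIN_ELEV   = math.radians(25)     # assumed minimum elevation angle

# Nadir angle from the satellite to a ground site seen at minimum elevation,
# then the Earth-central angle to that site (standard spherical-Earth geometry).
nadir         = math.asin(R_EARTH_KM / (R_EARTH_KM + ALT_KM) * math.cos(MIN_ELEV))
central_angle = math.pi / 2 - MIN_ELEV - nadir
footprint_radius_km = R_EARTH_KM * central_angle

print(f"Footprint radius: {footprint_radius_km:.0f} km ({footprint_radius_km * 0.621:.0f} mi)")
print(f"Max user-to-gateway separation via one satellite: {2 * footprint_radius_km:.0f} km")

Roughly a 580-mile footprint radius, so a user and a gateway can be well over a thousand miles apart and still bounce through a single satellite; 500 miles of effective reach is conservative.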
We are on the cusp of revolutionary access to space. One hallmark of a revolution is that there is a disjuncture through which projections do not work. The thread must be picked up anew and the tapestry of history woven with a fresh pattern.

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 2911
  • Liked: 1127
  • Likes Given: 33
For the small satellites especially, they could be designed much, much simpler.

Radio is still needed to talk back and forth with the deployment carrier on flights like the SpaceX Transporter-1 mission. So make those communication systems short-range WiFi that gets turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval, and it needs no power or use after deployment. So: a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth-communication equipment from the small sats (and the FCC-like regulations that go with it), removes any potential interference with Starlink, and still provides direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then to the end user...

The small problem with that is that each laser link receiver is exclusive during use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to service more than one customer at a time you would need multiple receivers, separated sufficiently that their beams don't overlap.

Though that would be a great time to use an Archinaut or similar to build a truss to space out the optical terminals on the relay sat. Better is a space-corral-like aggregate persistent platform, where you gradually add more truss and optical terminals. Though that does tend to favor putting the relay in GEO, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput and not latency. By having the timeshare-negotiation RF link only pointing out to GEO, you can probably reduce the RF licensing needs considerably.

But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you were to use something else for backhaul. Which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?).
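A minimal sketch of the time-sharing Asteroza describes (entirely hypothetical scheduling logic, not anything SpaceX or anyone else has published): customers ask for a slot over the low-rate RF control link, and each optical head on the relay is booked exclusively for its slot.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Receiver:
    name: str
    booked_until: float = 0.0      # seconds since some epoch (illustrative)

@dataclass
class Relay:
    receivers: List[Receiver] = field(default_factory=list)

    def request_slot(self, now: float, duration_s: float) -> Optional[Tuple[str, float, float]]:
        """Grant an exclusive slot on a free optical head, or None if all are busy."""
        free = [r for r in self.receivers if r.booked_until <= now]
        if not free:
            return None            # customer retries later over the RF control link
        rx = min(free, key=lambda r: r.booked_until)
        rx.booked_until = now + duration_s
        return (rx.name, now, now + duration_s)

# Example: a relay with three optical heads handing out 10-minute slots.
relay = Relay([Receiver("rx-A"), Receiver("rx-B"), Receiver("rx-C")])
print(relay.request_slot(now=0.0, duration_s=600))
print(relay.request_slot(now=0.0, duration_s=600))

With only a handful of heads, the relay's capacity is set by slot length times head count, which is why a growable truss of terminals (or a GEO aggregation point) comes up.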

Online tbellman

  • Full Member
  • ****
  • Posts: 662
  • Sweden
  • Liked: 977
  • Likes Given: 1
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.

And then you happen to get a generation of hard drives that fail a lot.  Or a server model that drains their CR2032 batteries, so they lose their BIOS settings after a year or two and then refuse to boot until you can replace the battery.  Or someone produces bad capacitor electrolyte again.  Or an unnoticed bug in SSD firmware causes them to wear out or die prematurely.  Or the server factory forgot to put in the rubber grommets on the fans, and the vibrations from the fans then kill the hard drives.  (None of these are hypothetical, by the way.)

Then suddenly you have an entire datacentre failing on you, with no ability to get someone from the manufacturer to replace the substandard/broken components.

The normal, random component failures (disks, CPUs, DIMMs, PSUs, etc. failing now and then) you can plan for and live with.  But there is a definite risk that you will be hit by systematic failures that can take out more or less your entire DC.

Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced.  If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy (dissimilar redundancy if possible), or spend a lot of time and money to make sure that the stuff you buy is really high quality and dependable.

Online tbellman

  • Full Member
  • ****
  • Posts: 662
  • Sweden
  • Liked: 977
  • Likes Given: 1
Figure that 1 rack of 80 standard servers and networking weigh in at about 2MT (~4400LBs), takes 50KW of power and would like to have at least a 100 Gbs network link for HPC operations or at least a 10Gbs link for general IT.

(I think you mean "t", tonne, not "MT", which means mega-Tesla...)

50 kilowatts per rack is quite dense.  For 80 HPC servers without GPUs, I expect more like 25-30 kW.  Servers with GPUs use more power, but you will rarely be able to fit 80 GPU servers in a single rack; certainly not if you have more than one GPU per server.  Managing 50 kW per rack requires careful design of your cooling.

Network: For HPC, and assuming your HPC applications are not constrained within single racks, then you will want at least twenty 100 Gbit/s links per such rack, and importantly sub-microsecond latency to any other node within the HPC cluster.  So don't expect to build an HPC cluster with one rack per satellite; the distance between the satellites, and thus the latency, will kill you.
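The latency point is just the speed of light (a sketch; the separations are illustrative, not actual Starlink spacings):

# One-way light time between nodes vs. the sub-microsecond budget HPC fabrics expect.
C_KM_PER_S = 299_792.458
for sep_km in (0.1, 10, 50, 500):      # in-rack cluster vs. plausible satellite spacings
    one_way_us = sep_km / C_KM_PER_S * 1e6
    print(f"{sep_km:>6} km separation -> {one_way_us:8.1f} microseconds one-way")

Even 10 km of separation is already ~30x over a typical sub-microsecond interconnect budget, before any pointing, switching, or serialization delay is added.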

For general IT, then it depends a lot on what kind of general IT you are doing.  Quite a lot of places use dual 100 Gbit/s uplinks per rack these days (dual mostly for redundancy).

As for mass, remember that a datacenter contains more than the servers and network equipment.  Cooling and power distribution are big and heavy.

All normal computer equipment is designed to operate in an atmosphere, to get cooling.  If your orbital DC is vented to vacuum, then you need some alternate way of cooling the components: liquid cooling loops connecting to all the major hot components (CPUs, GPUs, memory chips, flash chips, power electronics, network chips, etc.), and to the motherboards themselves for cooling all the components that don't have a direct connection to the cooling loops.  Alternatively, you can put the entire servers in a bath of mineral oil, pumping that oil as coolant.

(Oh, and you can't use hard disk drives in vacuum.  Solid state storage only.)

All of this will drive up mass, and costs.

The other alternative is to have your orbital DC pressurized, and have "normal" HVAC units and ventilation fans.  That will also drive up mass and costs.

Offline watermod

  • Full Member
  • ****
  • Posts: 519
  • Liked: 177
  • Likes Given: 154
For the small satellites especially, they could be designed much, much simpler.

Radio is still needed to talk back and forth with the deployment carrier on flights like the SpaceX Transporter-1 mission. So make those communication systems short-range WiFi that gets turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval, and it needs no power or use after deployment. So: a lightweight chip, quickly turned off forever.

Then, so the small-sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth-communication equipment from the small sats (and the FCC-like regulations that go with it), removes any potential interference with Starlink, and still provides direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small-sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then to the end user...

The small problem with that is that each laser link receiver is exclusive during use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to service more than one customer at a time you would need multiple receivers, separated sufficiently that their beams don't overlap.

Though that would be a great time to use an Archinaut or similar to build a truss to space out the optical terminals on the relay sat. Better is a space-corral-like aggregate persistent platform, where you gradually add more truss and optical terminals. Though that does tend to favor putting the relay in GEO, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput and not latency. By having the timeshare-negotiation RF link only pointing out to GEO, you can probably reduce the RF licensing needs considerably.

But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you were to use something else for backhaul. Which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?).

There are interesting ways around the RF request problem.  A good starting point is to look at the undersea laser base station and end-user units patented in the mid-1980s... and yes, that gets one into DARPA territory... but spreading to a wide beam, along with color (wavelength) options for the control/use requests, is pretty useful. That implies separate laser receivers/transmitters for the control/use requests, plus beam spreaders/collimation devices, so these might reside on a separate arm of the laser servers.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 39364
  • Minnesota
  • Liked: 25393
  • Likes Given: 12165
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.

And then you happen to get a generation of hard drives that fail a lot.  Or a server model that drains their CR2032 batteries, so they lose their BIOS settings after a year or two and then refuse to boot until you can replace the battery.  Or someone produces bad capacitor electrolyte again.  Or an unnoticed bug in SSD firmware causes them to wear out or die prematurely.  Or the server factory forgot to put in the rubber grommets on the fans, and the vibrations from the fans then kill the hard drives.  (None of these are hypothetical, by the way.)

Then suddenly you have an entire datacentre failing on you, with no ability to get someone from the manufacturer to replace the substandard/broken components.

The normal, random component failures (disks, CPUs, DIMMs, PSUs, etc. failing now and then) you can plan for and live with.  But there is a definite risk that you will be hit by systematic failures that can take out more or less your entire DC.

Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced.  If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy (dissimilar redundancy if possible), or spend a lot of time and money to make sure that the stuff you buy is really high quality and dependable.

Wait, WHO said "no chance to replace or fix"? Not me! ;) In fact, I explicitly spoke about the ability to service with astronauts if you need to.

If SpaceX is making human spaceflight super cheap, I see no reason you couldn’t have contingency service missions.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline archae86

  • Member
  • Posts: 52
  • Albuquerque, NM, USA
  • Liked: 66
  • Likes Given: 103
(Oh, and you can't use hard disk drives in vacuum.  Solid state storage only.)
Commodity drives have traditionally vented to ambient, and rely on the air pressure to fly the heads.  So they have a minimum air pressure limit, generally expressed as an altitude equivalent to roughly 10,000 feet.  There were lots of hard drive failures up at Everest Base Camp in the old days as a result.

But not all hard drives are like that.  I have an HGST helium drive I bought used for a nice low price, presumably after it was retired from about 3.5 years service in a data center somewhere.  I kinda think it must be sealed, as helium would leave in a jiffy otherwise.

Offline rsdavis9

BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).

It may or may not be cheaper, but the lack of FCC spectrum allocation sure does help.

I would think satellites in MEO could communicate with the 550 km fleet by just talking to satellites that are on the rim of the Earth from the MEO satellite's viewpoint.
1. It allows the laser links on the 550 km satellites to only look tangential to the Earth, i.e. not up or down, something they will already do to communicate with their in-plane and adjacent-plane neighbors.
2. Less interference from the Earth or to the Earth.
« Last Edit: 01/28/2021 02:27 pm by rsdavis9 »
With ELV best efficiency was the paradigm. The new paradigm is reusable, good enough, and commonality of design.
Same engines. Design once. Same vehicle. Design once. Reusable. Build once.
