Quote from: vsatman on 01/26/2021 04:59 pm
    Quote
        SpaceX says it plans to increase Starlink's download speeds from ~100 Mbps currently to 10 Gbps in the future.
    Do I understand correctly that these 10 Gbps will come from satellites in the V band (37.5-42.5 GHz)?

Or optical.
Quote from: Robotbeat on 01/27/2021 08:14 pm
    Quote from: RonM on 01/27/2021 08:12 pm
        Quote from: Robotbeat on 01/27/2021 07:43 pm
            Quote from: RonM on 01/27/2021 07:30 pm
                What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
            You only have a latency advantage if you're in LEO as well.
            But you might have a bandwidth and energy cost advantage due to the ability to use unrestricted laser comms (all the way to UV), and potentially cheaper energy due to more consistently available sunlight.
            For high-performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (i.e., for movies, etc.), there might be a cost advantage for beyond-LEO orbital data centers.
        If performance is key, then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.
        When we have more space stations or even space colonies in LEO, connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays; it was called XCOM and used NICER.
        https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
    No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.
    Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don't care about latency as long as it's, say, a second or less.

OK, then what are the benefits of putting a high-performance processing system in space? How is it cheaper or better than a data center on Earth? Power, cooling, maintenance, and physical upgrades, to name a few issues, are all easier to do on Earth.

As Coastal Ron mentioned, physical security is a possible answer. Maybe video streaming. What else?
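To put rough numbers on the latency point: a minimal sketch comparing one-way propagation delay over terrestrial fiber with a LEO laser-relay path, using idealized geometry. The 550 km altitude, fiber index of 1.47, and the example distances are assumptions for illustration, not figures from the thread.

```python
# Simplified one-way latency comparison: terrestrial fiber vs. a LEO laser
# relay path (ground -> satellite(s) -> ground). Geometry is idealized: the
# space path is taken as the great-circle distance plus one up hop and one
# down hop at the satellite altitude. All numbers are assumptions.

C_VACUUM_KM_S = 299_792.458
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47      # typical fiber refractive index ~1.47
ALTITUDE_KM = 550                        # assumed Starlink shell altitude

def fiber_ms(distance_km: float) -> float:
    return distance_km / C_FIBER_KM_S * 1000

def leo_relay_ms(distance_km: float) -> float:
    path = distance_km + 2 * ALTITUDE_KM  # add the up and down hops
    return path / C_VACUUM_KM_S * 1000

for d in (1_000, 5_000, 10_000):
    print(f"{d:>6} km: fiber ~{fiber_ms(d):5.1f} ms, LEO relay ~{leo_relay_ms(d):5.1f} ms")
```

Under these assumptions fiber wins on short hauls (~5 ms vs ~7 ms at 1,000 km) and the vacuum path wins on long hauls (~37 ms vs ~49 ms at 10,000 km); a data center that is itself in LEO skips the down hop entirely, which is the "only if you're in LEO as well" point above.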
I'd note that we had a presentation last year from a third-party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down.

Looking at costs, that one 2 MT cabinet would cost you over $100K for launch costs alone. For reference, one cabinet in an average data center would cost you about $270 in monthly floor space costs (#1), plus $5 + $5 an hour for power and cooling at the 50 kW usage. Assuming that solar and radiators are all benefit and no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, if ever, just for the launch costs. Might need other benefits to do this in orbit.

#1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month."

Edited to do simple math.
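As a sanity check on that break-even figure, a minimal sketch using the numbers quoted above. The $270/month floor space, the "$5 + $5 an hour" power and cooling rate, and a launch cost of "over $100K" (taken here as $100K-$120K) are all the post's assumptions.

```python
# Back-of-the-envelope break-even estimate for one 2 t server cabinet,
# using the figures quoted in the post above (all of them assumptions).

HOURS_PER_MONTH = 730                    # average hours in a month

floor_space_per_month = 270.0            # USD, ground floor-space cost per cabinet
power_and_cooling_per_hour = 5.0 + 5.0   # USD/hour at ~50 kW load
ground_cost_per_month = floor_space_per_month + power_and_cooling_per_hour * HOURS_PER_MONTH

for launch_cost in (100_000.0, 120_000.0):   # "over $100K" for 2 t to orbit
    breakeven_months = launch_cost / ground_cost_per_month
    print(f"launch ${launch_cost:,.0f}: ground opex ${ground_cost_per_month:,.0f}/month "
          f"-> break-even ~{breakeven_months:.1f} months")

# Roughly 13-16 months to recoup the launch cost alone, and that assumes the
# orbital solar power and radiators are free, which of course they are not.
```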
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such). So potentially you could be providing some valuable work for future astronauts, who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also come down by a lot!). Gotta give those future astronauts some work to do.
BTW, part of this thread rests on the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (over terrestrial data centers also connected via Starlink).
Quote from: Robotbeat on 01/27/2021 08:57 pm
    Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!). Gotta give those future astronauts some work to do.

When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago, and newer drives are probably more reliable.

Hot-swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.

The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.
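For a sense of how many hot-swap spares an unattended rack might want between servicing visits, here is a minimal sketch assuming independent failures and a Poisson model. The drive count, annualized failure rate, and visit interval are illustrative assumptions, not figures from the thread.

```python
# Rough spares sizing for an unattended rack of drives, assuming failures are
# independent and Poisson-distributed. AFR, drive count, and service interval
# are illustrative assumptions.
import math

def spares_needed(n_drives: int, afr: float, years: float, confidence: float) -> int:
    """Smallest spare count s such that P(failures <= s) >= confidence."""
    expected = n_drives * afr * years          # Poisson mean
    cumulative = 0.0
    k = 0
    while True:
        cumulative += math.exp(-expected) * expected**k / math.factorial(k)
        if cumulative >= confidence:
            return k
        k += 1

# Example: 200 SSDs, 1% annualized failure rate, a 4-year gap between astronaut
# visits, and 99% confidence the spares last until the next servicing call.
print(spares_needed(n_drives=200, afr=0.01, years=4.0, confidence=0.99))
```

With those assumed numbers the expected failure count is 8 drives, and carrying roughly 15 pre-installed spares covers the interval at 99% confidence, which is consistent with the "visit every couple of years" idea above.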
Quote from: macpacheco on 01/27/2021 07:38 am
    Quote from: Barley on 09/13/2019 05:18 pm
        Quote from: Tuna-Fish on 09/13/2019 09:34 am
            The difference in cost between placing a disconnected ground station within reach of every potential US customer and getting backbone access within reach of everyone is massive.
        With a modest range of, say, 500 miles there is no technical need for disconnected ground stations. Even the remote parts of western North Dakota can be served by ground stations in Billings or Fargo, or Denver or Winnipeg, where there are backbones.
        Starlink may want to handle the backhaul themselves so they can negotiate more favorable peering agreements, but they don't have to unless there is massive collusion between many different geographically diverse internet companies.
    There is every incentive for SpaceX to have its own backbone.
    1 - End users geographically between ground stations will likely flip between ground stations. This can't be handled by BGP routing; heck, this isn't convenient to handle with routing at all. How do you allocate IPs in blocks between stations? When you have your own continental backbone you can handle all the special characteristics of Starlink for routing/switching traffic.
    2 - It's much cheaper to lease dark fiber or 100G point-to-point Ethernet links between stations than to purchase transit.
    3 - You need a global network to peer with the big boys.
    4 - It's much cheaper to purchase dozens of 100G worldwide transit links than to purchase hundreds of 10G transit links. And the price of those might be cheaper in some primary locations, even if the transit provider has its own leased fiber going through Starlink ground stations.

I agree that it would be beneficial, but I would propose that they are more likely to partner with one of the big tech companies, of which I could see three being viable partners: Amazon, Microsoft, and Google. It's unlikely to be Amazon because of the direct competition (their competing sat network), and Google doesn't have the geographic diversity in data centers that Microsoft has.

SpaceX and Microsoft are already partnering on some offerings with Microsoft's new "Azure Space" product line.

I could certainly see it being beneficial to all parties if Microsoft allowed SpaceX to put ground stations on the roofs of all their data centers: SpaceX gets easy access to high-speed connectivity to the wider world and much better physical security for their ground stations, and Microsoft gets an edge with their Azure Space product, with lower response times to their own data center offerings compared to competitors because the ground stations are on premises.

Here's an image of Microsoft's current and near-future data centers. While there are certainly gaps in coverage if all of these locations became ground stations, it gives them a credible start and eases ground station roll-out significantly: no need to worry about site security, power redundancy, internet backhaul redundancy, having someone on call who can turn the control system off and on again if needed, etc.
For the small satellites especially, they could be designed much, much simpler.

Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that would be turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So: a lightweight chip, quickly turned off forever.

Then, so the small sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats and FCC-like regulations, removes any potential interference with Starlink, and provides direct connections to the operators and customers of the small sats.

SpaceX/Starlink could sell these modules and services directly to the small sat makers as nothing more than a small device with a service charge.

One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user...
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.
Figure that one rack of 80 standard servers plus networking weighs in at about 2 MT (~4,400 lb), takes 50 kW of power, and would want at least a 100 Gbps network link for HPC operations, or at least a 10 Gbps link for general IT.
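Since the break-even estimate above treats solar and radiators as "all benefit and no cost," here is a rough sketch of what a 50 kW rack actually demands in orbit. The cell efficiency, packing losses, radiator temperature, and orbit-average sunlight fraction are assumed values for illustration only.

```python
# Rough sizing of the solar array and radiator area needed to host one 50 kW
# rack in LEO, ignoring structure, pointing, degradation, and batteries.
# Efficiencies, temperatures, and margins are illustrative assumptions.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_FLUX = 1361.0       # W/m^2 at 1 AU

rack_power_w = 50_000.0

# Solar array: ~30% efficient cells, ~85% packing/conversion factor,
# ~60% average sunlight fraction for a non-sun-synchronous LEO orbit.
array_w_per_m2 = SOLAR_FLUX * 0.30 * 0.85
array_area_m2 = rack_power_w / (array_w_per_m2 * 0.60)

# Radiator: two-sided panel at ~300 K, emissivity 0.9, ignoring absorbed
# environmental heat (Earth IR, albedo), which would make it bigger still.
radiator_w_per_m2 = 2 * 0.9 * SIGMA * 300**4
radiator_area_m2 = rack_power_w / radiator_w_per_m2

print(f"solar array ~{array_area_m2:.0f} m^2, radiator ~{radiator_area_m2:.0f} m^2")
```

Under these assumptions a single rack needs on the order of a couple of hundred square meters of array and tens of square meters of radiator, so the "free" power and cooling in the break-even math is doing a lot of work.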
Quote from: watermod on 01/27/2021 10:22 pm
    For the small satellites especially, they could be designed much, much simpler. Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that would be turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So: a lightweight chip, quickly turned off forever.
    Then, so the small sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats and FCC-like regulations, removes any potential interference with Starlink, and provides direct connections to the operators and customers of the small sats.
    SpaceX/Starlink could sell these modules and services directly to the small sat makers as nothing more than a small device with a service charge.
    One could even see Moon exploration, mining, and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user...

The small problem with that is that each laser link receiver is exclusive during use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to service more than one customer at a time you would need multiple receivers, separated sufficiently that their beams can't overlap.

Though that would be a great time to use an Archinaut or similar to build a truss to space out the optical terminals on the relay sat. Better yet, a space-corral-like aggregate persistent platform, where you gradually add more truss and optical terminals. Though that does tend to favor putting the relay in GEO, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput and not latency. By having the timeshare-negotiation RF link only pointing out to GEO, you probably can reduce the RF licensing needs considerably.

But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you were to use something else for backhaul. Which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?).
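A minimal sketch of the kind of time-sharing that exclusivity implies: customer requests negotiated over an RF side channel get greedily packed onto a small pool of optical terminals, and anything that doesn't fit has to renegotiate a later slot. All names and numbers here are hypothetical.

```python
# Toy sketch of time-sharing a small pool of optical terminals on a relay
# satellite among customer laser-link requests negotiated over RF.
# Greedy first-fit by request start time; purely illustrative.
from typing import Dict, List, Tuple

Request = Tuple[str, float, float]   # (customer, start_s, end_s)

def assign_terminals(requests: List[Request], n_terminals: int):
    schedule: Dict[int, List[Request]] = {t: [] for t in range(n_terminals)}
    rejected: List[str] = []
    for customer, start, end in sorted(requests, key=lambda r: r[1]):
        for terminal, booked in schedule.items():
            # A terminal can take the request only if it overlaps nothing booked.
            if all(end <= s or start >= e for _, s, e in booked):
                booked.append((customer, start, end))
                break
        else:
            rejected.append(customer)   # must renegotiate a later slot over RF
    return schedule, rejected

requests = [("cubesat-A", 0, 300), ("cubesat-B", 100, 400),
            ("lunar-rover", 200, 500), ("cubesat-C", 350, 600)]
print(assign_terminals(requests, n_terminals=2))
```

With two terminals, three of the four overlapping requests fit and one gets bumped, which is the basic argument for spreading more terminals along a truss as the customer count grows.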
Quote from: Robotbeat on 01/27/2021 10:51 pm
    My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10-year life.

And then you happen to get a generation of hard drives that fail a lot. Or a server model that drains its CR2032 batteries, so the machines lose their BIOS settings after a year or two and then refuse to boot until you can replace the battery. Or someone produces bad capacitor electrolyte again. Or an unnoticed bug in SSD firmware causes the drives to wear out or die prematurely. Or the server factory forgot to put the rubber grommets on the fans, and the vibration from the fans then kills the hard drives. (None of these are hypothetical, by the way.)

Then suddenly you have an entire data centre failing on you, with no ability to get someone from the manufacturer to replace the substandard or broken components.

The normal, random component failures (disks, CPUs, DIMMs, PSUs, etc. failing now and then) you can plan for and live with. But there is a definite risk that you will be hit by systematic failures that can take out more or less your entire DC.

Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced. If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy (dissimilar redundancy if possible), or spend a lot of time and money to make sure that the stuff you buy is really high quality and dependable.
(Oh, and you can't use hard disk drives in vacuum. Solid state storage only.)