The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers.
Quote from: Robotbeat on 01/27/2021 02:09 pm The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers. Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg?
Quote from: ZachF on 01/27/2021 03:31 pm Quote from: Robotbeat on 01/27/2021 02:09 pm The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers. Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg? As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit. The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink so it just makes the case even better for nice data centers on Earth close to maintenance and utilities. I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers.
Quote from: Mark K on 01/27/2021 03:41 pm Quote from: ZachF on 01/27/2021 03:31 pm Quote from: Robotbeat on 01/27/2021 02:09 pm The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers. Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg? As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit. The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink so it just makes the case even better for nice data centers on Earth close to maintenance and utilities. I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers. Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies. Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that).
Quote from: vsatman on 01/26/2021 04:59 pm Quote SpaceX says it plans to increase Starlink's download speeds from ~100 Mbps currently to 10 Gbps in the future. Do I understand correctly that these 10 Gbps will be satellites in the V band (37.5–42.5 GHz)? Or optical.
Quote from: Robotbeat on 01/27/2021 04:39 pm Quote from: Mark K on 01/27/2021 03:41 pm Quote from: ZachF on 01/27/2021 03:31 pm Quote from: Robotbeat on 01/27/2021 02:09 pm The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers. Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg? As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit. The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink so it just makes the case even better for nice data centers on Earth close to maintenance and utilities. I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers. Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies. Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that). Figure that 1 rack of 80 standard servers and networking weighs in at about 2 MT (~4400 lbs), takes 50 kW of power and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT. Next gen Starlink could easily support that with one or more links. Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread.
Quote from: dlapine on 01/27/2021 05:30 pm Quote from: Robotbeat on 01/27/2021 04:39 pm Quote from: Mark K on 01/27/2021 03:41 pm Quote from: ZachF on 01/27/2021 03:31 pm Quote from: Robotbeat on 01/27/2021 02:09 pm The more popular Starlink is, the less Starlink will need to peer at all. Data centers will just have their own Starlink terminals. What will really be interesting is when you'll have orbital data centers communicating to Starlink directly with lasers. Has anyone done the economic math on orbital datacenters when the price to LEO is under $50/kg? As long as all the manufacturing is on earth it would almost never be cost effective to put a data center in orbit. The same reason we don't put data centers in the most expensive real estate on earth. Even $50/kg is super expensive compared to some cheap earth real estate. We are becoming really good at getting data to and from earth orbit with Starlink so it just makes the case even better for nice data centers on Earth close to maintenance and utilities. I could see them relocating to the coldest available places or places with cheap electricity and cooling but still with maintenance labor available, generally the biggest operating costs for data centers. Some datacenter hardware is designed not to need any servicing. Just sufficient redundancy for the life of the server. And the servers are often value-dense enough that $50/kg is pennies. Energy and cooling costs may be relevant. But in orbit you have brighter and more consistent sunlight, so if you have cheap enough radiators, you have a chance of having lower energy/thermal costs than on the ground. Like space based solar power but without the most expensive part of that, which is the massive high power radio transmitters and receivers (and the inefficiency of all that). Figure that 1 rack of 80 standard servers and networking weighs in at about 2 MT (~4400 lbs), takes 50 kW of power and would like to have at least a 100 Gbps network link for HPC operations or at least a 10 Gbps link for general IT. Next gen Starlink could easily support that with one or more links. Going any further in pricing and value and feasibility/optimization of space-based data centers is probably off topic for this thread. Does it make sense to put these datacenters at a higher altitude? 1. Can connect to more satellites directly. 2. Easier cooling and better solar.
What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?
Quote from: RonM on 01/27/2021 07:30 pmWhat are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?You only have a latency advantage if you’re in LEO as well.But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.
Quote from: Robotbeat on 01/27/2021 07:43 pmQuote from: RonM on 01/27/2021 07:30 pmWhat are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?You only have a latency advantage if you’re in LEO as well.But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM
Quote from: RonM on 01/27/2021 08:12 pmQuote from: Robotbeat on 01/27/2021 07:43 pmQuote from: RonM on 01/27/2021 07:30 pmWhat are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?You only have a latency advantage if you’re in LEO as well.But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOMNo, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s a second or less.
Quote from: Robotbeat on 01/27/2021 08:14 pm Quote from: RonM on 01/27/2021 08:12 pm Quote from: Robotbeat on 01/27/2021 07:43 pm Quote from: RonM on 01/27/2021 07:30 pm What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here? You only have a latency advantage if you're in LEO as well. But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight. For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers. If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit. When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER. https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best. Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don't care about latency as long as it's a second or less. Video streaming. For a 20,000 km orbit the latency is 66 ms one way. Could be a good place for the Mars laser interconnect.
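For reference, the 66 ms figure is just straight-line light travel time; a minimal sketch (assuming a nadir pass and ignoring processing delay and slant-range geometry):

```python
# Rough one-way propagation delay for a few orbital altitudes,
# assuming straight-line, speed-of-light paths (no processing or queuing delay).
C_KM_PER_S = 299_792.458

for altitude_km in (550, 1_200, 20_000, 35_786):
    one_way_ms = altitude_km / C_KM_PER_S * 1_000
    print(f"{altitude_km:>6} km altitude: ~{one_way_ms:5.1f} ms one way (directly overhead)")
```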
Quote from: RonM on 01/27/2021 08:12 pmQuote from: Robotbeat on 01/27/2021 07:43 pmQuote from: RonM on 01/27/2021 07:30 pmWhat are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here?You only have a latency advantage if you’re in LEO as well.But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight.For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers.If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit.When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER.https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOMNo, if LATENCY is key, then local (or perhaps LEO, depending on details) is best.Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don’t care about latency as long as it’s, say, a second or less.
Quote from: Robotbeat on 01/27/2021 08:14 pm Quote from: RonM on 01/27/2021 08:12 pm Quote from: Robotbeat on 01/27/2021 07:43 pm Quote from: RonM on 01/27/2021 07:30 pm What are the perceived benefits for orbital data centers? Starlink's low latency and high bandwidth make connections to conventional ground-based data centers more efficient. What issues are we trying to solve here? You only have a latency advantage if you're in LEO as well. But you might have a bandwidth and energy cost advantage due to ability to use unrestricted laser comms (all the way to UV) and potentially cheaper energy due to more consistently available sunlight. For high performance computing (simulations), artificial intelligence training, scamcurrency mining, and graphics rendering (ie for movies, etc), then there might be a cost advantage for beyond LEO orbital data centers. If performance is key then these tasks are best done in a local data center. Orbital data centers will only give you a performance increase if the user is also in orbit. When we have more space stations or even space colonies in LEO, then connecting their data centers via laser comms using vacuum frequencies would be amazing. ISS did some testing using soft X-rays. It was called XCOM and used NICER. https://en.wikipedia.org/wiki/Neutron_Star_Interior_Composition_Explorer#XCOM No, if LATENCY is key, then local (or perhaps LEO, depending on details) is best. Not all performance considerations care about very low latency. NN training, simulations, scamcurrency mining, and non-real-time rendering don't care about latency as long as it's, say, a second or less. Ok, then what are the benefits of putting a high performance processing system in space? How is it cheaper or better than a data center on Earth? Power, cooling, maintenance, and physical upgrades, to name a few, are all easier to do on Earth. As Coastal Ron mentioned, physical security is a possible answer. Maybe video streaming. What else?
I'd note that we had a presentation last year from a 3rd party experimenter on using standard HPE server equipment in orbit as a test. The simple result was that it was very doable, especially now with non-mechanical storage prices coming down. Looking at costs, that one 2 MT cabinet would cost you over $100K for launch costs alone. For reference, one cabinet in an average center would cost you about $270 in monthly floor space costs (#1), and $5 + $5 an hour for power and cooling at the 50 kW usage. Assuming that solar and radiators had all benefit and no cost, you'd need about 16 months of orbital operations to reach a break-even point at that rate, if ever, just for the launch costs. Might need other benefits to do this in orbit. #1 "The cost of commercial office space in the U.S. can range from $6 per square foot in low cost regions to over $12 per square foot in New York City. On average, a 50-cabinet data center will occupy about 1,700 square feet. At a median cost of $8 per square foot, the space alone would cost about $13,600 per month." Edited to do simple math.
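A rough reconstruction of that break-even arithmetic, using the figures quoted above; the hours-per-month value and the assumption that orbital power and cooling come for free are mine, so this lands in the same ballpark as the 16-month figure rather than exactly on it:

```python
# Back-of-envelope break-even for launching one 2 t server cabinet at ~$50/kg,
# versus what the same cabinet costs to host on the ground each month.
# The $270 floor-space and $5/hr power + $5/hr cooling figures come from the
# post above; hours per month is an assumption here.
launch_cost = 2_000 * 50           # 2,000 kg at $50/kg = $100,000
floor_per_month = 270              # one cabinet's share of floor space
power_cooling_per_hr = 5 + 5       # $/hr at the quoted 50 kW load
hours_per_month = 730              # average month

ground_cost_per_month = floor_per_month + power_cooling_per_hr * hours_per_month
print(f"ground hosting: ~${ground_cost_per_month:,.0f}/month")
print(f"break-even on launch cost alone: ~{launch_cost / ground_cost_per_month:.0f} months")
```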
Probably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!). Gotta give those future astronauts some work to do.
BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink).
Quote from: Robotbeat on 01/27/2021 08:57 pmProbably most of that mass isn't even electronics, just metal structure in the rack and servers (and power supplies and such), so potentially you could be providing some valuable work for future astronauts who would tear down the racks and servers and replace the active electronics with upgraded versions every 48 months or so, saving you much of that launch cost (at the expense of human in-orbit labor costs, which hopefully also reduce by a lot!). Gotta give those future astronauts some work to do. When I started working in data centers, the big issue was hard drive failure. Sometimes we'd have server hardware issues, but it wasn't common. The mainframe just kept running. By the time I retired 15 years later, our data center was a "lights out" facility. No one would go in the room unless there was a problem. I would get a call once every few months from the home office to let the EMC tech in to replace a failed hard drive on the SAN. The SAN was a rack full of hard drives, so 3 or 4 failing per year wasn't bad. That was five years ago and newer drives are probably more reliable.Hot swap spares wouldn't be needed often and could keep the place running for years. So, having astronaut techs visit every couple of years could work.The data center station would have to provide a low radiation environment. With low launch costs it shouldn't be a big deal to add enough shielding.
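For a sense of scale on that spares question, a minimal sketch assuming independent drive failures at a constant annualized failure rate; the drive count, AFR, and service interval here are illustrative assumptions, not figures from the posts above:

```python
# How many hot spares would an unattended orbital storage rack need?
# Poisson model of independent failures; all input numbers are assumptions.
import math

drives = 240          # drives in the rack (assumption)
afr = 0.015           # 1.5% annualized failure rate (assumption)
years = 5             # time between servicing visits (assumption)
target = 0.99         # probability the spares don't run out

lam = drives * afr * years        # expected failures over the service interval
p, spares = 0.0, 0
while True:
    p += math.exp(-lam) * lam**spares / math.factorial(spares)
    if p >= target:
        break
    spares += 1
print(f"expect ~{lam:.1f} failures over {years} yr; carry ~{spares} spares for {target:.0%} coverage")
```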
Quote from: macpacheco on 01/27/2021 07:38 am Quote from: Barley on 09/13/2019 05:18 pm Quote from: Tuna-Fish on 09/13/2019 09:34 am The difference in cost of placing a disconnected groundstation within reach of every potential US customer to getting backbone access within reach of everyone is massive. With a modest range of say 500 miles there is no technical need for disconnected ground stations. Even the remote parts of western North Dakota can be served by ground stations in Billings or Fargo, or Denver or Winnipeg, where there are backbones. Starlink may want to handle the backhaul themselves so they can negotiate more favorable peering agreements, but they don't have to unless there is massive collusion between many different geographically diverse internet companies. There is every incentive for SpaceX to have its own backbone. 1 - End users geographically between ground stations will likely flip between ground stations. This can't be handled by BGP routing. Heck, this isn't convenient to handle with routing at all. How do you allocate IPs in blocks between stations? When you have your own continental backbone you can handle all the special characteristics of SL for routing/switching traffic. 2 - It's so much cheaper to lease dark fiber or 100G ptp ethernet links between stations than to purchase transit. 3 - You need a global network to peer with the big boys. 4 - It's much cheaper to purchase dozens of 100G worldwide transit links than to purchase 100s of 10G transit links. And the price of those might be cheaper in some primary locations even if the transit provider has its own leased fiber going through SL ground stations. I agree that it would be beneficial. I would propose that they are more likely to partner with one of the big tech companies, of which I could see three being viable partners: Amazon, Microsoft, and Google. It's unlikely to be Amazon, because of the direct competition (their competing sat network), and Google doesn't have the geographic diversity in data centers that Microsoft has. SpaceX and Microsoft are already partnering on some offerings with Microsoft's new 'Azure Space' product line. I could certainly see it being beneficial to all parties if Microsoft allowed SpaceX to put ground stations on the roof of all their data centres: SpaceX gets easy access to high speed connectivity to the wider world, and much better physical security for their ground stations, and Microsoft gets an edge with their Azure Space product, with lower response times to their own data centre offerings compared to competitors because the ground stations are on premises. Here's an image of Microsoft's current and near-future data centers; while there are certainly gaps in coverage, if all of these locations became ground stations it certainly gives them a credible start and eases ground station roll-out significantly: no need to worry about site security, power redundancy, internet backhaul redundancy, having someone on call that can turn the control system off and on again if needed, etc.
For the small satellites especially, they could be designed much, much simpler. Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that would be turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So a lightweight chip quickly turned off forever. Then, so the small sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats and FCC-like regulations, and removes any potential interference with Starlink, while providing direct connections to the operators and customers of the small sats. SpaceX/Starlink could sell these modules and services directly to the small sat makers as nothing more than a small device with a service charge. One could even see Moon exploration, mining and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user...
My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life.
Quote from: watermod on 01/27/2021 10:22 pm For the small satellites especially, they could be designed much, much simpler. Radio is still needed to talk back and forth to the deployment carrier for flights like the SpaceX Transporter-1 mission. So make those communication systems short-distance WiFi that would be turned off as the small sats move away from the carrier. Since it's not communication over any distance or to/from Earth, it shouldn't require FCC approval or any power or use after deployment. So a lightweight chip quickly turned off forever. Then, so the small sat lasers don't mess up Starlink satellites, Starlink/SpaceX should have laser satellite servers in a different orbit that service the small sats and aggregate-connect them to Starlink sats via lasers. This removes all need for Earth communication equipment from the small sats and FCC-like regulations, and removes any potential interference with Starlink, while providing direct connections to the operators and customers of the small sats. SpaceX/Starlink could sell these modules and services directly to the small sat makers as nothing more than a small device with a service charge. One could even see Moon exploration, mining and construction equipment talking via these laser modules directly to Moon-orbiting laser servers that support connections to Starlink and then the end user... The small problem with that is each laser link receiver is exclusive during use. You would effectively have to use RF from the customer to the relay sat to schedule laser time on a relay receiver, and to service more than one customer at a time you would need multiple receivers, separated sufficiently to not have beams potentially overlap. Though that would be a great time to use an Archinaut or similar to build a truss to distance out the optical terminals on the relay sat. Better is a space-corral-like aggregate persistent platform, where you gradually add more truss and optical terminals. Though that does tend to favor putting the relay in GEO, which cancels out the latency advantage. But there are plenty of spaceborne customers who need throughput and not latency. By having the timeshare negotiation RF link only pointing out to GEO, you probably can reduce the RF licensing needs considerably. But generally there's a strong preference for a minimal space-to-ground RF link for command/telemetry even if you were to use something else for backhaul. Which means you have to go through the licensing hoops anyway. Well, unless you were already someone who could live with just an Iridium terminal for all your command comms (or just TDRS?)
Quote from: Robotbeat on 01/27/2021 10:51 pm My experience is similar (I used to architect and build and service SANs and NASes), and I concur. I remember a company that was marketing a NAS that had sufficient spares & redundancy built in so that zero maintenance would be needed for its entire 10 year life. And then you happen to get a generation of hard drives that fail a lot. Or a server model that drains their CR2032 batteries, so they lose their BIOS settings after a year or two and then refuse to boot until you can replace the battery. Or someone produces bad capacitor electrolyte again. Or an unnoticed bug in SSD firmware causes them to wear out or die prematurely. Or the server factory forgot to put in the rubber grommets on the fans, and the vibrations from the fans then kill the hard drives. (None of these are hypothetical, by the way.) Then suddenly you have an entire datacentre failing on you, with no ability to get someone from the manufacturer to replace the substandard/broken components. The normal, random component failures, like disks, CPUs, DIMMs, PSUs, etc., failing now and then, you can plan for and live with. But there is a definite risk that you will be hit by systematic failures that can take out more or less your entire DC. Current off-the-shelf computer hardware is designed around the fact that the vast majority of deployed servers can be serviced or replaced. If you are going to deploy your systems in locations where servicing is not possible, then you need to add much more redundancy, dissimilar redundancy if possible, or spend a lot of time and money to make sure that the stuff you buy is really high quality and dependable.
(Oh, and you can't use hard disk drives in vacuum. Solid state storage only.)
Quote from: Robotbeat on 01/27/2021 09:23 pm BTW, part of this thread is the assumption that in-space direct satellite-to-satellite comms is significantly cheaper with lasers than with radio. Otherwise, why would SpaceX bother with lasers? So that's a potential advantage (above terrestrial datacenters also connected via Starlink). It may or may not be cheaper, but the lack of FCC spectrum allocation sure does help. I would think satellites in MEO could communicate with the 550 km fleet by just talking to satellites that are on the rim of the Earth from the satellite's viewpoint. 1. Allows the laser links on 550 km satellites to only look tangential to the Earth, i.e. not up or down; something they will already do to communicate with in-plane and adjacent-plane satellites. 2. Less interference from the Earth or to the Earth.
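For what it's worth, that "rim of the Earth" geometry implies quite long link distances; a sketch assuming a 20,000 km MEO altitude and a 100 km grazing height (both assumptions):

```python
# Slant range of a MEO-to-LEO laser link that grazes just above the atmosphere,
# i.e. the LEO satellite sits "on the rim of the Earth" as seen from MEO.
import math

R_E = 6_371.0                      # km, mean Earth radius
r_leo = R_E + 550                  # Starlink shell
r_meo = R_E + 20_000               # example MEO altitude (assumption)
r_graze = R_E + 100                # closest approach of the line of sight (assumption)

# For a chord tangent to the sphere of radius r_graze:
slant = math.sqrt(r_leo**2 - r_graze**2) + math.sqrt(r_meo**2 - r_graze**2)
print(f"grazing-link slant range: ~{slant:,.0f} km "
      f"(~{slant / 299_792.458 * 1_000:.0f} ms one way)")
```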
Quote from: OTV Booster on 01/28/2021 04:33 am We've had discussions on how to do ground stations on big water. With a gyro stabilized buoy to carry an array... Just thinking. Why gyrostabilize, when you could have a quad antenna pyramid on the buoy with an IMU and steer the beam electronically?
I am still not seeing the use case for this. Everything about isolated container data centers still makes more sense to me on Earth than in orbit, especially if you are talking high orbit. If we have a "semi-trailer" data center, it is a heck of a lot cheaper than $13K per month to dump it next to a warehouse in, say, upper Michigan with a connection to cold Lake Superior water for cooling. Latency? This would be quicker than MEO for sure. Power? Yes, we will pay for power, but that is cheap relative to the capital cost of the solar cells and especially the cooling, which I feel people are writing off too easily. For big sustained power that is going to be an issue, as you only get radiant heat loss, so you will need to create shaded structure. If you are in low Earth orbit you will need a lot of batteries to power you over the night periods, unless you have some kind of beamed power (more capital cost). Starlink's big connections make leaving the processing on Earth even easier with the good connectivity it allows to out-of-the-way places.
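To put a number on the night-period battery point: a sketch assuming a 50 kW load, a roughly 35 minute worst-case eclipse, and 80% usable depth of discharge (all assumptions):

```python
# Battery needed to ride through eclipse for a 50 kW orbital "semi-trailer" datacenter.
# Eclipse duration and depth of discharge are assumptions; a 550 km orbit spends
# up to about a third of each ~95-minute orbit in shadow.
load_kw = 50
eclipse_min = 35           # worst-case shadow per orbit (assumption)
depth_of_discharge = 0.8   # usable fraction of battery capacity (assumption)

energy_kwh = load_kw * eclipse_min / 60
battery_kwh = energy_kwh / depth_of_discharge
print(f"~{energy_kwh:.0f} kWh per eclipse -> ~{battery_kwh:.0f} kWh of battery, cycled ~15x per day")
```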
FCC allocation (and limited radio spectrum, etc) is a cost, so I’m including things like that in my stated assumption that lasers could be cheaper.
Quote from: Robotbeat on 01/28/2021 03:31 pm FCC allocation (and limited radio spectrum, etc) is a cost, so I'm including things like that in my stated assumption that lasers could be cheaper. As far as I know, the FCC regulates the spectrum from 9 kHz to 300 GHz, and 20+ THz lasers are not subject to FCC regulation.
Quote from: vsatman on 01/30/2021 05:32 pm Quote from: Robotbeat on 01/28/2021 03:31 pm FCC allocation (and limited radio spectrum, etc) is a cost, so I'm including things like that in my stated assumption that lasers could be cheaper. As far as I know, the FCC regulates the spectrum from 9 kHz to 300 GHz, and 20+ THz lasers are not subject to FCC regulation. That's my point.
Quote from: Robotbeat on 01/30/2021 05:40 pm Quote from: vsatman on 01/30/2021 05:32 pm Quote from: Robotbeat on 01/28/2021 03:31 pm FCC allocation (and limited radio spectrum, etc) is a cost, so I'm including things like that in my stated assumption that lasers could be cheaper. As far as I know, the FCC regulates the spectrum from 9 kHz to 300 GHz, and 20+ THz lasers are not subject to FCC regulation. That's my point. That AND they're more efficient (in use of 3D space, not necessarily power), since the beams can be so tight.
I dunno. If Starship can someday put a 150 ton data center in orbit for a couple of million dollars, the cooling and power can be worked out. Free land, free power, virtually unlimited connectivity, and impregnable security may make orbital server farms appealing.
There would be greater efficiency of orbital data centers if the data input (as well as power) was coming from orbit already. That might be true if there is a constellation with laser links to Starlink that’s dedicated to 24/7 observation of the whole surface of the earth generating vast amounts of data.
I'm not sure anyone in this thread actually knows anything about data centers, based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power. In a data center like Amazon's or Google's, where it's been custom designed, they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks.
Quote from: mlindner on 04/16/2021 09:12 am I'm not sure anyone in this thread actually knows anything about data centers, based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power. In a data center like Amazon's or Google's, where it's been custom designed, they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks. Hundreds of racks? That's still a small datacenter. Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling. I don't think many people really understand the industrial scale of large datacenters these days. Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit.
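For scale, a rough array-sizing sketch; the cell efficiency, packing/duty factor, and rack powers used here are assumptions, and ISS-era cells are considerably less efficient than modern ones, so the ISS comparison above is on the conservative side:

```python
# Rough solar array sizing for rack-scale power in LEO. All inputs are assumptions.
solar_constant = 1361          # W/m^2 above the atmosphere
efficiency = 0.30              # modern triple-junction cells (assumption)
packing_and_duty = 0.65        # wiring, pointing losses, charging through eclipse (assumption)

def array_area_m2(load_kw: float) -> float:
    return load_kw * 1_000 / (solar_constant * efficiency * packing_and_duty)

for racks, kw_per_rack in ((5, 40), (100, 40)):
    load = racks * kw_per_rack
    print(f"{racks:>3} racks @ {kw_per_rack} kW: {load:>5} kW -> ~{array_area_m2(load):,.0f} m^2 of array")
```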
Quote from: launchwatcher on 04/17/2021 02:56 pm Quote from: mlindner on 04/16/2021 09:12 am I'm not sure anyone in this thread actually knows anything about data centers, based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power. In a data center like Amazon's or Google's, where it's been custom designed, they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks. Hundreds of racks? That's still a small datacenter. Big would be tens or hundreds of megawatts per site, multiple buildings, on cheap land near cheap power and cheap cooling. I don't think many people really understand the industrial scale of large datacenters these days. Smaller deployments close to your users are still useful, but LEO only puts you that close for a small fraction of the orbit. LEO puts you close 24/7/365 if you have a global mesh of laser-linked satellites.
Perhaps someone could clarify the difference in energy used by data centres as opposed to the energy needed by a satellite to receive, probably temporarily store, and then transmit the data? It seems that even small amounts of storage on a satellite for high-demand repeat data would be beneficial, i.e. the data for just-released movies on Netflix. Edit: Do data centres only use so much power because they deal with hundreds of thousands of users/requests? If so, then having data stored on Starlinks, where they are only dealing with smaller numbers of users, should need less power. Then comparing Starlink data centres to ground-based centres isn't valid.
There was an application that I saw mentioned a while ago somewhere I can't recall that I thought might have some merit. It was a proposal to have a satellite 'data center' that would be placed into orbit around the Moon or Mars, for instance, then act as a processing/data storage offload for robotic missions in the vicinity. So you could have a rover on Mars, as an example, that would live-stream a camera feed up to the satellite, where you could use more energy-intensive processing techniques like GPU-based machine learning/AI stuff to determine things like path planning and higher-level goal determination more rapidly than would be possible using the rover's onboard computation capabilities. This would (potentially) allow much faster traversing surface rovers while not requiring significantly more power, while maintaining the fast vision-based path planning that makes operating a fast rover practical remotely (say on the order of 1 m/s). This sort of thing is obviously much more potentially beneficial for locations a long distance from Earth, with a high round-trip communication time. At risk of reductio ad absurdum, you could likely get away with an off-the-shelf RC car and a webcam (with some insulation for the electronics/battery of course), while still being able to leverage the multi-kW processing capability of the orbiting data center.
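A quick latency-budget sketch for that idea; the orbiter altitude and Earth-Mars distance used here are assumptions:

```python
# Rough latency budget for offloading rover perception to a Mars orbiter versus Earth,
# and how far a 1 m/s rover would drive "blind" while waiting for the round trip.
C_KM_S = 299_792.458

orbiter_alt_km = 400                      # low Mars orbit relay (assumption)
earth_distance_km = 225e6                 # average Earth-Mars distance (assumption)
rover_speed_m_s = 1.0

for name, dist in (("orbiter", orbiter_alt_km), ("Earth", earth_distance_km)):
    rtt_s = 2 * dist / C_KM_S
    print(f"round trip via {name:>7}: {rtt_s:10.3f} s -> rover travels {rover_speed_m_s * rtt_s:8.1f} m blind")
```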
There was a NIAC 2021 proposal for a pony express style system of data hauling cyclers (functionally semi-mobile satellite datacenters) to shuttle data from far probes.Taking the station wagon full of tapes to the stars...But the NIAC proposal sounds like they are still doing onload/offload via laser, rather than physically picking up the equivalent of an Amazon Snowball from a probe.
I'm not sure anyone in this thread actually knows anything about data centers, based on some of the comments. A server rack in an average data center pulls 10 kilowatts of power. In a data center like Amazon's or Google's, where it's been custom designed, they'll pull upwards of 30-40 kilowatts of power per rack. Once you're at 4-5 racks you're at solar panels the size of the ISS's panels. And a good sized data center has dozens to hundreds of racks. In space you can't convect or conduct heat away, which are the two most efficient forms of heat dissipation. You can only radiate heat away, which becomes very difficult with so much energy consumption going on. The size of the radiators is going to be massive, likely much larger than the solar panels used to collect the energy in the first place. Secondly you have radiation issues. Single bit upsets become huge issues and the hardware to handle that isn't cheap. Data corruption would become a massive issue given the density of the high performance compute. (As a percentage of volume, satellites have very little space dedicated to transistors.) The transistors would also be much smaller than is commonly used on spacecraft right now. DRAM would need to be some form of enhanced ECC memory more resilient than even normal ECC. Data centers are already really hard to build properly. (For example the recent case of one burning down in Europe.) If you're looking for exotic places to build data centers, the new fad is to build them under water and underground, because the water/earth insulates them from radiation effects, and in water it becomes much easier to dissipate all that heat rather than having to pay for the rather extreme cooling systems modern data centers require. It can just use pumps to pull in external water, circulate it, and then expel it. Putting them in space is the complete opposite of that, both in terms of ease of access to electrical power, ease of cooling, and radiation environment. Data centers are being built next to hydroelectric dams so they can use the water for cooling and the cheap power from the dam. Data centers are also being built in places like Iceland with its low temperatures for air cooling and its cheap geothermal power. It's NEVER going to be the case that we'll put data centers in space until we build the data centers there in the first place and there's a massive in-space human presence already.
Lisa Costa, chief technology and innovation officer of the U.S. Space Force, said the service is eyeing investments in edge computing, data centers in space and other technologies needed to build a digital infrastructure. “Clearly, the imperative for data driven, threat informed decisions is number one, and that means that we need computational and storage power in space, and high-speed resilient communications on orbit,” Costa said Jan. 13 at a virtual event hosted by GovConWire, a government contracting news site. <snip>A key goal of the Space Force is to be agile and “outpace our adversaries,” said Costa. Timely and relevant data is imperative, and that will require investments in government-owned and in commercial infrastructure in space, she added. “Things like cloud storage, elastic computing, critical computation for machine learning, infrastructure in and across orbits.”
The latest Apple Watch has 16 times the memory of the central processor on NASA’s Mars 2020 rover. For the new iPhone, 64 times the car-size rover’s memory comes standard.For decades, people dismissed comparisons of terrestrial and space-based processors by pointing out the harsh radiation and temperature extremes facing space-based electronics. Only components custom built for spaceflight and proven to function well after many years in orbit were considered resilient enough for multibillion-dollar space agency missions.While that may still be the best bet for high-profile deep space missions, spacecraft operating closer to Earth are adopting state-of-the-art onboard processors. Upcoming missions will require even greater computing capability.
Since traveling in February 2021 to the International Space Station, Spaceborne Computer-2 has completed 20 experiments focused on health care, communications, Earth observation and life sciences. Still, the queue for access to the off-the-shelf commercial computer linked to Microsoft's Azure cloud keeps growing. Mark Fernandez, principal investigator for Spaceborne Computer-2, sees a promising future for space-based computing. He expects increasingly capable computers to be installed on satellites and housed in orbiting data centers in the coming years. Edge processors will crunch data on the moon, and NASA's lunar Gateway will host advanced computing resources, Fernandez told SpaceNews. Fernandez, who holds a doctorate in scientific computing from the University of Southern Mississippi, served as software payload developer for HPE's original Spaceborne Computer, a supercomputer that reached ISS in August 2017 and returned to Earth a year and a half later in a SpaceX Dragon cargo capsule.
A data processor launched to orbit by the Space Development Agency has performed an early demonstration of autonomous data fusion in space, said one of the companies supporting the experiment.Scientific Systems Company Inc. (SSCI) developed an artificial intelligence-enabled edge computer for the experiment known as POET, short for prototype on-orbit experimental testbed.The POET payload rode to orbit on a Loft Orbital satellite that launched June 30 on the SpaceX Transporter-2 rideshare mission.
I honestly think this does make sense, long-term.
If your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.
Quote from: Robotbeat on 02/10/2022 08:03 pmIf your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.Sorry but that does not work. The satellites move, so your data would be orbiting the earth unless it was being continuously moved from satellite to satellite. Such movement of any appreciable amount of data is infeasible as it would require too much ISL bandwidth and power.
Quote from: DanClemmensen on 02/10/2022 09:35 pmQuote from: Robotbeat on 02/10/2022 08:03 pmIf your users are accessing the data set using Starlink, however, putting the data on Starlink itself cuts down the latency by half. It also eliminates the bandwidth to the Gateways if you cache stuff on Starlink itself.Sorry but that does not work. The satellites move, so your data would be orbiting the earth unless it was being continuously moved from satellite to satellite. Such movement of any appreciable amount of data is infeasible as it would require too much ISL bandwidth and power.It works just fine for super common data you can just put on each Starlink satellite. Like Netflix cache, for instance.“Infeasible” is a function of capability. Just handwaving without analysis doesn’t help anything.The Netflix catalogue isn’t that big (can easily fit on small solid state devices) but it uses a large portion of Starlink bandwidth. Similar for other streaming video catalogues.We explored this quantitatively I believe on this thread.Might not make sense for current Starlink satellites, but once they become multiple tons, it may be worth doing.
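A quick sanity check on why you would replicate the cache on every satellite rather than try to migrate it from satellite to satellite as the ground track moves; the cache size and pass duration here are assumptions:

```python
# Replicate vs migrate: a LEO pass over a given spot lasts only a few minutes,
# so "handing off" a multi-terabyte cache every pass needs absurd ISL bandwidth.
cache_tb = 100            # regional working set to keep overhead (assumption)
pass_minutes = 4          # time one satellite usefully serves a given spot (assumption)

bits = cache_tb * 8e12
rate_tbps = bits / (pass_minutes * 60) / 1e12
print(f"moving {cache_tb} TB every {pass_minutes} min needs ~{rate_tbps:.1f} Tbps of ISL capacity")
print("versus: store it once per satellite and only ship cache updates")
```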
“You’re not allowed to pick the low hanging fruit of data center functionality!” is not a very persuasive argument to me. And while the Netflix catalogue may be relatively small, it’s still way too big to put on all the user terminals: (I already showed this picture, but you ignored it?)
Netflix is waaaaay bigger now with far more content in 4k and far more content in general. I’d be shocked if it’s not at least an order of magnitude bigger now. The idea of caching the entire catalogue in a satellite is a non-starter. A smart cache that keeps the top N% of shows, where N is a small number? Sure.
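A minimal sketch of that "keep the top N shows" idea; purely illustrative, and not how Netflix's Open Connect appliances or Starlink actually work:

```python
# Popularity-ranked cache that evicts the least-requested title when full.
from collections import Counter

class TopNCache:
    def __init__(self, capacity_titles: int):
        self.capacity = capacity_titles
        self.hits = Counter()          # request counts per title
        self.stored = set()            # titles currently held on the satellite

    def request(self, title: str) -> str:
        self.hits[title] += 1
        if title in self.stored:
            return "serve from satellite cache"
        if len(self.stored) < self.capacity:
            self.stored.add(title)
        else:
            coldest = min(self.stored, key=lambda t: self.hits[t])
            if self.hits[title] > self.hits[coldest]:
                self.stored.remove(coldest)
                self.stored.add(title)
        return "fetch via gateway"

cache = TopNCache(capacity_titles=2)
for t in ["A", "A", "B", "C", "A", "C", "C"]:
    print(t, "->", cache.request(t))
```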
......clearly none of you guys have ever run a datacenter
Quote from: Naito on 02/11/2022 05:36 pm......clearly none of you guys have ever run a datacenterI did, in a previous life.
This is absolutely ridiculous. What does this get you compared to placing Starlink ground stations next to existing data centers? This is a sensible thing that SpaceX is already doing. You get maybe 20 ms less latency towards Starlink customers, at the expense of having to spread computing resources equally around the globe (because they move).
I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. I miss being able to tell data speed by listening to it.
Also, renting a small data center near a gateway isn’t free, either
Quote from: Nomadd on 02/11/2022 05:56 pm I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. I miss being able to tell data speed by listening to it.ok I take it back =DI'm not sure what putting a datacentre up in space will help is all. Power may be abundant, but1) processing creates a lot of heat, how are you going to shed it all? radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work3) if you're doing SSDs, well unless you also have processing with it, what's the point? the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth. If you fly higher to reduce drag, then you're increasing comms time and latency again......It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.I'm sure eventually we'll have big computing in space, but......what's the point now?
Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I heard is how they're limited on how many user terminals they can support.Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station?QuoteAlso, renting a small data center near a gateway isn’t free, eitherSpaceX is building ground stations next to data centers that already exist.
Quote from: Naito on 02/11/2022 07:15 pmQuote from: Nomadd on 02/11/2022 05:56 pm I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. I miss being able to tell data speed by listening to it.ok I take it back =DI'm not sure what putting a datacentre up in space will help is all. Power may be abundant, but1) processing creates a lot of heat, how are you going to shed it all? radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work3) if you're doing SSDs, well unless you also have processing with it, what's the point? the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth. If you fly higher to reduce drag, then you're increasing comms time and latency again......It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.I'm sure eventually we'll have big computing in space, but......what's the point now?I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.Once Starlink’s are much larger, it really actually does make sense.
Quote from: Robotbeat on 02/11/2022 07:28 pmQuote from: Naito on 02/11/2022 07:15 pmQuote from: Nomadd on 02/11/2022 05:56 pm I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. I miss being able to tell data speed by listening to it.ok I take it back =DI'm not sure what putting a datacentre up in space will help is all. Power may be abundant, but1) processing creates a lot of heat, how are you going to shed it all? radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work3) if you're doing SSDs, well unless you also have processing with it, what's the point? the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth. If you fly higher to reduce drag, then you're increasing comms time and latency again......It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.I'm sure eventually we'll have big computing in space, but......what's the point now?I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.Once Starlink’s are much larger, it really actually does make sense.yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience? like what would you host up there that makes more sense than hosting down on earth?? what exactly is the use case? it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?
Quote from: Naito on 02/11/2022 07:57 pmQuote from: Robotbeat on 02/11/2022 07:28 pmQuote from: Naito on 02/11/2022 07:15 pmQuote from: Nomadd on 02/11/2022 05:56 pm I sort of did, when 9600bps was considered high speed and an interstate circuit at that rate ran about 2 grand a month. I miss being able to tell data speed by listening to it.ok I take it back =DI'm not sure what putting a datacentre up in space will help is all. Power may be abundant, but1) processing creates a lot of heat, how are you going to shed it all? radiators can only get so big, starlink is in low orbit and pointing downwards, so unless you fly lower than starlink satellites, which means you'll have more drag and can't have such giant radiators......2) storage is physically very heavy, and if you're going for capacity then most spinning rust still expects an atmosphere for drive heads to work3) if you're doing SSDs, well unless you also have processing with it, what's the point? the performance provided by SSDs is negated by the distance, increasing lag and reducing bandwidth. If you fly higher to reduce drag, then you're increasing comms time and latency again......It all seems like a much harder more expensive way to do something that really doesn't need to be done in space, and only as a "cuz we can" thought exercise.I'm sure eventually we'll have big computing in space, but......what's the point now?I literally answered all these questions upthread. A 360TB Netflix appliance capable of serving 100Gbps uses just 650W peak (Starlink has ~6500W peak solar panels, and a LOT of that already ends up as heat), weighs ~30kg and that’s not weight-optimized at all. And likely you wouldn’t use an appliance like that as-is, but instead like a server-on-a-PCB that plugs directly into the Starlink bus.Once Starlink’s are much larger, it really actually does make sense.yeah but.....why? why would you put a netflix appliance up in space, when you want to actually have it more local to wherever you're serving your audience? like what would you host up there that makes more sense than hosting down on earth?? what exactly is the use case? it's not cheaper, it's not more reliable, it's definitely more complex and exposed to additional dangers like radiation and space debris.......why?The closest place to a remote user is the satellite.
Sure….that makes sense for starlink itself in order to provide the connection and why it’s been successful. But moving the datacentre too provides zero additional benefit.
Quote from: Naito on 02/11/2022 10:54 pmSure….that makes sense for starlink itself in order to provide the connection and why it’s been successful. But moving the datacentre too provides zero additional benefit.It would save uplink bandwidth. Maybe enough to repurpose the spectrum or save mass, but it seems like a bit of a stretch. What fraction of peak bandwidth does Netflix account for?
Quote from: DreamyPickle on 02/11/2022 07:24 pm Is bandwidth between ground stations and satellites known to be one of the limitations of the system? So far all I heard is how they're limited on how many user terminals they can support. Also, why would inter-satellite links be faster/better than bouncing down to the nearest ground station? Quote Also, renting a small data center near a gateway isn't free, either SpaceX is building ground stations next to data centers that already exist. Lasers must be cheaper than radio, or they would just use radio to connect the satellites. But also, I think it makes the most sense to just cohost the CDN server directly on the Starlink bus... once Starlinks get bigger.
Starlink pays Tier 1 ISPs backhaul costs for its data, which would be avoided if the content is hosted directly on the satellite.
This was discussed several years ago. I was told that transit costs are for sourcing data and that there was no cost for sinking data so a consumer level ISP that sinks more data than it sources would not be paying for transit. I would appreciate a pointer to information on what is actually metered and charged for in interactions between different tiers.