This is less convincing. Fiber-connected users at the edge often still have a tough time getting more than ~10 Mbps, yet they can easily consume many multiples of that between their cloud compute instance and their data storage service. And even with Ka-band RF, Starlink is reportedly getting something like 20 Gbps to each of their ground-station gateways. That's not exactly chump change.
The report finds that data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume approximately 6.7% to 12% of total U.S. electricity by 2028. The report indicates that total data center electricity usage climbed from 58 TWh in 2014 to 176 TWh in 2023 and estimates an increase to between 325 and 580 TWh by 2028.
The earth intercepts 174,000 TW (terawatts) of energy from the sun. 70,000 TW makes it to the surface. About 20,000 TW of that energy is captured by land plants, from rain forests to the soil microbes of the Mojave. Most of that captured energy drives chemical processes in the cells of the plants. About 200 TW remains to become food and fiber, which feeds animals and microbes and fungi. While the fraction used by animals and fungi is small, these organisms are the recyclers of nature, converting dead plants into the materials to make more plants. Interrupting those flows damages the biosphere.

A healthy adult consumes 2,000 Calories a day, which is about 100 watts. That is about half a terawatt for all the humans alive today. We consume about 14 TW of manufactured energy globally. A sustainable world of 10 billion humans, living at first-world standards, might consume 100 TW of manufactured energy by the end of the century. Obviously we are not going to get that by burning 200 TW of plant product and robbing the other animals and fungi. Nor are we going to get 100 TW from wind and solar, because without gigantic year-capacity energy storage devices, we must displace huge amounts of habitat to have enough production capacity for winter. Our bite of the biosphere is already too large. What to do?

In space, energy is not interrupted by clouds, or winter, or night. One square kilometer of sunlight is 1.36 billion watts. Solar cells in space, with 15% efficiency and 70% availability, can deliver 140 million watts per square kilometer. A 200 nm thick, graded-junction InP solar cell is highly radiation resistant and weighs 1 gram per square meter, or ten tons per square kilometer with a 10x thicker substrate. At $10,000/kg launch cost, that is 70 cents per watt, and launch can be orders of magnitude cheaper.

One pixel in the picture above, 75 by 75 kilometers, is 800 billion watts, 0.8 TW. Florida is 170,000 square kilometers, and that area could produce 24 TW in space.
The highlight in the middle of the marble represents more than 100 TW. Because of clouds, and night, and winter, the amount of area needed for ground solar would be far larger.
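The figures in the post above can be sanity-checked with a quick back-of-envelope calculation. This is just a sketch using the post's own assumptions (a ~1.36 kW/m² solar constant, 15% efficiency, 70% availability, ten tons per square kilometer, $10,000/kg to orbit):

```python
# Back-of-envelope check of the space-solar numbers in the post above.
# All inputs are the post's assumptions, not independent figures.

SOLAR_CONSTANT_W_M2 = 1361          # solar flux above the atmosphere, W/m^2
EFFICIENCY = 0.15                   # assumed cell efficiency
AVAILABILITY = 0.70                 # assumed fraction of time illuminated/usable

w_per_m2 = SOLAR_CONSTANT_W_M2 * EFFICIENCY * AVAILABILITY   # ~143 W/m^2
w_per_km2 = w_per_m2 * 1e6                                   # ~143 MW/km^2

mass_kg_per_km2 = 10_000            # "ten tons per square kilometer"
launch_usd_per_kg = 10_000
usd_per_watt = mass_kg_per_km2 * launch_usd_per_kg / w_per_km2

pixel_km2 = 75 * 75                 # one 75x75 km "pixel"
florida_km2 = 170_000

print(f"{w_per_km2 / 1e6:.0f} MW/km^2, ${usd_per_watt:.2f}/W launch cost")
print(f"one pixel: {pixel_km2 * w_per_km2 / 1e12:.2f} TW, "
      f"Florida-sized area: {florida_km2 * w_per_km2 / 1e12:.1f} TW")
```

The outputs land on the post's round numbers: roughly 140 MW/km², about 70 cents per watt of launch cost, ~0.8 TW per pixel, and ~24 TW for a Florida-sized area.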
Megaconstellations are [...] absolutely abysmal at achieving very high data rates to a single point.
I don't see how "cloud in orbit" can work. The latency from a user on the ground to any given orbital server will vary continuously from about 10 ms to about 70 ms as the server orbits.
There turn out to be a few types of data centers. For the AI data centers, there are two types: training and inference. Data centers devoted to training need buffered power because there are dramatic usage spikes. Buffering requires batteries, which are very heavy. …
Quote from: RedLineTrain on 07/03/2025 04:00 pm
There is turning out to be a few types of data centers. For the AI data centers, there are two types: training and inference. Data centers devoted to training need buffered power because there are dramatic usage spikes. Buffering requires batteries, which are very heavy. …

This isn’t true. In orbit, you either have short, 40-minute shadow periods (way shorter than the 12-18 hour stretches of low solar on Earth), which means very small batteries (batteries are INCREDIBLY good at high power but are heavy for long-duration storage… although better than people think), or you are in a sun-synchronous or high orbit and don’t need batteries at all (other than for very short periods of eclipse, perhaps).

Training workloads last for days, weeks, or months at a time. So in fact they’re MORE consistent than inference, which goes based on demand from humans.
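The eclipse-duration point above is easy to put in numbers. A rough sketch for a hypothetical 1 MW compute load, using an assumed ~150 Wh/kg pack-level lithium-ion specific energy (the load size and specific energy are illustrative assumptions, not figures from the thread):

```python
# Rough battery sizing for a hypothetical 1 MW load: a ~40-minute LEO
# eclipse vs. a 12-hour terrestrial low-solar period. The 150 Wh/kg
# pack-level specific energy is an assumption for illustration.

LOAD_MW = 1.0
WH_PER_KG = 150.0                   # assumed pack-level specific energy

def battery_mass_kg(load_mw: float, hours: float) -> float:
    """Battery mass needed to carry `load_mw` for `hours` (no margin)."""
    energy_wh = load_mw * 1e6 * hours
    return energy_wh / WH_PER_KG

leo_eclipse = battery_mass_kg(LOAD_MW, 40 / 60)   # ~4.4 t
ground_night = battery_mass_kg(LOAD_MW, 12)       # ~80 t

print(f"40 min LEO eclipse buffer: {leo_eclipse / 1000:.1f} t")
print(f"12 h ground buffer:        {ground_night / 1000:.1f} t")
```

Under these assumptions the orbital buffer is roughly 18x lighter per megawatt than a 12-hour terrestrial one, which is the crux of the argument quoted above.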
We consume about 14 TW of manufactured energy globally. A sustainable world of 10 billion humans, living at first-world standards, might consume 100 TW of manufactured energy by the end of the century.
Quote from: VSECOTSPE on 07/03/2025 01:57 pm
We consume about 14 TW of manufactured energy globally. A sustainable world of 10 billion humans, living at first-world standards, might consume 100 TW of manufactured energy by the end of the century.

I know it's not a quote from you, but this particular claim now has a name: "the Primary Energy Fallacy". It's the idea that all the energy we consume as fossil fuels has to be replaced by an equal amount of energy consumed as electricity in the future.

To give you an idea how utterly false that is, I was "consuming" 77 MWh per year in total energy for home and transportation. When I electrified, my fossil energy consumption went to zero and I now consume 10 MWh per year in total energy consumption, with 12.5 MWh per year produced by the solar on my roof.

In other words, there's no good reason that getting 10 billion humans to first-world standards couldn't be done while consuming *less* energy than we consume now, not 7 times more.
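The arithmetic behind the post above, using only the poster's own figures (scaling the thread's 14 TW global primary figure by the same ratio is an illustration, not a forecast):

```python
# The "primary energy fallacy" point, using the post's own numbers.

primary_mwh = 77.0      # pre-electrification total consumption (from the post)
electric_mwh = 10.0     # post-electrification total consumption (from the post)

ratio = primary_mwh / electric_mwh   # ~7.7x less energy for the same services
print(f"Primary-to-electric ratio: {ratio:.1f}x")

# Applying a similar ratio to the thread's 14 TW global primary figure
# (purely illustrative) implies far less than 100 TW of electricity
# would deliver the same useful services:
global_primary_tw = 14.0
implied_electric_tw = global_primary_tw / ratio
print(f"Implied electric equivalent: {implied_electric_tw:.1f} TW")
```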
I don't see how "cloud in orbit" can work. The latency from a user on the ground to any given orbital server will vary continuously from about 10 ms to about 70 ms as the server orbits. I suppose you could artificially fix the latency at 70 ms, but that's not very good service. Sure, the cloud could make all servers virtual and hop them to keep each virtual server near a fixed location, but the bandwidth to support this would massively exceed the user bandwidth.
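For a feel of where the latency variation comes from: the direct user-to-satellite propagation delay depends on the slant range, which grows as the satellite moves from overhead toward the horizon. A sketch for an assumed 550 km orbit (the poster's 10-70 ms spread presumably also includes inter-satellite routing to a distant server, which this ignores):

```python
import math

# Direct-link propagation delay vs. elevation angle for an assumed
# 550 km orbit. Ignores inter-satellite routing and processing delays.

R_EARTH_KM = 6371.0
C_KM_PER_MS = 299.792          # speed of light, km per millisecond

def slant_range_km(alt_km: float, elev_deg: float) -> float:
    """Line-of-sight distance to a satellite at `alt_km` altitude,
    seen from the ground at elevation angle `elev_deg`."""
    e = math.radians(elev_deg)
    r = R_EARTH_KM + alt_km
    return math.sqrt(r**2 - (R_EARTH_KM * math.cos(e))**2) - R_EARTH_KM * math.sin(e)

for elev in (90, 40, 25):
    d = slant_range_km(550, elev)
    rtt_ms = 2 * d / C_KM_PER_MS
    print(f"elevation {elev:2d} deg: slant range {d:6.0f} km, RTT {rtt_ms:4.1f} ms")
```

The direct-link round trip only varies from roughly 3.7 ms overhead to about 7.5 ms at a 25-degree minimum elevation; the rest of the 10-70 ms spread would come from how far the packet must then travel through the constellation to reach the particular server.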
Quote from: Robotbeat on 07/04/2025 04:54 pm
Quote from: RedLineTrain on 07/03/2025 04:00 pm
There is turning out to be a few types of data centers. For the AI data centers, there are two types: training and inference. Data centers devoted to training need buffered power because there are dramatic usage spikes. Buffering requires batteries, which are very heavy. …

This isn’t true. In orbit, you either have short, 40-minute shadow periods (way shorter than the 12-18 hour stretches of low solar on Earth), which means very small batteries (batteries are INCREDIBLY good at high power but are heavy for long-duration storage… although better than people think), or you are in a sun-synchronous or high orbit and don’t need batteries at all (other than for very short periods of eclipse, perhaps).

Training workloads last for days, weeks, or months at a time. So in fact they’re MORE consistent than inference, which goes based on demand from humans.

1) That means you would need to size the solar array to peak power demand, not average power demand, and deal with dumping that power somewhere when actual demand dips below that peak output (a PV cell is active as long as it is illuminated).

2) Training workloads are 'bursty', even during training operations, with spikes on the order of the actual peak draw occurring on the sub-second scale. This is even more of a problem for a solar-powered datacentre (in space or on the surface) because solar power is DC, not AC. This means there are no physically spinning generators to provide literal (mechanical) and electrical inertia to the grid to buffer load spikes. Worse still, for an orbiting datacentre the spacecraft must handle 100% of the fluctuations internally, as there is no grid connection.
Putting a server farm underwater makes it hard to maintain.

Putting a server farm in space makes it hard to power, hard to cool, and VERY hard to maintain.
Quote from: Lee Jay on 07/04/2025 11:30 pm
Putting a server farm underwater makes it hard to maintain.
Putting a server farm in space makes it hard to power, hard to cool and VERY hard to maintain.

It’s not harder to cool or power unless you already think large reusable launch vehicles won’t happen, in which case the whole thing won’t happen anyway.
Quote from: edzieba on 07/04/2025 08:00 pm
Quote from: Robotbeat on 07/04/2025 04:54 pm
Quote from: RedLineTrain on 07/03/2025 04:00 pm
There is turning out to be a few types of data centers. For the AI data centers, there are two types: training and inference. Data centers devoted to training need buffered power because there are dramatic usage spikes. Buffering requires batteries, which are very heavy. …

This isn’t true. In orbit, you either have short, 40-minute shadow periods (way shorter than the 12-18 hour stretches of low solar on Earth), which means very small batteries, or you are in a sun-synchronous or high orbit and don’t need batteries at all. Training workloads last for days, weeks, or months at a time. So in fact they’re MORE consistent than inference, which goes based on demand from humans.

1) That means you would need to size the solar array to peak power demand, not average power demand, and deal with dumping that power somewhere when actual demand dips below that peak output (a PV cell is active as long as it is illuminated).

2) Training workloads are 'bursty', even during training operations, with spikes on the order of the actual peak draw occurring on the sub-second scale. This is even more of a problem for a solar-powered datacentre (in space or on the surface) because solar power is DC, not AC. This means there are no physically spinning generators to provide literal (mechanical) and electrical inertia to the grid to buffer load spikes. Worse still, for an orbiting datacentre the spacecraft must handle 100% of the fluctuations internally, as there is no grid connection.

Oh please. Sub-second fluctuations? They’re called capacitors.

I used to build large computer systems for datacenters and even ran a small datacenter for a while.
The server power supplies already have to deal with all of the constraints you’re talking about. This isn’t exotic.
I'm one of those pathetic amateur electronics designers who don't really distinguish between capacitors, super-capacitors and rechargeable batteries. From a systems engineering perspective isn't the functionality essentially the same?
They all create a buffer between something that's variable and something the designer wants to keep reasonably constant. On the scale of a datacenter are batteries the obvious choice?
Not only is the scale massive, but AI training workloads have a very unique load profile, unexpectedly rising and falling from full load to nearly idle in fractions of a second. . . The issue caught leading AI labs by surprise. Meta’s LLaMa 3 paper mentions challenges with power fluctuations, and that is “only” a 24,000 H100 Cluster (30MW of IT capacity).
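The capacitor argument made upthread can be checked against the quoted cluster size: the energy involved in a sub-second swing is small even at datacenter scale. A sketch using the quote's 30 MW figure, with the swing depth and duration as illustrative assumptions:

```python
# Energy that a buffer must absorb or supply during one sub-second load
# swing on the quoted 30 MW cluster. The swing depth and duration are
# assumptions for illustration, not figures from the LLaMa 3 paper.

cluster_mw = 30.0          # "30MW of IT capacity" from the quote
swing_fraction = 0.8       # assumed full-load-to-near-idle swing depth
swing_s = 0.5              # assumed duration of one swing

energy_j = cluster_mw * 1e6 * swing_fraction * swing_s
energy_mj = energy_j / 1e6             # megajoules
energy_kwh = energy_j / 3.6e6          # kilowatt-hours

print(f"Buffer energy per swing: {energy_mj:.0f} MJ (~{energy_kwh:.1f} kWh)")
```

Under these assumptions each swing involves on the order of 12 MJ, a few kWh, which is in the territory of capacitor or supercapacitor banks rather than the heavy long-duration batteries discussed earlier in the thread.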