Does this imply that AI datacenters in orbit, needing to buffer power, end up overpowered and carrying spare energy storage? That sounds, in principle, like the batteries of an orbital power station. So would AI orbital datacenters have an ideal side gig as SSO SPS, providing dawn/dusk supplemental power to the grid thanks to the available energy storage?
Quote from: Robotbeat on 07/04/2025 04:54 pm
Quote from: RedLineTrain on 07/03/2025 04:00 pm
There are turning out to be a few types of data centers. For the AI data centers, there are two types: training and inference. Data centers devoted to training need buffered power because there are dramatic usage spikes. Buffering requires batteries, which are very heavy. …
This isn't true. In orbit, you either have short, 40-minute shadow periods (way shorter than the 12-18 hour stretches of low solar on Earth), which means very small batteries (batteries are INCREDIBLY good at high power but are heavy for long-duration storage... although better than people think), or you are in a sun-synchronous or high orbit and don't need batteries at all (other than for very short periods of eclipse, perhaps). Training workloads last for days, weeks, or months at a time. So in fact they're MORE consistent than inference, which goes based on demand from humans.
That's not the way it works, unfortunately.
https://semianalysis.com/2025/06/25/ai-training-load-fluctuations-at-gigawatt-scale-risk-of-power-grid-blackout/
Quote
Not only is the scale massive, but AI training workloads have a very unique load profile, unexpectedly rising and falling from full load to nearly idle in fractions of a second. … The issue caught leading AI labs by surprise. Meta's LLaMa 3 paper mentions challenges with power fluctuations, and that is "only" a 24,000-H100 cluster (30 MW of IT capacity).
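For a sense of scale on the eclipse argument above, here is a rough sketch. The 40-minute shadow and 12-18 hour low-solar window come from the post; the 15-hour mid-range value and per-kW basis are my own assumptions for illustration.

```python
# Back-of-envelope: battery energy needed per kW of load to ride through
# darkness, LEO eclipse vs. a terrestrial overnight lull.

load_kw = 1.0            # per-kW-of-load basis

leo_eclipse_h = 40 / 60  # ~40-minute shadow period in LEO
ground_dark_h = 15       # assumed mid-range of the 12-18 h low-solar window

leo_kwh = load_kw * leo_eclipse_h
ground_kwh = load_kw * ground_dark_h

print(f"LEO buffer:    {leo_kwh:.2f} kWh per kW of load")
print(f"Ground buffer: {ground_kwh:.1f} kWh per kW of load")
print(f"Ratio: roughly {ground_kwh / leo_kwh:.0f}x more storage on the ground")
```

Under these assumptions the terrestrial buffer is over twenty times larger per watt of load, which is the core of the small-batteries-in-orbit argument.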
Quote from: Asteroza on 07/08/2025 09:20 am
Does this imply that AI datacenters in orbit, needing to buffer power, end up overpowered and carrying spare energy storage? That sounds, in principle, like the batteries of an orbital power station. So would AI orbital datacenters have an ideal side gig as SSO SPS, providing dawn/dusk supplemental power to the grid thanks to the available energy storage?
The problem is that the grid needs power on demand (and needs delivery to stop on demand, too), and using 'excess' power from a bursty AI training datacentre does not provide that: it delivers massive spikes of power at 'random' times (from the grid's perspective). Put another way: the power-draw variation of AI training datacentres is already a problem for power grids, so turning it into a power-supply variation is the exact same problem with the sign flipped.
Quote from: RedLineTrain on 07/06/2025 02:20 pm
That's not the way it works, unfortunately.
https://semianalysis.com/2025/06/25/ai-training-load-fluctuations-at-gigawatt-scale-risk-of-power-grid-blackout/
Quote
Not only is the scale massive, but AI training workloads have a very unique load profile, unexpectedly rising and falling from full load to nearly idle in fractions of a second. … The issue caught leading AI labs by surprise. Meta's LLaMa 3 paper mentions challenges with power fluctuations, and that is "only" a 24,000-H100 cluster (30 MW of IT capacity).
Yes, it DOES work that way. Inverters (both solar and batteries) can respond in fractions of a second. This is NOT true for most of the terrestrial grid, which is why it can be a problem for terrestrial datacenters.
Quote from: edzieba on 07/08/2025 04:33 pm
The problem is that the grid needs power on demand (and needs delivery to stop on demand, too), and using 'excess' power from a bursty AI training datacentre does not provide that: it delivers massive spikes of power at 'random' times (from the grid's perspective). Put another way: the power-draw variation of AI training datacentres is already a problem for power grids, so turning it into a power-supply variation is the exact same problem with the sign flipped.
The reason it's a problem for TERRESTRIAL grids is that the terrestrial grid is dominated by thermal power plants, which respond slowly, over minutes or hours. Satellites use solar and batteries with DC-DC converters that can respond in fractions of a second. This is pretty fundamental.
Quote from: Robotbeat on 07/08/2025 09:21 pm
Yes, it DOES work that way. Inverters (both solar and batteries) can respond in fractions of a second. This is NOT true for most of the terrestrial grid, which is why it can be a problem for terrestrial datacenters.
If you have a source, yes. So batteries yes, solar no. But you're missing the point that these changes can be far, far faster than even inverters can respond. As I posted above, we *measured* 7 kHz variations at the MW scale.
Inverters typically have an open-loop bandwidth of less than 60 Hz and switching frequencies in the 3 kHz range (for large ones). These high-frequency changes cannot be controlled by inverter controls; they have to be managed at the electrical level.
Megapack is obviously a dumb comparison as it’s not at all weight optimized.
Quote from: Robotbeat on 07/08/2025 10:13 pm
Megapack is obviously a dumb comparison as it's not at all weight optimized.
I cross-edited you. As stated, one of my objections was that on Earth we are able to throw mass at the problem, while in orbit we are less able to do so. That holds even with Starship.
Quote from: RedLineTrain on 07/08/2025 10:16 pm
I cross-edited you. As stated, one of my objections was that on Earth we are able to throw mass at the problem, while in orbit we are less able to do so. That holds even with Starship.
It is not. I cross-edited you again. Even with your Megapack, Starship lowers battery launch cost to less than one percent of server costs.
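The "less than one percent" claim can be sanity-checked with a rough sketch. The 24,000-H100 / 30 MW cluster and the ~100 Wh/kg Megapack-level density appear in the thread; the launch price, buffer duration, and GPU price below are my own assumptions, not quoted figures.

```python
# Back-of-envelope check on "battery launch cost < 1% of server cost".

launch_cost_per_kg = 200.0   # $/kg, assumed full-reuse Starship pricing
battery_wh_per_kg = 100.0    # pack-level density, Megapack-like worst case

it_power_mw = 30.0           # cluster IT load (the 24,000-H100 example)
buffer_min = 5.0             # assumed minutes of full-load spike buffer

battery_kwh = it_power_mw * 1e3 * buffer_min / 60
battery_kg = battery_kwh * 1e3 / battery_wh_per_kg
launch_cost = battery_kg * launch_cost_per_kg

server_cost = 24_000 * 30_000.0   # 24k GPUs at an assumed $30k each

print(f"Battery mass:  {battery_kg / 1e3:.1f} t")
print(f"Launch cost:   ${launch_cost / 1e6:.1f}M")
print(f"Share of server cost: {launch_cost / server_cost:.2%}")
```

With these inputs the launch cost of the battery mass lands under one percent of the GPU bill; the conclusion is obviously sensitive to the assumed $/kg and buffer duration.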
Full reuse makes it possible to just throw mass at the problem if you really want to. A lot of people still think about things from the perspective of a gold-plated NASA probe or spy satellite, and not from the perspective of space launch being about as cheap as airfreight.
Quote from: Robotbeat on 07/08/2025 10:19 pm
It is not. I cross-edited you again. Even with your Megapack, Starship lowers battery launch cost to less than one percent of server costs.
I think that you're too sanguine on the energy density at pack level, by maybe a factor of 2-4. Megapack XL is at about 100 Wh/kg.
Quote from: RedLineTrain on 07/08/2025 10:44 pm
I think that you're too sanguine on the energy density at pack level, by maybe a factor of 2-4. Megapack XL is at about 100 Wh/kg.
SpaceX ALREADY uses lithium-ion batteries on Starlink, Dragon, Falcon, and Starship, and you can look up the performance: 160-200 Wh/kg. Sandbagging the numbers doesn't make sense. SpaceX is likely to use automotive-weight packs or better. You likely would for an orbital datacenter as well.
Quote from: Lee Jay on 07/08/2025 09:36 pm
If you have a source, yes. So batteries yes, solar no. But you're missing the point that these changes can be far, far faster than even inverters can respond. As I posted above, we *measured* 7 kHz variations at the MW scale. Inverters typically have an open-loop bandwidth of less than 60 Hz and switching frequencies in the 3 kHz range (for large ones). These high-frequency changes cannot be controlled by inverter controls; they have to be managed at the electrical level.
There is no need to respond to changes in DC loads any faster than fractions of a second, because you literally just use capacitors, which are in every power supply. Capacitors respond effectively infinitely fast for our purposes. This is, again, fundamental.
Batteries are around 200-300 Wh/kg for low-end automotive cells. For space, you can use newer ones at 400-500 Wh/kg, but they cost more.
Quote from: Robotbeat on 07/08/2025 10:51 pm
SpaceX ALREADY uses lithium-ion batteries on Starlink, Dragon, Falcon, and Starship, and you can look up the performance: 160-200 Wh/kg. Sandbagging the numbers doesn't make sense. SpaceX is likely to use automotive-weight packs or better. You likely would for an orbital datacenter as well.
I have no problems accepting that range (160-200 Wh/kg).
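How much the pack-level density assumption matters for eclipse buffering can be sketched roughly. The 100 and 160-200 Wh/kg figures come from the thread; the datacenter load is an arbitrary assumption.

```python
# Sensitivity of eclipse-buffer battery mass to pack-level energy density,
# for a hypothetical orbital datacenter.

load_mw = 100.0   # assumed datacenter load
eclipse_min = 40  # LEO shadow period from the thread

energy_kwh = load_mw * 1e3 * eclipse_min / 60

for wh_per_kg in (100, 160, 200):   # Megapack-like vs. flight-proven packs
    mass_t = energy_kwh * 1e3 / wh_per_kg / 1e3
    print(f"{wh_per_kg:3d} Wh/kg -> {mass_t:6.0f} t of batteries")
```

Doubling the pack-level density halves the battery mass to launch, which is why the Megapack-versus-flight-pack distinction argued above actually matters.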