Quote from: DigitalMan on 12/01/2025 06:35 pm
Elon has spoken about inference tasks, with the reason being that a short delay is acceptable

Other way around. Inference requires low latency. Training (which can take months) does not.
Quote from: meekGee on 12/03/2025 12:40 am
So fast forward to AI. You think the type of tasks it does is exhausted,

No, that's not what I'm saying at all.

I'm saying that even if tons of new applications are invented, there's still a fairly hard upper limit, ultimately set more or less by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt-level use.

Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.

Quote
I'm pretty sure "TeraWatts" was not thrown around without regard.

Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacturing, etc.). It's perfectly valid in that context; I just don't think that's the most likely scenario.
I think Musk is talking inference, not training, for the initial build-out. Training data centers need to be colocated, with extremely high-speed interconnects between nodes. This requires on-orbit assembly.
Quote from: Vultur on 12/03/2025 03:09 pm
Quote from: meekGee on 12/03/2025 12:40 am
So fast forward to AI. You think the type of tasks it does is exhausted,

No, that's not what I'm saying at all. I'm saying that even if tons of new applications are invented, there's still a fairly hard upper limit, ultimately set more or less by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt-level use. Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.

Quote
I'm pretty sure "TeraWatts" was not thrown around without regard.

Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacturing, etc.). It's perfectly valid in that context; I just don't think that's the most likely scenario.

But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application?

What if the AI is driving robotics? Or vehicles? Or corporate AI workers?
Quote from: DanielW on 12/03/2025 06:01 pm
I think Musk is talking inference, not training, for the initial build-out. Training data centers need to be colocated, with extremely high-speed interconnects between nodes. This requires on-orbit assembly.

Google is investigating whether they can avoid on-orbit assembly by instead using free-space optical data links between satellites flying in close (under 1 km) formation:

https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Quote
At the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R = 1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200 m under the influence of Earth's gravity.

The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.
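As a toy illustration of why such close formations can "breathe" but stay bounded (my own sketch, not from the Google paper — the 150 m offset is an illustrative number): in the linearized Clohessy–Wiltshire (Hill) model of relative motion about a circular orbit, a purely cross-track offset oscillates at the orbital frequency with no secular drift.

```python
import math

# Toy sketch (my own numbers, not from the Google paper): in the linearized
# Clohessy-Wiltshire (Hill) model, a purely cross-track offset follows
# z(t) = z0 * cos(n*t): it oscillates once per orbit and never drifts away.

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3          # mean Earth radius, m

a = R_EARTH + 650e3       # circular orbit radius at 650 km altitude, m
n = math.sqrt(MU / a**3)  # mean motion, rad/s
T = 2 * math.pi / n       # orbital period, s

print(f"orbital period at 650 km: {T / 60:.1f} min")

# A satellite offset 150 m cross-track swings between +150 m and -150 m once
# per orbit without expending any propellant:
z0 = 150.0  # initial cross-track offset, m (illustrative)
for frac in (0.0, 0.25, 0.5):
    z = z0 * math.cos(n * frac * T)
    print(f"t = {frac:.2f} T -> cross-track offset {z:+8.1f} m")
```

In the full model the in-plane (radial/along-track) motion is only bounded for drift-free initial conditions, and J2 plus drag perturb everything slowly — which is why the quoted passage still expects modest station-keeping.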
Quote from: Vultur on 12/03/2025 07:04 pm
That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people. "Technologically connected" people in this context doesn't necessarily just mean an internet connection; it means people who have decent access to/participation in the technological world economy (which isn't everyone). There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.

Ultimately people benefit, yes. But the customers might not be nearby. There are people remote-driving mine excavators from 1500 km away in Australia right now.

With global low-latency inference, you can easily answer the call of "We want to deploy thousands of robots in the middle of the Congo" by simply flying them in, instead of spending months or years planning all the necessary support infrastructure locally.
Quote from: launchwatcher on 12/03/2025 06:47 pm
Quote from: DanielW on 12/03/2025 06:01 pm
I think Musk is talking inference, not training, for the initial build-out. Training data centers need to be colocated, with extremely high-speed interconnects between nodes. This requires on-orbit assembly.

Google is investigating whether they can avoid on-orbit assembly by instead using free-space optical data links between satellites flying in close (under 1 km) formation:

https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/

Quote
At the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R = 1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200 m under the influence of Earth's gravity.

The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.

I think Musk is a step ahead. Since most power consumption is inference, he can launch that without the added complexity of tight interconnects. If built into larger Starlink satellites, you increase the addressable market for Starlink while having built-in data connections for the AI workloads.

It will also be hard to beat the latency, since the inference will most likely happen right over your head and be beamed straight back down (though not as good as living close to a datacenter).
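A rough check of the propagation numbers (my own illustration; the 1500 km fiber distance is an arbitrary comparison point, not from the post):

```python
# Back-of-envelope latency check: round-trip propagation time to a satellite
# directly overhead at 650 km, vs. fiber to a datacenter ~1500 km away.
# Ignores queuing, processing, and routing detours.

C_VACUUM = 299_792_458  # speed of light in vacuum, m/s
C_FIBER = 2.0e8         # approximate signal speed in optical fiber, m/s

def rtt_ms(distance_m: float, speed_m_s: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2.0 * distance_m / speed_m_s * 1e3

sat_rtt = rtt_ms(650e3, C_VACUUM)    # best case: satellite straight overhead
fiber_rtt = rtt_ms(1500e3, C_FIBER)  # terrestrial fiber over ~1500 km

print(f"650 km overhead satellite RTT: {sat_rtt:.1f} ms")
print(f"1500 km fiber RTT:            {fiber_rtt:.1f} ms")
```

The overhead case is the best case; slant range to a satellite low on the horizon is several times larger, so real latency depends on constellation geometry and beam handoff.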
Quote from: meekGee on 12/03/2025 05:57 pm
Quote from: Vultur on 12/03/2025 03:09 pm
Quote from: meekGee on 12/03/2025 12:40 am
So fast forward to AI. You think the type of tasks it does is exhausted,

No, that's not what I'm saying at all. I'm saying that even if tons of new applications are invented, there's still a fairly hard upper limit, ultimately set more or less by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt-level use. Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.

Quote
I'm pretty sure "TeraWatts" was not thrown around without regard.

Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacturing, etc.). It's perfectly valid in that context; I just don't think that's the most likely scenario.

But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application? What if the AI is driving robotics? Or vehicles? Or corporate AI workers?

That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people.

"Technologically connected" people in this context doesn't necessarily just mean an internet connection; it means people who have decent access to/participation in the technological world economy (which isn't everyone).

There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.

Also, I think the cost issues still hit.
I don't think AI doing a lot of those things will end up being cost effective at current total costs, once (or if) those costs are passed on to the end user. So the terawatt+ scenario isn't impossible, no. But I think it requires a fairly narrow needle to be threaded: if AI gets much more efficient, you don't get terawatt power use; if AI doesn't get more efficient, its use probably becomes more limited due to cost. There's a very narrow range in there (assuming those two scenarios don't overlap entirely, leaving no range) where AI is still energy-hungry enough to demand terawatt+ power but cheap enough to be used widely once real costs catch up.

(And that's assuming other factors not directly related to the technology itself don't get in the way. Which strikes me as likely.)
Sam Altman Has Explored Deal to Build Competitor to Elon Musk’s SpaceX

The OpenAI CEO has publicly talked about the possibility of building ‘a rocket company’ and the potential for developing data centers in space.

OpenAI Chief Executive Sam Altman has explored putting together funds to either acquire or partner with a rocket company, a move that would position him to compete against Elon Musk’s SpaceX.

Altman reached out to at least one rocket maker, Stoke Space, in the summer, and the discussions picked up in the fall, according to people familiar with the talks. Among the proposals was for OpenAI to make a series of equity investments in the company and end up with a controlling stake. Such an investment would total billions of dollars over time.
Since we are all numbers people here, let's lay out some numbers and think.

One traditional 1 GW data center for AI on Earth has initial setup costs of around $80,000,000,000 and a shelf life of around 5 years before the hardware has to be replaced. I am sure that a clever man with a lot of money can reduce that cost by some margin by using custom, self-produced AI chips and mass production, but assuming the reduction will be more than 50% does not sound very realistic, especially considering the need to space-harden the whole setup, fit it into a rocket, and make it safe at 2 g of acceleration... Then there will be the extra cost of moving that hardware.
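Amortizing the post's figures into a per-kWh number (the $80B capex and 5-year hardware life are the post's own rough assumptions, not established costs):

```python
# Amortize the assumed $80B capex for a 1 GW AI data center over an assumed
# 5-year hardware life, expressed as $/kWh of capacity delivered.

capex_usd = 80e9        # initial setup cost, $ (the post's assumption)
lifetime_years = 5      # assumed hardware replacement cycle
power_kw = 1e6          # 1 GW of load, expressed in kW

hours = lifetime_years * 365 * 24
capex_per_kwh = capex_usd / (power_kw * hours)

print(f"capex alone: ${capex_per_kwh:.2f} per kWh over {lifetime_years} years")
print(f"with a 50% cost reduction: ${capex_per_kwh / 2:.2f} per kWh")
```

Even halved, roughly $0.9/kWh of hardware amortization is far above typical grid electricity prices, which supports the post's point that the hardware, not the energy, dominates the economics.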
But wait, there's more:

Quote
Sam Altman Has Explored Deal to Build Competitor to Elon Musk’s SpaceX
Quote from: Vultur on 12/03/2025 07:04 pm
Quote from: meekGee on 12/03/2025 05:57 pm
Quote from: Vultur on 12/03/2025 03:09 pm
Quote from: meekGee on 12/03/2025 12:40 am
So fast forward to AI. You think the type of tasks it does is exhausted,

No, that's not what I'm saying at all. I'm saying that even if tons of new applications are invented, there's still a fairly hard upper limit, ultimately set more or less by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt-level use. Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.

Quote
I'm pretty sure "TeraWatts" was not thrown around without regard.

Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacturing, etc.). It's perfectly valid in that context; I just don't think that's the most likely scenario.

But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application? What if the AI is driving robotics? Or vehicles? Or corporate AI workers?

That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people. "Technologically connected" people in this context doesn't necessarily just mean an internet connection; it means people who have decent access to/participation in the technological world economy (which isn't everyone). There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.

Also, I think the cost issues still hit. I don't think AI doing a lot of those things will end up being cost effective at current total costs, once (or if) those costs are passed on to the end user. So the terawatt+ scenario isn't impossible, no. But it requires a fairly narrow needle to be threaded: if AI gets much more efficient, you don't get terawatt power use; if AI doesn't get more efficient, its use probably becomes more limited due to cost. There's a very narrow range in there (assuming those two scenarios don't overlap entirely, leaving no range) where AI is still energy-hungry enough to demand terawatt+ power but cheap enough to be used widely once real costs catch up.

(And that's assuming other factors not directly related to the technology itself don't get in the way. Which strikes me as likely.)

But if I'm understanding it correctly, you're saying that the breadth of AI activity is proportional to the population (e.g. times "participation fraction" times "utilization factor" etc.), so it can't grow exponentially.

Which is true.
But the utilization factor - AI footprint per participating person - can still grow by 1,000x or even 1,000,000x from what it is today.
I mean - look at semiconductor technology. It's capped, but we're not close to scratching the limit, even 70 years in.
That's part of what I'm saying, but not all of it.

The other half is that (once things settle out and it becomes clear what "AI" works best for which applications*) AI will presumably only be used when it's the more cost-effective way of doing things. So cost effectiveness sets a limit on that utilization factor. If the energy (and hardware) cost of the AI is greater than the cost of doing the same thing without AI, a rational business won't use AI for that task.

I think that if AI energy costs don't decrease a lot, it won't prove to be cost-effective for a lot of things once full costs are passed on.

*Which may well mean a move away from LLMs to more specialized, efficient, and reliable systems, using far less energy.

Quote
But the utilization factor - AI footprint per participating person - that factor can still grow by 1000x or a 1,000,000x from what it is today.

This is what I disagree with, at least if you define footprint in terms of energy use. AI using that much energy won't be cost effective, not even with space solar power.

Datacenters are already using several percent of total electricity use, though some of that is non-AI stuff. But AI is definitely well into the gigawatts. 1,000,000x would be well into petawatts, probably more than 1 MW/person even at a hypothetical peak world population of maybe 10–11B. I don't see how it can be cost effective for 99.7% of civilization's energy use to be AI.

Quote
I mean - look at semiconductor technology. It's capped, but we're not close to scratching the limit, even 70 years in.

We may not be close to the physical limit, but we're into the diminishing-returns phase. Moore's Law has effectively ended.

IMO we are (as a civilization) now investing far too much of our R&D effort into computer/IT technologies rather than technologies which are earlier on their S-curve, where investment would be far more beneficial.
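The petawatt arithmetic above can be sanity-checked. Note the ~10 GW baseline for current AI draw and the 10 billion peak population are rough assumptions for illustration, not measured figures:

```python
# Check the scaling arithmetic: gigawatts today x 1,000,000 -> petawatts,
# and more than 1 MW per person at a ~10 billion peak population.

ai_now_w = 10e9      # assumed current AI power draw ("well into the gigawatts")
growth = 1_000_000   # the hypothetical 1,000,000x utilization growth
population = 10e9    # hypothetical peak world population

ai_future_w = ai_now_w * growth
per_person_mw = ai_future_w / population / 1e6

print(f"AI power after 1,000,000x: {ai_future_w / 1e15:.0f} PW")
print(f"per person: {per_person_mw:.0f} MW")

# For scale: total human primary energy use today is roughly 19-20 TW.
other_w = 19e12
share = 100 * ai_future_w / (ai_future_w + other_w)
print(f"AI share of (AI + everything else today): {share:.1f}%")
```

With these assumptions the hypothetical AI load alone is about 500x civilization's entire current energy use, which is the scale of the objection being made.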
Eh. I don't really agree, but we'll have to see what happens... and it might be a couple decades before the answer is clear.

What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot-com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.