Sounds like the 1930s predictions of “gazillions of airships and zeppelins” . As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct
Elon has spoken about inference tasks, with the reason being that a short delay is acceptable
The idea, clearly, is that AI training tasks let you beam the value back to Earth (ie the finished trained model) without beaming the BTUs back to Earth (and vaporizing the planet, in the limiting growth case).
Quote from: leovinus on 12/01/2025 06:44 pmSounds like the 1930s predictions of “gazillions of airships and zeppelins” . As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ctYep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?
Quote from: DigitalMan on 12/01/2025 06:35 pmElon has spoken about inference tasks, with the reason being that a short delay is acceptable Other way around. Inference requires low latency. Training (which can take months) does not.
Quote from: DanClemmensen on 12/01/2025 06:55 pmQuote from: leovinus on 12/01/2025 06:44 pmSounds like the 1930s predictions of “gazillions of airships and zeppelins” . As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ctYep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?The flip side is that miniaturization and diminishing power draws of compute did not make the semiconductor industry shrink, but instead enabled more applications.I have no doubt that the cost of a single inference will go down with time. The question is just how pervasive AI will get.
Quote from: meekGee on 12/01/2025 10:17 pmQuote from: DanClemmensen on 12/01/2025 06:55 pmQuote from: leovinus on 12/01/2025 06:44 pmSounds like the 1930s predictions of “gazillions of airships and zeppelins” . As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ctYep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?The flip side is that miniaturization and diminishing power draws of compute did not make the semiconductor industry shrink, but instead enabled more applications.I have no doubt that the cost of a single inference will go down with time. The question is just how pervasive AI will get.I personally believe there is a fairly hard upper limit on demand. There are only so many people in the world, expected to peak somewhere in the general neighborhood of 10B sometime in the second half of this century, and no immediate prospect of getting everyone on Earth fully connected into the information economy either. I tend to think that 20-30 years from now the "wave of the future" will be in stuff that has very little to do with computers or IT. Nearly every technology has a sort of S-curve - slow start, rapid growth, plateau.
While talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.
Quote from: leovinus on 12/01/2025 06:44 pmSounds like the 1930s predictions of “gazillions of airships and zeppelins” . As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ctThat will be fun to watch... Billions in investment under the assumption that inference cannot be performed more efficiently. At some point somebody will break through and "poof".
Quote from: steveleach on 12/02/2025 07:41 amWhile talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.Exactly.AI is not just LLMs and video monitoring.Just a few years ago what you see today was strictly Sci Fi, and already people think they can see all the way to the far wall...The world market can accommodate at least 5 computers, was it?
Quote from: meekGee on 12/02/2025 08:12 amQuote from: steveleach on 12/02/2025 07:41 amWhile talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.Exactly.AI is not just LLMs and video monitoring.Just a few years ago what you see today was strictly Sci Fi, and already people think they can see all the way to the far wall...The world market can accommodate at least 5 computers, was it?I don't think it is analogous to that at all. At the time that quote was said, computers were a super extreme niche thing. AI is already all over the Internet (eg every Google search pulls up an AI Overview). Also, even if it were analogous... There's an upper limit on use of computers too. You could maybe get 10x the number of computerized devices we have now (if every person on Earth had a smartphone, smart watch, smart home etc) but not 100x (there just aren't going to be enough people to use that many devices). I am not claiming that current AI use is near peak (though it could be, if much of the current use is "artificial" demand - like AI Overviews on every Google search - which wouldn't be used once the true costs are passed on). I am only claiming that the peak is probably well below terawatt level power use.
So fast forward to AI. You think the type of tasks it does is exhausted,
I'm pretty sure "TeraWatts" was not thrown around without regard.
Quote from: meekGee on 12/03/2025 12:40 amSo fast forward to AI. You think the type of tasks it does is exhausted, No, that's not what I'm saying at all, here.I'm saying that even if tons of new applications are invented there's still a fairly hard upper limit which is ultimately more or less set by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt level use.Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.QuoteI'm pretty sure "TettaWatts" was not thrown around without regard.Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacture, etc.) It's perfectly valid in that context, I just don't think that's the most likely scenario.
I think Musk is talking inference, not training, for the initial build-out. Training data centers need to be co-located with extremely high-speed interconnects between nodes. This requires on-orbit assembly.
At the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity.The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.
Quote from: Vultur on 12/03/2025 03:09 pmQuote from: meekGee on 12/03/2025 12:40 amSo fast forward to AI. You think the type of tasks it does is exhausted, No, that's not what I'm saying at all, here.I'm saying that even if tons of new applications are invented there's still a fairly hard upper limit which is ultimately more or less set by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt level use.Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.QuoteI'm pretty sure "TettaWatts" was not thrown around without regard.Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacture, etc.) It's perfectly valid in that context, I just don't think that's the most likely scenario.But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application?What if the AI is driving robotics? Or vehicles? Or corporate AI workers?
Quote from: DanielW on 12/03/2025 06:01 pmI think Musk is talking inference not training for the initial build out.Training data-centers need to be colocated with extremely high-speed interconnects between nodes. This requires On-Orbit assembly.Google is investigating whether they can avoid on-orbit assembly by instead using free-space optical data links between satellites flying in close (under 1km) formation:https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/QuoteAt the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity.The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.
That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people."Technologically connected" people in this context doesn't necessarily just mean an internet connection, it means people who have decent access to/participation in the technological world economy (which isn't everyone).There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.
Quote from: Vultur on 12/03/2025 07:04 pmThat raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people."Technologically connected" people in this context doesn't necessarily just mean an internet connection, it means people who have decent access to/participation in the technological world economy (which isn't everyone).There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.Ultimately people benefit, yes. But the customers might not be nearby. There are people remote driving mine excavators from 1500km away in Australia right now.With global low-latency inference, you can easily answer the call to "We want to deploy thousands of robots in the middle of Congo" by simply flying them in, instead of having to plan all the necessary support infrastructure locally taking months/years.
Quote from: launchwatcher on 12/03/2025 06:47 pmQuote from: DanielW on 12/03/2025 06:01 pmI think Musk is talking inference not training for the initial build out.Training data-centers need to be colocated with extremely high-speed interconnects between nodes. This requires On-Orbit assembly.Google is investigating whether they can avoid on-orbit assembly by instead using free-space optical data links between satellites flying in close (under 1km) formation:https://research.google/blog/exploring-a-space-based-scalable-ai-infrastructure-system-design/QuoteAt the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R=1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200m, under the influence of Earth’s gravity.The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.I think Musk is a step ahead. Since most power consumption is inference he can launch that without the added complexity of tight interconnects. If built into larger Starlink satellites, you increase the addressable market for Starlink while having built-in data connects for the AI workloads.It will also be hard to beat the latency since the inference will most likely happen right over your head and beamed straight back down. (not as good as living close to a datacenter)
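Back-of-envelope on the latency claim: free-space round trip to a satellite directly overhead at 650 km, vs light in fiber to a terrestrial datacenter (the fiber refractive index of ~1.47 is an assumed typical value):

```python
C = 299_792_458          # speed of light in vacuum, m/s
C_FIBER = C / 1.47       # rough speed of light in optical fiber

alt = 650e3              # satellite directly overhead, m (best case)
rtt_sat = 2 * alt / C * 1e3           # round trip, ms
print(f"LEO overhead round trip: {rtt_sat:.1f} ms")

for d_km in (100, 500, 1500):
    rtt_fiber = 2 * d_km * 1e3 / C_FIBER * 1e3
    print(f"fiber to datacenter {d_km:>4} km away: {rtt_fiber:.1f} ms")
```

So a satellite straight overhead (~4.3 ms) beats fiber to a datacenter more than a few hundred km away, but loses to a nearby one, consistent with the "not as good as living close to a datacenter" caveat.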
Quote from: meekGee on 12/03/2025 05:57 pmQuote from: Vultur on 12/03/2025 03:09 pmQuote from: meekGee on 12/03/2025 12:40 amSo fast forward to AI. You think the type of tasks it does is exhausted, No, that's not what I'm saying at all, here.I'm saying that even if tons of new applications are invented there's still a fairly hard upper limit which is ultimately more or less set by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt level use.Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.QuoteI'm pretty sure "TettaWatts" was not thrown around without regard.Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacture, etc.) It's perfectly valid in that context, I just don't think that's the most likely scenario.But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application?What if the AI is driving robotics? Or vehicles? Or corporate AI workers?That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people."Technologically connected" people in this context doesn't necessarily just mean an internet connection, it means people who have decent access to/participation in the technological world economy (which isn't everyone).There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.Also I think the cost issues still hit. 
I don't think AI doing a lot of those things will end up being cost effective at current total costs, once (or if) those costs are passed on to the end user. So the terawatt+ scenario isn't impossible, no. But I think it requires a fairly narrow needle to be threaded. If AI gets much more efficient, you don't get terawatt power use; if AI doesn't get more efficient, its use probably becomes more limited due to cost. There's a very narrow range in there (assuming those two scenarios don't overlap entirely, leaving no range) where AI is still energy hungry enough to demand terawatt+ power but cheap enough to be used widely once real costs catch up.(And that's assuming other factors not directly related to the technology itself don't get in the way. Which strikes me as likely.)
Sam Altman Has Explored Deal to Build Competitor to Elon Musk’s SpaceX

The OpenAI CEO has publicly talked about the possibility of building ‘a rocket company’ and the potential for developing data centers in space.

OpenAI Chief Executive Sam Altman has explored putting together funds to either acquire or partner with a rocket company, a move that would position him to compete against Elon Musk’s SpaceX. Altman reached out to at least one rocket maker, Stoke Space, in the summer, and the discussions picked up in the fall, according to people familiar with the talks. Among the proposals was for OpenAI to make a series of equity investments in the company and end up with a controlling stake. Such an investment would total billions of dollars over time.
Since we are all numbers people here, let's lay out some figures and think: one traditional 1 GW data center for AI on Earth has initial setup costs of around $80 billion and a hardware service life of around 5 years before the hardware has to be replaced. I am sure that a clever man with a lot of money can reduce that cost by some margin by using custom, self-produced AI chips and mass production, but assuming that will be more than 50% does not sound very realistic, especially thinking of the need to space-harden the whole setup, put it into a rocket, make it safe to 2 g of acceleration... Then there will be the extra cost of moving that hardware.
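To put those numbers side by side with launch costs (the mass-per-kW and $/kg figures below are illustrative assumptions, not sourced estimates): if hardware dominates the capex, launch may matter less than intuition suggests.

```python
# All figures are assumptions for illustration, not sourced estimates.
capex_per_gw = 80e9      # terrestrial AI datacenter capex, $ per GW (from the post)
life_years = 5           # hardware refresh cycle (from the post)

hours = life_years * 8760
amortized = capex_per_gw / hours    # capex per GW, spread per operating hour
print(f"amortized capex: ${amortized / 1e6:.2f}M per GW-hour over the hardware's life")

# Orbital version: add launch. Hypothetical figures: 10 kg of satellite
# (PV + radiators + compute) per kW of load, at $200/kg to LEO.
kg_per_kw = 10
launch_per_kg = 200
launch_per_gw = 1e6 * kg_per_kw * launch_per_kg   # 1 GW = 1e6 kW
print(f"launch adds ${launch_per_gw / 1e9:.1f}B per GW, "
      f"{100 * launch_per_gw / capex_per_gw:.1f}% of the terrestrial capex")
```

Under these assumptions launch is only a few percent of the capex, so the economics hinge on exactly the space-hardening and integration overheads raised above, not on the rocket.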
But wait, there's more:QuoteSam Altman Has Explored Deal to Build Competitor to Elon Musk’s SpaceX
Quote from: Vultur on 12/03/2025 07:04 pmQuote from: meekGee on 12/03/2025 05:57 pmQuote from: Vultur on 12/03/2025 03:09 pmQuote from: meekGee on 12/03/2025 12:40 amSo fast forward to AI. You think the type of tasks it does is exhausted, No, that's not what I'm saying at all, here.I'm saying that even if tons of new applications are invented there's still a fairly hard upper limit which is ultimately more or less set by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt level use.Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.QuoteI'm pretty sure "TettaWatts" was not thrown around without regard.Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacture, etc.) It's perfectly valid in that context, I just don't think that's the most likely scenario.But doesn't that limit mean you're imagining only a "person interacting with an AI" kind of application?What if the AI is driving robotics? Or vehicles? Or corporate AI workers?That raises the limit but does not change the fundamentals. Those robots building stuff or vehicles delivering stuff are still ultimately providing goods and services to people. The total demand in the economy is still based on people."Technologically connected" people in this context doesn't necessarily just mean an internet connection, it means people who have decent access to/participation in the technological world economy (which isn't everyone).There's no point in having automated factories building things there is no demand for. It's physically possible, but doesn't make sense.Also I think the cost issues still hit. 
I don't think AI doing a lot of those things will end up being cost effective at current total costs, once (or if) those costs are passed on to the end user. So the terawatt+ scenario isn't impossible, no. But I think it requires a fairly narrow needle to be threaded. If AI gets much more efficient, you don't get terawatt power use; if AI doesn't get more efficient, its use probably becomes more limited due to cost. There's a very narrow range in there (assuming those two scenarios don't overlap entirely, leaving no range) where AI is still energy hungry enough to demand terawatt+ power but cheap enough to be used widely once real costs catch up.(And that's assuming other factors not directly related to the technology itself don't get in the way. Which strikes me as likely.)But if I'm understanding it correctly, you're saying that the breadth of AI activity is proportional to the population (e.g. times "participation fraction" times "utilization factor" etc..) so it can't grow exponentially.Which is true.
But the utilization factor - AI footprint per participating person - that factor can still grow by 1000x or a 1,000,000x from what it is today.
I mean - look at semiconductor technology. It's capped, but we're not close to reaching the limit, even 70 years in.
That's part of what I'm saying, but not all of it.The other half is that (once things settle out and it becomes clear what "AI" works best for what applications*) AI will presumably only be used when it's the more cost effective way of doing things.So cost effectiveness sets a limit on that utilization factor. If the energy (and hardware) cost of the AI is greater than the cost of doing the same thing without AI, a rational business won't use AI for that task.I think that if AI energy costs don't decrease a lot, it won't prove to be cost-effective for a lot of things once full costs are passed on.*Which may well mean a move away from LLMs to more specialized, efficient, and reliable systems, using far less energyQuoteBut the utilization factor - AI footprint per participating person - that factor can still grow by 1000x or a 1,000,000x from what it is today.This is what I disagree with, at least if you define footprint in terms of energy use. AI using that much energy won't be cost effective. Not even with space solar power.Datacenters are already using several percent of total electricity use. Though some of that is non-AI stuff ... But AI is definitely well into the gigawatts. 1,000,000x would be well into petawatts, probably more than 1MW/person even at a hypothetical peak world population of maybe 10-11B. I don't see how it can be cost effective for 99.7% of civilization's energy use to be AI.QuoteI mean - look at semiconductor technology. It's capped, but we're not close to scratching the limit, even 70 years in.We may not be close to the physical limit but we're into the diminishing returns phase. Moore's Law has effectively ended.IMO we are (as a civilization) now investing far too much of our R&D effort into computer/IT technologies rather than technologies which are earlier on their S curve, where investment would be far more beneficial.
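Putting rough numbers on the 1,000,000x utilization scenario (the current AI power draw is an assumed round figure):

```python
# Sanity check on the scaling argument. Current AI power draw is an
# assumption; pick a conservative round number.
ai_now_w = 10e9          # ~10 GW of AI load today (assumed)
growth = 1_000_000       # the hypothetical per-person footprint growth
peak_pop = 10e9          # ~10B people at peak population

future_w = ai_now_w * growth
per_person = future_w / peak_pop
print(f"{future_w / 1e15:.0f} PW total, {per_person / 1e6:.1f} MW per person")

# For comparison, total human primary energy use today is roughly 20 TW.
print(f"that is {future_w / 20e12:,.0f}x today's total primary energy use")
```

~10 PW would be hundreds of times humanity's entire current primary energy use, which is the core of the cost-effectiveness objection here.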
Eh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.
Quote from: Vultur on 12/05/2025 05:57 amEh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.I'm pretty sure that it'll go down a lot faster than it did with thermodynamic engines.
Quote from: meekGee on 12/05/2025 07:18 amQuote from: Vultur on 12/05/2025 05:57 amEh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.I'm pretty sure that it'll go down a lot faster than it did with thermodynamic engines.Probably*, but that doesn't really answer the underlying question. There was a very long time between the invention of steam engines and them really becoming widespread; 20-25 years or so (like the time gap between Teledesic and Starlink) would still be much faster.Economic problems/investment bubble bursts/etc can happen on a much, much shorter timescale than that, and can happen almost regardless of the underlying value or not of the technology. (The dot com bubble didn't mean the Internet didn't ultimately get everywhere, the housing bubble didn't mean people stopped needing housing.)I am *also* much less optimistic than you about the ultimate value of LLM based AI technology or "generative AI", but that's basically a *completely separate question*. The short term (next 4-5 years) concerns remain either way. I just don't want SpaceX to get so economically tied to xAI or some other AI thing that an investment bubble burst pushes Moon and Mars exploration back 20-25 years. And I want them to keep focus on Mars for the next couple synods, at least as much as they can with Artemis obligations.*Though I don't think the 19th-20th century industrial revolution analogy really holds. 
That was a period of very rapid demand growth driven by both overall population growth and much more of the world being drawn into the industrial economy.
I don't know that SpaceX is betting the farm on AI though. I think financially Starlink was already a good enough foundation.
But it's not just "if you don't try, you'll surely miss out". It's that in Musk world, the act of trying is what potentially makes it happen. It's not about catching the wave, it's about making the wave. As in electric cars, LEO constellations, etc.
That then raises the question of whether it's a good thing to make happen. I am far from convinced that AI that *is* cost-effective would actually be an overall net good for humanity, in a world with limited demand growth.
Quote from: meekGee on 12/05/2025 07:18 amQuote from: Vultur on 12/05/2025 05:57 amEh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.I'm pretty sure that it'll go down a lot faster than it did with thermodynamic engines.Compared to thermodynamic engines, AI is in the early "play around and see what works" stage.Eventually, with steam engines, we figured out the limiting laws that governed their operation (Carnot's efficiency limit and what would later become Odum's specific power limit), and this new theoretical understanding enabled rapid progress which quickly approached that limit.I expect we'll see the same in AI, where we develop a "Carnot's limit" for the maximum algorithmic efficiency of computation and ML (ie a software equivalent of what Landauer's Principle is for compute hardware).
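The hardware-side floor already has a number: Landauer's bound of k_B·T·ln 2 per irreversible bit operation. A quick comparison against an assumed current-accelerator efficiency (the 700 W at ~10^15 8-bit ops/s figure is illustrative, not any vendor's spec):

```python
import math

# Landauer's principle: erasing one bit costs at least k_B * T * ln(2).
K_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300                  # room temperature, K

e_landauer = K_B * T * math.log(2)        # J per bit erased
print(f"Landauer limit at 300 K: {e_landauer:.2e} J/bit")

# Rough efficiency of current ML hardware (assumed figure): an accelerator
# doing ~1e15 8-bit ops/s at ~700 W is on the order of 1e-13 J per bit-op.
e_hardware = 700 / (1e15 * 8)
print(f"assumed current hardware: {e_hardware:.1e} J/bit-op, "
      f"~{e_hardware / e_landauer:.0e}x above the limit")
```

That's roughly seven orders of magnitude of theoretical headroom on the hardware side alone; the analogous algorithmic bound for ML suggested above doesn't exist yet.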
Quote from: Twark_Main on 12/06/2025 01:11 pmQuote from: meekGee on 12/05/2025 07:18 amQuote from: Vultur on 12/05/2025 05:57 amEh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.I'm pretty sure that it'll go down a lot faster than it did with thermodynamic engines.Compared to thermodynamic engines, AI is in the early "play around and see what works" stage.Eventually, with steam engines, we figured out the limiting laws that governed their operation (Carnot's efficiency limit and what would later become Odum's specific power limit), and this new theoretical understanding enabled rapid progress which quickly approached that limit.I expect we'll see the same in AI, where we develop a "Carnot's limit" for the maximum algorithmic efficiency of computation and ML (ie a software equivalent of what Landauer's Principle is for compute hardware).Yup, and cyber science in the 21st century moves a lot faster than the very primitive investigation of things like combustion dynamics that limited early-20th-century development of thermodynamic engines (and which has only been robustly solved within maybe the last couple of decades).
....as we develop Super AI, we play the roles of creators, sort of little demi-gods.

6. The biggest single problem I see in this mad rush of ours to evolve is that it seems very unlikely that we will be able to keep control of our AI invention, as it becomes more intelligent than we are.
Pets or batteries. Take your pick.
Quote from: seb21051 on 12/05/2025 08:47 pm....as we develop Super AI, we play the roles of creators, sort of little demi-gods.6. The biggest single problem I see in this mad rush of ours to evolve, is that it seems very unlikely that we will be able to keep control of our AI invention, as it becomes more intelligent than we are. The biggest problem is that the demi-gods are motivated by quarterly financial reporting and not What Is Best For The Species. QuotePets or batteries. Take your pick.Iain Banks' "Minds" are best-case scenario.Pets is almost second-best, and almost certainly preferable to our extinction.
What I would give to be able to be a fly on the wall for the next 200 years. Neuralink needs to get its Psyche-Upload-To-The-Cloud service launched.
Quote from: seb21051 on 12/15/2025 01:22 amWhat I would give to be able to be a fly on the wall for the next 200 years. Neuralink needs to get its Psyche-Upload-To-The-Cloud service launched.I am very skeptical that uploading minds is even theoretically possible, even with arbitrarily advanced technology. The brain doesn't store information in the same way an electronic computer does; how would you get all the information out without destroying the 3D structure that is key to storing it? I think you'd need something like Star Trek scanners, which probably aren't physically possible.(Anyway, even if possible, it would be the Star Trek transporter problem... it's a copy of you, not *you*. It's not immortality, just a nonbiological form of reproduction, a sort of mental cloning.)
Quote from: Vultur on 12/15/2025 05:22 amQuote from: seb21051 on 12/15/2025 01:22 amWhat I would give to be able to be a fly on the wall for the next 200 years. Neuralink needs to get its Psyche-Upload-To-The-Cloud service launched.I am very skeptical that uploading minds is even theoretically possible, even with arbitrarily advanced technology. The brain doesn't store information in the same way an electronic computer does; how would you get all the information out without destroying the 3D structure that is key to storing it? I think you'd need something like Star Trek scanners, which probably aren't physically possible.(Anyway even if possible it would be the Star Trek transporter problem .. it's a copy of you not *you*. It's not immortality just a nonbiological form of reproduction, a sort of mental cloning.)All I can do is live in hope, lol. I really wouldn't be worried what version of me lives on if it became a possibility.
Summarizing, there seem to be two main arguments in favor of space-based AI compute:
1. Less PV and battery capacity needed vs terrestrial solar. Because AI requires so much power 24/7, this is actually a big deal.
2. It doesn't deplete the (large but still finite) anthropogenic waste heat rejection capacity of the surface of the Earth. This is the "if you tried to beam the Sun's power to Earth the planet would melt" reason.
#1 is the short-term reason, and it's the reason why Musk says space-based AI will be the cheapest option in 2-3 years. #2 is the long-term reason, and it doesn't affect current economics because we don't have a Joule Tax yet. This would be like a coal company planning around a Carbon Tax in 1890. It's too early. And yes, before someone says it, I'm aware that PV-powered chips don't change the Earth's radiant power balance, but they do still "leach" exergy (aka "useful work") from the biosphere, agriculture, and other industrial processes. There is a finite (but large) limit on the amount of available exergy that can be siphoned off from the Earth's total exergy budget before those other systems start being starved of useful work, due to unavoidable physical (thermodynamic) constraints. Hopefully we all understand thermodynamics enough that we can skip the back-and-forth about entropy vs exergy vs energy.
Quote from: Twark_Main on 12/15/2025 05:16 pmSummarizing, there seem to be two main arguments in favor of space-based AI compute: 1. Fewer PV and batteries needed vs terrestrial solar. Because AI requires so much power 24/7, this is actually a big deal. 2. It doesn't deplete the (large but still finite) anthropogenic waste heat rejection capacity of the surface of the Earth. This is the "if you tried to beam the Sun's power to Earth the planet would melt" reason.#1 is the short-term reason, and it's the reason why Musk says space-based AI will be there cheapest option in 2-3 years.#2 is the long-term reason, and it doesn't effect current economics because we don't have a Joule Tax yet. This would be like a coal company planning around a Carbon Tax in 1890. It's too early.And yes, before someone says it, I'm aware that PV-powered chips don't change the Earth's radiant power balance, but it does still "leach" exergy (aka "useful work") from the biosphere and agriculture and other industrial processes. There is a finite (but large) limit on the amount of available exergy that can be siphoned off from the Earth's total exergy budget before those other systems start being starved of useful work, due to unavoidable physics (thermodynamic) constraints.Hopefully we all understand thermodynamics enough that we can skip the back-and-forth about entropy vs exergy vs energy.#2 is true in theory but is sooooo far away that other mechanisms will kick in first.You're talking about directly influencing the energy balance of the planet, not just messing with greenhouse gasses.Starship: 1-100 GWatt/yr (reasonable estimate)Lunar: 0.1-10 TWatt/yr (very handwavy)Earth energy uptake 120,000 TWatt...Until then, it's mostly reason #1 and some other related ones.
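The orders-of-magnitude gap above can be made concrete with one line of arithmetic (a sketch using only the figures in the post above; the 10 TWatt/yr is the top of the "very handwavy" lunar range, held constant with no growth):

```python
# How long to match Earth's total solar uptake at a constant install rate?
# Figures taken from the post above.
earth_uptake_tw = 120_000      # total solar power Earth absorbs, in TW
lunar_rate_tw_per_yr = 10      # upper end of the "very handwavy" lunar estimate, TW/yr

years_to_match = earth_uptake_tw / lunar_rate_tw_per_yr
print(years_to_match)  # -> 12000.0 years at a constant (non-growing) rate
```

Which is why the discussion immediately turns to exponential growth: only compounding, not a constant rate, closes that gap on human timescales.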
So... If we install 10 TWatt/yr over 10 millennia, we'll have ourselves a competition.
Quote from: meekGee on 12/15/2025 07:46 pmQuote from: Twark_Main on 12/15/2025 05:16 pmSummarizing, there seem to be two main arguments in favor of space-based AI compute: 1. Fewer PV and batteries needed vs terrestrial solar. Because AI requires so much power 24/7, this is actually a big deal. 2. It doesn't deplete the (large but still finite) anthropogenic waste heat rejection capacity of the surface of the Earth. This is the "if you tried to beam the Sun's power to Earth the planet would melt" reason.#1 is the short-term reason, and it's the reason why Musk says space-based AI will be there cheapest option in 2-3 years.#2 is the long-term reason, and it doesn't effect current economics because we don't have a Joule Tax yet. This would be like a coal company planning around a Carbon Tax in 1890. It's too early.And yes, before someone says it, I'm aware that PV-powered chips don't change the Earth's radiant power balance, but it does still "leach" exergy (aka "useful work") from the biosphere and agriculture and other industrial processes. There is a finite (but large) limit on the amount of available exergy that can be siphoned off from the Earth's total exergy budget before those other systems start being starved of useful work, due to unavoidable physics (thermodynamic) constraints.Hopefully we all understand thermodynamics enough that we can skip the back-and-forth about entropy vs exergy vs energy.#2 is true in theory but is sooooo far away that other mechanisms will kick in first.You're talking about directly influencing the energy balance of the planet, not just messing with greenhouse gasses.Starship: 1-100 GWatt/yr (reasonable estimate)Lunar: 0.1-10 TWatt/yr (very handwavy)Earth energy uptake 120,000 TWatt...Until then, it's mostly reason #1 and some other related ones.Indeed! I thought I made that clear (in fact, pointing out this distinction was my main reason for posting), but it's always good to re-iterate and re-phrase the point."#2... 
would be like a coal company planning around a Carbon Tax in 1890. It's too early." "(large but still finite) anthropogenic... heat capacity" We may wish that fossil fuel companies had the foresight to address global warming from the start. Well, Tesla is doing exactly what we might wish, but the popular reaction is instead to ridicule the fix as "uneconomical." Fortunately, reason #1 is enough to make space-based AI economical even today. But you gotta give points for forward thinking... Quote from: meekGee on 12/15/2025 07:46 pmSo... If we install 10 TWatt/yr over 10 millennia, we'll have ourselves a competition. Growth is exponential, more like 2% CAGR. Thanks. A rare opportunity to use "exponential" in its real math meaning instead of "very big."
Quote from: Twark_Main on 12/15/2025 08:00 pmQuote from: meekGee on 12/15/2025 07:46 pmQuote from: Twark_Main on 12/15/2025 05:16 pmSummarizing, there seem to be two main arguments in favor of space-based AI compute: 1. Fewer PV and batteries needed vs terrestrial solar. Because AI requires so much power 24/7, this is actually a big deal. 2. It doesn't deplete the (large but still finite) anthropogenic waste heat rejection capacity of the surface of the Earth. This is the "if you tried to beam the Sun's power to Earth the planet would melt" reason.#1 is the short-term reason, and it's the reason why Musk says space-based AI will be there cheapest option in 2-3 years.#2 is the long-term reason, and it doesn't effect current economics because we don't have a Joule Tax yet. This would be like a coal company planning around a Carbon Tax in 1890. It's too early.And yes, before someone says it, I'm aware that PV-powered chips don't change the Earth's radiant power balance, but it does still "leach" exergy (aka "useful work") from the biosphere and agriculture and other industrial processes. There is a finite (but large) limit on the amount of available exergy that can be siphoned off from the Earth's total exergy budget before those other systems start being starved of useful work, due to unavoidable physics (thermodynamic) constraints.Hopefully we all understand thermodynamics enough that we can skip the back-and-forth about entropy vs exergy vs energy.#2 is true in theory but is sooooo far away that other mechanisms will kick in first.You're talking about directly influencing the energy balance of the planet, not just messing with greenhouse gasses.Starship: 1-100 GWatt/yr (reasonable estimate)Lunar: 0.1-10 TWatt/yr (very handwavy)Earth energy uptake 120,000 TWatt...Until then, it's mostly reason #1 and some other related ones.Indeed! 
I thought I made that clear (in fact, pointing out this distinction was my main reason for posting), but it's always good to re-iterate and re-phrase the point."#2... would be like a coal company planning around a Carbon Tax in 1890. It's too early.""(large but still finite) anthropogenic... heat capacity"We may wish that fossil fuel companies had the foresight to address global warming from the start. Well Tesla is doing exactly what we might wish, but the popular reaction is instead to ridicule the fix as "uneconomical."Fortunately, reason #1 is enough to make space-based AI economical even today. But you gotta give points for forward thinking... Quote from: meekGee on 12/15/2025 07:46 pmSo... If we install 10 TWatt/yr over 10 millennia, we'll have ourselves a competition.Growth is exponential, more like 2% CAGR.Thanks. A rare opportunity to use "exponential" in its real math meaning instead of "very big." Awright!---Meanwhile, double checking: start with 1 GWatt/yr and grow by 10% every year; getting to a cumulative 100,000 TWatt takes about 170 years. So yup, about the same time as from the industrial revolution to now.
Though, ChatGPT argues that worldwide energy use, since 1800, only grew by a factor of 30, not 100,000,000... (I pushed back a bit, but it's defending that 30x pretty well.)
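The "170 years" figure above checks out if you read it as cumulative installed capacity (a quick sketch; the only inputs are the 1 GWatt/yr start, the 10%/yr growth, and the ~100,000 TWatt target from the posts above):

```python
# Cumulative installed capacity: start at 1 GW installed in year one,
# grow the annual install rate by 10% each year, and count years until
# the running total reaches 100,000 TW (1e8 GW).
rate_gw = 1.0      # capacity installed this year, in GW
total_gw = 0.0     # cumulative installed capacity, in GW
years = 0
while total_gw < 1e8:   # 100,000 TW expressed in GW
    total_gw += rate_gw
    rate_gw *= 1.10
    years += 1
print(years)  # -> 170
```

Note that the annual install *rate* alone would take about 190 years to reach that figure; it's the geometric sum of all prior years that gets there in ~170.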
Your brain and mine require something like 20 to 25 W of power. I don't think that the current models of AI capture what's really going on (they're off by many orders of magnitude, even correcting for the analog-to-digital conversion). I don't know the correct algorithm, but I do know that this isn't it. The current algorithms *might* be equivalent to the retina, but that's about it.
I see no reason to expect exponential growth in energy use over the next century or so. That's been driven so far by exponential growth in world population + more and more of the world population being brought into one connected economy. The first is slowing drastically and the second is self-limiting (there are no more markets as big as China and India). Growth may effectively end. It isn't a fundamental law that it will/must continue.
Quote from: Vultur on 12/16/2025 03:39 pmI see no reason to expect exponential growth in energy use over the next century or so. That's been driven so far by exponential growth in world population + more and more of the world population being brought into one connected economy. The first is slowing drastically and the second is self-limiting (there are no more markets as big as China and India). Growth may effectively end. It isn't a fundamental law that it will/must continue.If you're talking about the conversation between TM and myself, I don't think either of us took it literally or seriously. Just juggling nomenclature and OOMs. Nothing remains exponential across that many OOMs; some other limit always kicks in.
Quote from: meekGee on 12/16/2025 05:18 pmQuote from: Vultur on 12/16/2025 03:39 pmI see no reason to expect exponential growth in energy use over the next century or so. That's been driven so far by exponential growth in world population + more and more of the world population being brought into one connected economy. The first is slowing drastically and the second is self-limiting (there are no more markets as big as China and India). Growth may effectively end. It isn't a fundamental law that it will/must continue.If you're talking about the conversation between TM and myself, I don't think either of us took it literally or seriously. Just juggling nomenclature and OOMs.Nothing remains exponential across that many OOMs, some other limit always kicks in. Exponential curves are an interesting topic. Yes, they all eventually turn into logistic (S) curves, due to some limitation. See for example DRAM prices: basically no change for 10 years. We have hit a physical cost/bit limit. OTOH, you CAN turn something that has matured (top of the S curve) into new exponential growth. And there's no better example than "new space" (SpaceX), regarding launch costs per kilogram. Did physics change? No, the same chemical limitations in general apply; we haven't gone with fission or fusion. So what changed?
1. Material science (high-volume 3D printing, new alloys)
2. CFD being able to simulate an entire rocket engine - which was due to the exponential reduction in compute price/performance over the last 30 years. The methalox engine, it can be argued, isn't truly possible to make reliable without the ability to simulate it. This is happening in many fields.
3. The ability to reuse rocket hardware, which is a combination of compute power, better engines, sensors, and maturing engineering, plus the willingness to try and fail a lot (itself a societal product of the 1990s software boom)
4. 40+ years of reusable heat shield improvements (slow but steady), reaching critical mass.
5. Probably stuff I'm missing.
Falcon 9 is about a 3x cost reduction, and Starship will be another 10-40x on top of that. That's definitely in the exponential-growth area of the S-curve. So, getting around to the topic of energy, do we have such exponential growth? Kinda. Not in the amount of watts produced, for sure - we are nearing the top of that S-curve. But in the ability to produce those watts cheaply in space from solar power? Absolutely. It's now less than $1/watt of capacity to install solar power on Earth; it was a lot more two decades ago. Mix that with the exponential drop in launch costs, and plenty of sunshine in space, and you have a new opportunity to grow the power produced by the human race. And rather than beaming it down (which Elon says will never be efficient enough), we can just beam down the few hundred megabytes of results from compute (AI, whatever), which is micropennies of energy. So I think if one counts the net power produced by the human race, we will escape our current "top of the S curve is readily apparent" limits and go on another exponential binge. Doing this in space for compute is good; we aren't taking away electricity from starving babies.
The question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.
Quote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.
Quote from: Twark_Main on 12/16/2025 08:51 pmQuote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.Not really. I don't think the idea was considered at all (even to be dismissed) in the 1800s. Svante Arrhenius wrote something brief about it in the early 1900s, though. (But not as a problem. Not only was the amount of fuel burned far smaller then, the *second order effects* that cause the real problems weren't understood. A 1 or 2 degree rise would have sounded more beneficial than harmful in northern/western Europe.)
But that aside, there's a huge difference. Not only was world population way lower then, only a tiny proportion of it (pretty much Western Europe and the Northeast/Midwest US) was industrialized at all. So the accessible demand, and thus energy use, grew enormously over the course of the 20th century, as world population grew AND a much larger proportion of the world population was brought into the world economy. That's not really happening any more. Sure, not all of the world population is participating in the global economy ... But with rural electrification in China and India, there aren't any *equivalently large* untapped markets.
This situation hasn't existed in at least 200 years; there's no real history to go by as to how that will interact with a technological/industrial economy, as the entire industrial age has been one of very rapid demand growth.
Arrhenius's paper was in 1896. I chose my dates carefully.
I imagine that in 1896 you would've argued that everyone only "needs" an electric clothes iron and a toaster, so there's no way this industrial revolution thing has much steam left in it anyway. There doesn't seem to be a ceiling on quality-of-life. Those rural Indian and Chinese people aren't "done." They want all the same creature comforts, including those from AI, and they should have access to it. People usually get (rightfully) upset if you suggest otherwise.
"And that's why I'm so confident in my prediction."
I'm confused. How does using googolwatts of power in space negatively affect the Earth at all? The fact that we don't have to generate the power on Earth is a positive effect. What's the negative? (Leaving aside AI armageddon predictions, let's stick with energy.)
Quote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.When we first invented plastics, we never imagined that microplastic pollution could ever get so massive that it becomes an issue. Yet here we are. I, for one, welcome our strange new "driving by looking out the front windshield" overlords. Also, if you think humans can't affect the total exergy (heat engine) of Earth, do a quick check on the current impact of a little thing called "agriculture" on the total primary energy (i.e., exergy) available for the biosphere. This isn't some theoretical / unimaginable scope and scale. We're applying major stress to this huge system already, and this would be on top of that. Note that impact can greatly exceed the watts of power you generate as a human. Cutting down a rain forest and replacing it with solar panels comes with no guarantee that the solar panels use energy as effectively as the previous rain forest.
Quote from: Twark_Main on 12/16/2025 08:51 pmQuote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.When we first invented plastics, we never imagined that microplastic pollution could ever get so massive that it becomes an issue. Yet here we are.I, for one, welcome our strange new "driving by looking out the front windshield" overlords. Also, if you think humans can't effect the total exergy (heat engine) of Earth, do a quick check on the current impact of a little thing called "agriculture" on the total primary energy (*exergy) available for the biosphere.This isn't some theoretical / unimaginable scope and scale. We're applying major stress to this huge system already, and this would be on top of that.Note that impact can greatly exceed the watts of power you generate as a human. Cutting down a rain forest and replacing it with solar panels comes with no guarantee that the solar panels use energy as effectively as the previous rain forest. Yup but those are qualitative hand-wavy analogies. They don't hold up to the 100,000,000x mismatch.I thought world power consumption since the 1700s would be 100,000x. Turns out it's "only" less than 100x.Broadly, even though our great great grandfathers didn't think emissions would be the problem that we later figured it is, it does not follow that solar panels on Earth risk directly affecting Earth's energy balance, even if we don't think so today... It's a word analogy, but not a logical argument.Not even in 200 years.
Quote from: meekGee on 12/16/2025 10:21 pmQuote from: Twark_Main on 12/16/2025 08:51 pmQuote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.When we first invented plastics, we never imagined that microplastic pollution could ever get so massive that it becomes an issue. Yet here we are.I, for one, welcome our strange new "driving by looking out the front windshield" overlords. Also, if you think humans can't effect the total exergy (heat engine) of Earth, do a quick check on the current impact of a little thing called "agriculture" on the total primary energy (*exergy) available for the biosphere.This isn't some theoretical / unimaginable scope and scale. We're applying major stress to this huge system already, and this would be on top of that.Note that impact can greatly exceed the watts of power you generate as a human. Cutting down a rain forest and replacing it with solar panels comes with no guarantee that the solar panels use energy as effectively as the previous rain forest. Yup but those are qualitative hand-wavy analogies. They don't hold up to the 100,000,000x mismatch.I thought world power consumption since the 1700s would be 100,000x. Turns out it's "only" less than 100x.Broadly, even though our great great grandfathers didn't think emissions would be the problem that we later figured it is, it does not follow that solar panels on Earth risk directly affecting Earth's energy balance, even if we don't think so today... 
It's a word analogy, but not a logical argument. Not even in 200 years. The bold says you're still thinking linearly, not exponentially. 100x in 200 years? That's quite fast! That means 100,000x only takes 500 years. That's a moderately recent pub in certain parts of the world.
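The compounding claim above is easy to verify (a minimal sketch; the only inputs are the 100x-in-200-years figure from the posts above):

```python
import math

# Annual growth factor implied by 100x growth over 200 years (~2.3%/yr),
# then the time that same rate needs to reach 100,000x.
growth = 100 ** (1 / 200)
years = math.log(100_000) / math.log(growth)
print(round(years))  # -> 500
```

The exactness is no accident: 100,000 is 100^2.5, so at a rate that gives 100x per 200 years it takes exactly 2.5 × 200 = 500 years.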
Quote from: Twark_Main on 12/18/2025 06:18 amQuote from: meekGee on 12/16/2025 10:21 pmQuote from: Twark_Main on 12/16/2025 08:51 pmQuote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.When we first invented plastics, we never imagined that microplastic pollution could ever get so massive that it becomes an issue. Yet here we are.I, for one, welcome our strange new "driving by looking out the front windshield" overlords. Also, if you think humans can't effect the total exergy (heat engine) of Earth, do a quick check on the current impact of a little thing called "agriculture" on the total primary energy (*exergy) available for the biosphere.This isn't some theoretical / unimaginable scope and scale. We're applying major stress to this huge system already, and this would be on top of that.Note that impact can greatly exceed the watts of power you generate as a human. Cutting down a rain forest and replacing it with solar panels comes with no guarantee that the solar panels use energy as effectively as the previous rain forest. Yup but those are qualitative hand-wavy analogies. They don't hold up to the 100,000,000x mismatch.I thought world power consumption since the 1700s would be 100,000x. Turns out it's "only" less than 100x.Broadly, even though our great great grandfathers didn't think emissions would be the problem that we later figured it is, it does not follow that solar panels on Earth risk directly affecting Earth's energy balance, even if we don't think so today... 
It's a word analogy, but not a logical argument. Not even in 200 years. The bold says you're still thinking linearly, not exponentially. 100x in 200 years? That's quite fast! That means 100,000x only takes 500 years. That's a moderately recent pub in certain parts of the world. No, it does not, since exponential growth in reality doesn't persist like that. Something always kicks in. Otherwise, each species of bacteria would have outweighed the planet already. In our case, power consumption on Earth can't increase again and again by the factor of 100x (30x actually, but rounding up) that it did during the industrial revolution. It might do so as humans go into space in the next centuries, but that wasn't your premise - you were talking about affecting the energy balance of the Earth. It's just that not every extrapolation is true; you need to always look at underlying limitations, resource constraints, etc.
Quote from: meekGee on 12/18/2025 03:13 pmQuote from: Twark_Main on 12/18/2025 06:18 amQuote from: meekGee on 12/16/2025 10:21 pmQuote from: Twark_Main on 12/16/2025 08:51 pmQuote from: meekGee on 12/16/2025 08:23 pmThe question was whether the use of solar in space could grow to be comparable with the total solar insolation the Earth receives.Because exponents will get you there, what's 100,000,000x between friends.My point was, no. It'll stop long before that, simply because Earth itself as a consumer is finite.Coal owners in the 1800s comforted themselves with similar hand-waving stories why the effects of CO2 could never change the total energy balance of Earth. Earth was equally finite then too.When we first invented plastics, we never imagined that microplastic pollution could ever get so massive that it becomes an issue. Yet here we are.I, for one, welcome our strange new "driving by looking out the front windshield" overlords. Also, if you think humans can't effect the total exergy (heat engine) of Earth, do a quick check on the current impact of a little thing called "agriculture" on the total primary energy (*exergy) available for the biosphere.This isn't some theoretical / unimaginable scope and scale. We're applying major stress to this huge system already, and this would be on top of that.Note that impact can greatly exceed the watts of power you generate as a human. Cutting down a rain forest and replacing it with solar panels comes with no guarantee that the solar panels use energy as effectively as the previous rain forest. Yup but those are qualitative hand-wavy analogies. They don't hold up to the 100,000,000x mismatch.I thought world power consumption since the 1700s would be 100,000x. 
Turns out it's "only" less than 100x.Broadly, even though our great great grandfathers didn't think emissions would be the problem that we later figured it is, it does not follow that solar panels on Earth risk directly affecting Earth's energy balance, even if we don't think so today... It's a word analogy, but not a logical argument.Not even in 200 years.The bold says you're still thinking linearly, not exponentially.100x in 200 years? That's quite fast! That means 100,000x only takes 500 years. That's a moderately recent pub in certain parts of the world. No it does not, since exponential growth in reality doesn't persist like that. Something always kicks in. Otherwise, each species of bacteria would have outweighed the planet already.In our case, power consumption on earth can't increase again and again by the factor of 100x (30x actually but rounding up) that it did during the industrial revolution.It might do so as humans go into space in the next centuries, but that wasn't your premise - you were talking about affection the energy balance of the earth.It's just that not every extrapolation is true, you neednto always look wt underlying limitations, resource constraints, etc.Again, in earlier times you could've easily argued that "energy use can't grow further" because there isn't enough wood that grows in all the world's forests. We all know why that prediction was wrong (coal).Similarly, saying "power consumption on earth can't increase" ignores the fact that we can use power in space to serve needs on Earth (which, reminder, is the subject of this thread).How many times have people predicted the "end of growth?" How many times have they been wrong, so far? The favorable economics of computing in space are independent of any particular AI or compute technology. This isn't like predicting "everyone will own a DeSoto automobile," this is like predicting "wheeled vehicles will be the dominant form of medium-distance transport."
Again, in earlier times you could've easily argued that "energy use can't grow further" because there isn't enough wood that grows in all the world's forests. We all know why that prediction was wrong (coal).Similarly, saying "power consumption on earth can't increase" ignores the fact that we can use power in space to serve needs on Earth (which, reminder, is the subject of this thread).How many times have people predicted the "end of growth?" How many times have they been wrong, so far?
Quote from: Twark_Main on 12/18/2025 05:32 pmAgain, in earlier times you could've easily argued that "energy use can't grow further" because there isn't enough wood that grows in all the world's forests. We all know why that prediction was wrong (coal).Similarly, saying "power consumption on earth can't increase" ignores the fact that we can use power in space to serve needs on Earth (which, reminder, is the subject of this thread).How many times have people predicted the "end of growth?" How many times have they been wrong, so far? Supply vs demand though.I agree it has historically been a very bad bet that technology won't increase supply to match demand, e.g. the mid 20th century fears of vast overpopulation famines.Transition from wood to coal to oil to current energy mix increased supply, but demand was already increasing rapidly due to both rapid population growth and more and more of the world population being brought into the world economy.Given current falling population growth rates, and the pace of industrialization in the highest population parts of the world such as China and India, and the fact that new technologies are often more energy efficient to serve the same needs (with current LLMs being a huge exception to that general trend, admittedly) it is entirely plausible that world growth in energy *demand* will end by the middle of this century or so. I think datacenters can raise the peak but not extend the timing ... Either vastly more efficient ways to run "AI" in some sense (LLMs or not) will be found, or costs will increase sufficiently that many current uses will be non viable. And I think one or the other will happen way before 2050.If Space Datacenters do end up being cheaper ... Well they're still not free. Question is whether they can be cheap enough to be funded by advertising money and actual subscriptions. 
AI still needs to be supported by the rest of the economy. That doesn't mean it's forever, but it does mean future demand growth would not be driven by a continuation of current trends. It would have to be driven either by a reversal of current trends (e.g. if total human population starts increasing again, due either to cultural or socioeconomic changes or to off-Earth settlements growing on their own) or by something entirely new (e.g. if super projects like an interstellar-probe-launching laser are built).
World average per capita energy consumption is 2500W primary energy. US per capita primary energy consumption is 4 times that, 10kW. If you spend an hour flying a Learjet per day, that's about 25 times that, 250kW.

I think there's still quite a lot of room for global energy use to increase without more population growth. Waste heat will become a constraint even if it's all carbon neutral.
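A quick sanity check on those per-capita figures. The Learjet fuel burn (~200 US gal/hr of Jet-A) is an assumed round number for illustration, not from the post; actual burn varies by model and flight profile:

```python
# Rough per-capita power arithmetic behind the post above.
# The 200 gal/hr Jet-A burn rate is an ASSUMED illustrative figure.

WORLD_PER_CAPITA_W = 2500                   # world average primary energy, W
US_PER_CAPITA_W = 4 * WORLD_PER_CAPITA_W    # ~10 kW, per the post

GAL_TO_L = 3.785
JET_A_MJ_PER_L = 35.3                       # energy density of Jet-A fuel
burn_l_per_hr = 200 * GAL_TO_L              # ~757 L/hr (assumed)

# One hour of flight per day, averaged over a full 24-hour day:
energy_per_flight_J = burn_l_per_hr * JET_A_MJ_PER_L * 1e6
avg_power_W = energy_per_flight_J / 86_400

print(f"US per capita: {US_PER_CAPITA_W / 1000:.0f} kW")
print(f"1 hr/day of Learjet, day-averaged: {avg_power_W / 1000:.0f} kW")
```

With these assumed inputs the day-averaged figure lands around 300 kW, the same ballpark as the post's ~250 kW.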
Quote from: Robotbeat on 12/18/2025 09:26 pm
World average per capita energy consumption is 2500W primary energy. US per capita primary energy consumption is 4 times that, 10kW. If you spend an hour flying a Learjet per day, that's about 25 times that, 250kW. I think there's still quite a lot of room for global energy use to increase without more population growth. Waste heat will become a constraint even if it's all carbon neutral.

How much room? 10x? 100x? I'm OK with all those. The future is wild and unpredictable. It's the 100,000,000x I took issue with.

And yes, everything you said about energy is true. I figured PV comes out to roughly a 1:1 comparison with electrical generation, but all those differences don't make up even one order of magnitude, so I let it be.
Quote from: Twark_Main on 12/01/2025 06:19 pm
The idea, clearly, is that AI training tasks let you beam the value back to Earth (ie the finished trained model) without beaming the BTUs back to Earth (and vaporizing the planet, in the limiting growth case).

The problem is that training is only 10-20% of the compute load. Inference latencies can be non-trivial, but I'd guess that a half-second RTT is about all you can handle if you want to stay competitive. So you might be able to use some kind of sub-synchronous orbit, well away from both LEO and those pesky protons.
That sort of depends. I think some may end up being served directly from "regular" orbit Starlinks not on the terminator. That could reduce latency to sub-10 milliseconds.

Note I was looking at SOTA models, and they tend to have latencies from 0.3 to 4s to first token. It's not necessarily a problem to put them in MEO or GSO, or even out to lunar orbit.
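For scale, the physical latency floors behind those orbit choices are easy to sketch. The altitudes below are nominal round figures (assumptions); real systems add ground hops, inter-satellite links, queuing, and processing on top of this floor:

```python
# Minimum round-trip light time to satellites at various altitudes,
# assuming the satellite is directly overhead (best case). Altitudes
# are nominal round numbers, not figures from the thread.

C_KM_S = 299_792  # speed of light in vacuum, km/s

altitudes_km = {
    "LEO (Starlink-like)": 550,
    "MEO": 8_000,
    "GEO/GSO": 35_786,
    "Lunar distance": 384_400,
}

# RTT = up and back at light speed, converted to milliseconds
rtt_ms = {name: 2 * alt / C_KM_S * 1e3 for name, alt in altitudes_km.items()}

for name, ms in rtt_ms.items():
    print(f"{name:20s} {ms:8.1f} ms minimum RTT")
```

This is consistent with the posts above: a few milliseconds from LEO, ~0.24 s from GEO, and ~2.6 s from lunar distance, which fits inside the 0.3-4 s time-to-first-token budget of current large models.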
Quote from: Robotbeat on 12/18/2025 10:50 pm
That sort of depends. I think some may end up being served directly from "regular" orbit Starlinks not on the terminator. That could reduce latency to like sub 10 millisecond. Note I was looking at SOTA models and they tend to have latencies from 0.3 to 4s to first token. It's not necessarily a problem to put them in like MEO or GSO, or even out to lunar orbit.

How are you serving a customer from directly overhead? Unless every satellite has all the data needed, this does not work.
Depends on the model. Training can be 40% or more of total energy use, particularly if models become more sophisticated. And quite a bit of AI usage could tolerate seconds of latency, such as deep research or video/image generation. Short, low-latency prompts don't use much energy; really long prompts or extensive code generation do.

I think short prompts will continue to be served very near the point of use. But long research, "thinking" models, ensembles of multiple models, very long input and output, not to mention video and image generation, can already take minutes to produce results and use tremendous energy to do so, and those can be offloaded to beyond LEO.
Quote from: Vultur on 12/05/2025 05:57 am
Eh. I don't really agree, but we'll have to see what happens ... And it might be a couple decades before the answer is clear. What I'm more concerned about in the short term is an investment bubble burst bringing down space stuff along with everything else. This could happen *regardless* of the soundness of the underlying technology - the dot-com bubble of the late 90s wrecked private space projects then, though both the Internet in general and satellite internet specifically ultimately did work out.

I've attached a semi-log chart of historical internet traffic, in average petabytes per month, by year. Look around 2000, the time of the dot-com crash. There's not even a blip. That's because people were still building out hardware and using it pretty much up to its capacity, even though bad business models were getting wiped out.

I expect the same thing to happen with any putative AI bubble: there are plenty of good applications, and they'll simply take over the compute resources of the bad apps. Meanwhile, the software will get better and the applications more useful. The computronium market is going to remain solid for decades. If you deploy it, the models will come.
It's already happening:
https://www.geekwire.com/2025/starcloud-power-training-ai-space/
Internet use may not have dropped, but Teledesic and various reusable private launch attempts of the late 1990s *did* die. *That's* my concern regarding a bubble here, the economics of AI messing up the economics of space. (Especially if it affects the whole economy, which seems reasonably likely.)
Quote from: Vultur on 12/21/2025 03:01 am
Internet use may not have dropped, but Teledesic and various reusable private launch attempts of the late 1990s *did* die. *That's* my concern regarding a bubble here, the economics of AI messing up the economics of space. (Especially if it affects the whole economy, which seems reasonably likely.)

You've mangled the analogy. The internet bubble had nothing to do with space.
This (and similar) proposal is about a direct deployment pipeline from fab to site, fully under the deployer's control. Power's included, but it's not the cost of it, it's the availability of it.
Quote from: meekGee on 12/22/2025 08:56 pm
This (and similar) proposal is about a direct deployment pipeline from fab to site, fully under the deployer's control. Power's included, but it's not the cost of it, it's the availability of it.

I seem to recall something about supply/demand curves in Econ 101 class. Are you saying the price won't go up if the demand is far higher than the availability?
No, what I'm saying is this sentence makes no sense to me:

Quote from: meekGee on 12/22/2025 08:56 pm
This (and similar) proposal is about a direct deployment pipeline from fab to site, fully under the deployer's control. Power's included, but it's not the cost of it, it's the availability of it.

If there's no availability of power, and there's demand, the cost goes up until either the demand drops or the availability increases because someone built more power plants (and natural gas wells, etc etc).

Statically analyzing the current cost of compute vs. power also makes no sense to me. If power triples in cost, the ratio changes. Someone will either further optimize the power usage of that compute (hello Apple and their power-sipping ARM CPUs with neural-net accelerators), build more power plants, or the ROI will be so amazing for the rich that babies will starve (the "elastic" in "elastic demand" can sometimes mean the innocent bear the cost). Oh, and there's the possibility of an AI market crash, because the ROI is terrible if power costs triple. Ask Germany about that with their $0.40/kWh "green" energy.

When you build AI in space, you increase the supply of power (though at a higher cost of hardware), and therefore your power costs get capped both on Earth and in space. Somewhere in the fuzzy middle we hope the babies don't starve, I get 1-second responses on 200K-token contexts for my code agent (for example), and everyone is mostly happy.

In general in human history, the more power we can utilize, the less starvation we have and the better the civilization; let's hope the trend for more power continues. If that means more power is used in space, so much the better. We get lots of beautiful land with no ugly windmills, solar panels, steam-belching nuke plants, or smoke-belching coal plants.
Quote from: meekGee on 12/22/2025 08:56 pm
The "power" play is mis-named. It's not about providing cheaper power, since ...the [compute] hardware is the most [expensive] part

Bad reasoning. That would be like saying "the inventor of the water-powered car [sic] doesn't budget much money for gas, so saving money on gas can't be the advantage." The real question is how much a terrestrial data center would need to pay for that same power and cooling, over its lifespan.

Quote from: meekGee on 12/22/2025 08:56 pm
The "power" play is mis-named. It's not about providing cheaper power, since...it'll get more expensive when made space-worthy.

I wouldn't take it for granted that this will always be true.

Most of the cost of terrestrial solar isn't the PV cells. It's the wind-loading structure. This requires unsexy yet costly components like strong metal frames, cover glass, steel mounting hardware, and concrete footers. Plus the installation and/or assembly of all that (on-site work being especially costly).

Space-based power can delete or greatly downsize all those parts. The panels are folded during launch, so you can carry that load without large structures. You use the same modern high-efficiency silicon PV modules that are used in terrestrial solar (a la Starlink). Your biggest loads occur when it's still compact and folded, so carrying even those largest loads requires a lot less mass. There's zero on-site labor or custom engineering work; it "installs" (deploys) itself autonomously after launch.

In theory, given attainably cheap launch prices, there's nothing that says 1 watt of PV capacity in space can't be cheaper than 1 watt of terrestrial PV.

On top of that, of course, you need ~5x fewer panels and zero batteries to achieve 24-hour power. So space PV doesn't even require cost parity.
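The "~5x fewer panels" claim falls out of capacity factors. A minimal sketch, where both capacity factors are assumed illustrative values rather than figures from the post:

```python
# Panels needed for the same *average* delivered power, terrestrial vs.
# a near-continuously-lit orbit. Both figures are ASSUMPTIONS: ~20% is a
# typical terrestrial solar capacity factor (varies with site and
# weather), and ~99% represents a dawn-dusk sun-synchronous orbit with
# rare eclipses.

CF_TERRESTRIAL = 0.20   # assumed terrestrial capacity factor
CF_SPACE = 0.99         # assumed orbital capacity factor

panel_ratio = CF_SPACE / CF_TERRESTRIAL  # ~5x

print(f"Terrestrial solar needs ~{panel_ratio:.1f}x the panel capacity "
      "(plus batteries) to match orbital average power")
```

That ratio is before counting battery losses and battery capital cost on the terrestrial side, which is why the post argues space PV doesn't need per-watt cost parity to compete.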
I thought we did the math early on, and using terrestrial power cost numbers, power didn't come close to the cost of the hardware. (My argument was that because of this, even if space power were free, it still didn't justify going to orbit.)

And yeah, OK, maybe space hardware will become cheaper, but it's a race: more radiators, cost of launch, and cost of no-maintenance vs. mass reduction and maybe other reductions I can think of. That's why for the longest time I thought this idea doesn't hold water financially.

Then came the understanding that it's not about cost. For all the King's ransom, nobody can give you 100 GW/yr. The hyperscalers don't have a solution for that currently.
This is probably a lot about Musk needing to create a market for space launch, as he did with StarLink, in order to justify more launches and bring down the cost to orbit per kilo.
There's a whole list of things this avoids. But it's still amazing to think it might actually be economical.
https://spacenews.com/space-force-offers-new-vandenberg-launch-site/

On cue, a Vandenberg RFI for a new SLC-14, with details that pretty much only match Starship. A full Starship launch complex at Vandenberg is a necessity for launching Sun-synchronous-orbit AI sats at scale, and would also support Starlink/Starshield V3.

Starbase TX and KSC/Canaveral each have their own factory complex building Starship/Super Heavy. How would it work at Vandenberg?