Quote from: Vultur on 12/15/2025 05:34 am
If one takes the existence of a bubble as a fact, and I pretty much do, bursting sooner is better, because it means less wealth erased and fewer plans committed to things that won't work out.

A bubble doesn't mean the technology is useless. The Internet was a bubble in the 1990s. US railroads were a bubble that burst in 1873.

"AI" is a very broad term. If "AI" in some sense is an eventual huge success, but with a technology totally different from LLMs (and possibly far less energy-hungry), much of the current investment, hardware, etc. may end up wasted.

The market leaders, if they were based on technology rather than hype, survive. Stocks take a couple of years to recover, but the failure of the smaller players makes it easier for them to become even more dominant.

Like I said before: money already raised is already raised, and revenue will be generally unaffected. It's not like people will stop talking on their phones or using the web.

You worry too much. It's not healthy.
Quote from: meekGee on 12/13/2025 02:19 pm
Wouldn't it be better to hope that AI doesn't experience a bubble, or that the bubble is only a consolidation that doesn't affect the overall market size? I mean, why is AI a distraction? It seems at the very least a very fundamental technology.

If one takes the existence of a bubble as a fact, and I pretty much do, bursting sooner is better, because it means less wealth erased and fewer plans committed to things that won't work out.

A bubble doesn't mean the technology is useless. The Internet was a bubble in the 1990s. US railroads were a bubble that burst in 1873.

"AI" is a very broad term. If "AI" in some sense is an eventual huge success, but with a technology totally different from LLMs (and possibly far less energy-hungry), much of the current investment, hardware, etc. may end up wasted.
Quote from: Crispy on 12/12/2025 10:57 am
Not to mention that silicon chips need some of the most complex manufacturing processes ever invented. Multiple stages with exotic chemicals, machines, and conditions. It's hard enough on Earth!

I actually hope the AI bubble pops before SpaceX gets a chance to IPO. It's a massive distraction.

Wouldn't it be better to hope that AI doesn't experience a bubble, or that the bubble is only a consolidation that doesn't affect the overall market size? I mean, why is AI a distraction? It seems at the very least a very fundamental technology.
Not to mention that silicon chips need some of the most complex manufacturing processes ever invented. Multiple stages with exotic chemicals, machines, and conditions. It's hard enough on Earth!

I actually hope the AI bubble pops before SpaceX gets a chance to IPO. It's a massive distraction.
You could be right, a "soft" bubble burst is a plausible scenario. But the other outcome is also possible, where revenue *does* crash because most of it is circular financing rather than real customer demand, and there simply aren't enough *paying* customers to make ends meet: the scenario where it turns out tons of people will use AI when it's free, but not nearly enough will pay high prices for it.

(Unless you mean SpaceX's current Starlink revenue. That I agree is safe. The only risk is anything specific to AI hardware.)
Quote from: Vultur on 12/15/2025 01:50 pm
You could be right, a "soft" bubble burst is a plausible scenario. But the other outcome is also possible, where revenue *does* crash because most of it is circular financing rather than real customer demand, and there simply aren't enough *paying* customers to make ends meet: the scenario where it turns out tons of people will use AI when it's free, but not nearly enough will pay high prices for it.

I read so many comments to the effect that there aren't many paying customers for AI yet, so a crash is imminent, that I wonder what I am missing.

OpenAI's revenue is projected to be $15-20 billion for 2025. That's very far from "tons of people will use AI when it's free, but there simply aren't enough *paying* customers."
Quote from: Vultur on 12/15/2025 05:22 am
I am very skeptical that uploading minds is even theoretically possible, even with arbitrarily advanced technology. The brain doesn't store information the way an electronic computer does; how would you get all the information out without destroying the 3D structure that is key to storing it?

All I can do is live in hope, lol. I really wouldn't be worried about which version of me lives on if it became a possibility.
Quote from: seb21051 on 12/15/2025 01:22 am
What I would give to be able to be a fly on the wall for the next 200 years. Neuralink needs to get its Psyche-Upload-To-The-Cloud service launched.

I am very skeptical that uploading minds is even theoretically possible, even with arbitrarily advanced technology. The brain doesn't store information the way an electronic computer does; how would you get all the information out without destroying the 3D structure that is key to storing it? I think you'd need something like Star Trek scanners, which probably aren't physically possible.

(Anyway, even if it were possible, you'd hit the Star Trek transporter problem: it's a copy of you, not *you*. It's not immortality, just a nonbiological form of reproduction, a sort of mental cloning.)
What I would give to be able to be a fly on the wall for the next 200 years. Neuralink needs to get its Psyche-Upload-To-The-Cloud service launched.
Quote from: meekGee on 12/15/2025 12:16 pm
The market leaders, if they were based on technology rather than hype, survive. Stocks take a couple of years to recover, but the failure of the smaller players makes it easier to become even more dominant. Like I said before: money already raised is already raised, and revenue will be generally unaffected. It's not like people will stop talking on their phones or using the web. You worry too much. It's not healthy.

You could be right, a "soft" bubble burst is a plausible scenario. But the other outcome is also possible, where revenue *does* crash because most of it is circular financing rather than real customer demand, and there simply aren't enough *paying* customers to make ends meet: the scenario where it turns out tons of people will use AI when it's free, but not nearly enough will pay high prices for it.

(Unless you mean SpaceX's current Starlink revenue. That I agree is safe. The only risk is anything specific to AI hardware.)
That makes sense; I'd just rather not see them moving up that ladder of risk.
Summarizing, there seem to be two main arguments in favor of space-based AI compute:

1. Fewer PV panels and batteries needed vs. terrestrial solar. Because AI requires so much power 24/7, this is actually a big deal.
2. It doesn't deplete the (large but still finite) anthropogenic waste heat rejection capacity of the surface of the Earth. This is the "if you tried to beam the Sun's power to Earth, the planet would melt" reason.

#1 is the short-term reason, and it's the reason why Musk says space-based AI will be the cheapest option in 2-3 years.

#2 is the long-term reason, and it doesn't affect current economics because we don't have a Joule Tax yet. This would be like a coal company planning around a Carbon Tax in 1890. It's too early.

And yes, before someone says it, I'm aware that PV-powered chips don't change the Earth's radiant power balance. But they do still "leach" exergy (aka "useful work") from the biosphere, agriculture, and other industrial processes. There is a finite (but large) limit on the amount of available exergy that can be siphoned off from the Earth's total exergy budget before those other systems start being starved of useful work, due to unavoidable thermodynamic constraints.

Hopefully we all understand thermodynamics well enough that we can skip the back-and-forth about entropy vs. exergy vs. energy.
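Argument #1 can be made concrete with a rough sizing sketch. The specific numbers below (25% terrestrial capacity factor, 12 hours of overnight storage, ~99% sunlight fraction in a well-chosen orbit) are my own illustrative assumptions, not figures from the post:

```python
# Rough sizing sketch for argument #1: delivering 1 GW continuously.
# All capacity factors here are illustrative assumptions, not measured data.

load_gw = 1.0  # continuous load the datacenter draws

# Terrestrial solar: ~25% capacity factor means the array must be
# overbuilt ~4x, and batteries must carry roughly a night's worth of load.
terrestrial_pv_gw = load_gw / 0.25       # 4 GW of panels
battery_gwh = load_gw * 12               # ~12 GWh of storage

# Orbital solar: assume ~99% sunlight fraction in a well-chosen orbit,
# so the array is barely oversized and storage is nearly unnecessary.
orbital_pv_gw = load_gw / 0.99           # ~1.01 GW of panels

print(f"terrestrial: {terrestrial_pv_gw:.1f} GW PV + {battery_gwh:.0f} GWh batteries")
print(f"orbital:     {orbital_pv_gw:.2f} GW PV, ~no batteries")
```

The 4x panel overbuild plus storage is where the "fewer PV and batteries" advantage comes from, before any launch or radiator costs are counted against it.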
Quote from: Twark_Main on 12/15/2025 05:16 pm
Summarizing, there seem to be two main arguments in favor of space-based AI compute...

#2 is true in theory but is sooooo far away that other mechanisms will kick in first. You're talking about directly influencing the energy balance of the planet, not just messing with greenhouse gases.

Starship: 1-100 GWatt/yr (reasonable estimate)
Lunar: 0.1-10 TWatt/yr (very handwavy)
Earth energy uptake: ~120,000 TWatt

Until then, it's mostly reason #1 and some other related ones.
So... If we install 10 TWatt/yr over 10 millennia, we'll have ourselves a competition.
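A quick check of the quip, using the figures quoted above (10 TWatt/yr of new capacity, ~120,000 TWatt of Earth energy uptake):

```python
# At a constant 10 TW of new capacity per year, how long until installed
# power rivals the ~120,000 TW Earth absorbs from the Sun?
install_rate_tw_per_yr = 10
earth_uptake_tw = 120_000

years = earth_uptake_tw / install_rate_tw_per_yr
print(years)  # 12000.0, i.e. on the order of ten millennia
```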
Quote from: meekGee on 12/15/2025 07:46 pm
#2 is true in theory but is sooooo far away that other mechanisms will kick in first...

Indeed! I thought I made that clear (in fact, pointing out this distinction was my main reason for posting), but it's always good to reiterate and rephrase the point:

"#2... would be like a coal company planning around a Carbon Tax in 1890. It's too early."

"(large but still finite) anthropogenic... heat capacity"

We may wish that fossil fuel companies had had the foresight to address global warming from the start. Well, Tesla is doing exactly what we might wish, but the popular reaction is instead to ridicule the fix as "uneconomical."

Fortunately, reason #1 is enough to make space-based AI economical even today. But you gotta give points for forward thinking...

Quote from: meekGee on 12/15/2025 07:46 pm
So... If we install 10 TWatt/yr over 10 millennia, we'll have ourselves a competition.

Growth is exponential, more like 2% CAGR.

Thanks. A rare opportunity to use "exponential" in its real math meaning instead of "very big."
Findings and Implications

Here's the headline result: it's not obviously stupid, and it's not a sure thing. It's actually more reasonable than my intuition suggested! If you run the numbers honestly, the physics doesn't immediately kill it, but the economics are savage. It only gets within striking distance under aggressive assumptions, and the list of organizations positioned to even try that is basically one.

That "basically one" point matters. This isn't about talent. It's about integration. If you have to buy launch, buy buses, buy power hardware, buy deployment, and pay margin at every interface, you never get there. The margin stack and the mass tax eat you alive. Vertical integration isn't a nice-to-have. It's the whole ballgame.
Solid snapshot analysis, but I think it omits the main point. The stated assumptions include, "No adjustments for permitting or regulatory delay." But that's the whole reason** to build in space. My analysis also shows it's not economically viable today, and it is the regulatory-delay dynamic that flips it in about 10 years. With the SpaceX architecture of distributed compute built on Starlink, I think Elon's 3-year estimate is also quite viable, again precisely due to the regulatory-delay difference.

**The ultimate reason to build in space is to preserve the environment of Earth, but the way that decision-making gets implemented in our democratic society is through the political-regulatory system. There is no other dynamic to capture this. As damage to the environment increases and voters notice, regulation increases the delay costs, the taxes, and the restrictions on implementing the industry, which creates the economic conditions that result in a different choice. If you leave this dynamic out of a predictive model, you have left out the motive to go to space.

Note that Elon didn't say "cheapest." He said "fastest." We have to include the differences in speed and the costs that accrue from delay to see how space is cheaper.
Quote from: Twark_Main on 12/15/2025 08:00 pm
Growth is exponential, more like 2% CAGR... A rare opportunity to use "exponential" in its real math meaning instead of "very big."

Awright!

Meanwhile, double-checking: start with 1 GWatt/yr and grow by 10% every year, and getting to 100,000 TWatt takes about 170 years. So yup, about the same time as from the industrial revolution to now.
Though, chat argues that worldwide energy use, since 1800, only grew by a factor of 30, not 100,000,000...(I pushed back a bit but he's defending that 30x pretty well.)
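The 30x figure and the 10%/yr toy model can be reconciled by comparing implied growth rates (the 1800 baseline and the 30x factor are taken from the post; the rest is arithmetic):

```python
# Compare the growth rate implied by "30x since 1800" against the
# 10%/yr toy model from the earlier post.
years = 2025 - 1800   # ~225 years since the baseline

# Annual growth rate implied by a 30x increase over that span.
cagr_30x = 30 ** (1 / years) - 1      # roughly 1.5% per year

# Growth factor that sustained 10%/yr would have produced instead.
factor_10pct = 1.10 ** years          # on the order of a billion-fold

print(f"30x over {years} years implies {cagr_30x:.2%} per year")
print(f"10% per year over {years} years implies a {factor_10pct:.1e}x factor")
```

So the historical record is closer to the ~2% CAGR mentioned above, and sustained 10%/yr growth is far outside anything energy use has actually done.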
Your brain and mine require something like 20 to 25 W of power. I don't think that the current models of AI capture what's really going on (they miss by many orders of magnitude, even correcting for analog-to-digital conversion). I don't know the correct algorithm, but I do know that this isn't it. The current algorithms *might* be equivalent to the retina, but that's about it.