The idea that there is any advantage in putting AI in orbit or on the moon is so stupid, I'm wondering if Elon is just trying to make up an economic justification for having a moon colony. I'd be concerned if I thought he really believed this.
It's an open question: does Musk's vision for Moonbase Alpha imply he thinks civilization/consciousness might need a backup sooner than a Mars settlement could become self-sufficient? Assuming a global collapse of civilization on Earth, could a lunar AGI help restore some semblance of what we have today?
Yes, it is a little hard to follow. But he was pretty clear in his tweets today: #2 only. Solar cells and radiators should be relatively easy to make with only slightly modified machines from Earth; these are not among the more difficult manufacturing challenges.

I would have thought this was at least 10-15 years out, because terrestrial solar seems to have a ways to run before community rejection kicks in. But AI scaling is becoming eye-watering. Maybe $1 trillion in spending in each of the next half dozen years?

Quote from: Peter Hague @peterrhague · 8h
At the time I posted this, Elon viewed the Moon and its resources as a distraction from the Mars mission. But now he is talking about 100TW/year solar power using lunar resources as a long term goal.

This would, going on my beermat calculations (cheap 12% efficient cells, 350g/square metre), require about 210 million tonnes of silicon per year. Getting this from lunar silica, you would yield about 240 million tonnes of oxygen per year as a byproduct. You could vent >99% of this into space and still have more than you could possibly use for the Mars fleet.

Now it would obviously be foolish to put this on the critical path for Mars - that will have to use Earth-launched oxygen until lunar industry ramps up, and should not wait - but long term, it would mean a Starship to Mars would only require ~1 tanker instead of ~6. Moon, Mars, asteroids etc. are complementary goals, not opposed ones. Developing one helps the others.

Quote from: Elon Musk @elonmusk · 5h
Scaling AI is what changes the equation
https://x.com/elonmusk/status/1985743650064908694
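Hague's beermat numbers are easy to check from first principles. A quick sanity check in Python, where the solar flux value is my own assumption (he only states cell efficiency and areal density):

```python
# Sanity check of Peter Hague's "beermat" numbers for 100 TW of lunar solar.
# The efficiency and areal density are quoted from his post; the solar
# constant is my assumption, since his exact inputs aren't stated.
SOLAR_FLUX = 1361.0        # W/m^2, solar constant at 1 AU (assumed)
EFFICIENCY = 0.12          # quoted: "cheap 12% efficient cells"
AREAL_DENSITY = 0.350      # kg/m^2, quoted: "350g/square metre"
TARGET_POWER = 100e12      # W, the 100 TW figure

area_m2 = TARGET_POWER / (SOLAR_FLUX * EFFICIENCY)   # collecting area needed
silicon_tonnes = area_m2 * AREAL_DENSITY / 1000      # kg -> tonnes

# Reducing lunar silica (SiO2) to Si frees two O per Si:
# 32.00 g of O2 per 28.09 g of Si.
oxygen_tonnes = silicon_tonnes * (32.00 / 28.09)

print(f"array area: {area_m2 / 1e6:,.0f} km^2")
print(f"silicon:    {silicon_tonnes / 1e6:,.0f} million tonnes")
print(f"oxygen:     {oxygen_tonnes / 1e6:,.0f} million tonnes")
```

This lands within a few percent of his 210 Mt silicon and 240 Mt oxygen figures, so the byproduct-oxygen claim is at least internally consistent.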
But I've learned over the years that when Musk says stuff, even if it's counter-intuitive to me, I should listen. He's not right 100% of the time, but he's up there above 90, which is not bad for a visionary.
However, I think AI-type things are one of his specific weak points. I still remember 'we will have total self driving in 3 years, the regulations just need to catch up' from 2015. In my experience, Musk is historically overoptimistic about the near-term potential of AI/robotics/etc.
Quote from: meekGee on 11/04/2025 03:59 pm
But I've learned over the years that when Musk says stuff, even if counter-intuitive to me, I should listen. He's not right 100% of the time, but he's up there above 90, which is not bad for a visionary.

However, I think AI-type things are one of his specific weak points. I still remember 'we will have total self driving in 3 years, the regulations just need to catch up' from 2015. In my experience, Musk is historically overoptimistic about the near-term potential of AI/robotics/etc.

This is significantly different from rocket and satellite stuff, which is far more predictable. With the exception of a few questions like supersonic retropropulsion, it was clear long ago (since at least DC-X/Masten) that a system like F9 could be built; the question was instead whether the market demand would be there to justify it. Similar for Starlink - I don't think anyone questioned the technical possibility of such a system, only its finances. It is, in contrast, far from clear whether there is any role for LLM-style AI (as distinctly opposed to specialized systems like AlphaFold) that justifies trillion-dollar valuations. Google has, IMO, gotten clearly worse as it has integrated LLMs.

Quote
this whole idea is based on the premise that "thinking interfaces" will be everywhere. Every happy door, every smart elevator, every empath coffee maker.

Why would you want that, though? What's the advantage? Specialized tools (like a door or a coffee maker) don't need a generalized LLM-type AI. A door just needs to open/close; a coffee maker just needs to make coffee. Even remote operation (turn on the coffee warmer from your phone) doesn't need that.
Quote from: Vultur on 11/04/2025 09:22 pm
[...] Specialized tools (like a door or a coffee maker) don't need a generalized LLM-type AI. A door just needs to open/close; a coffee maker just needs to make coffee. Even remote operation (turn on the coffee warmer from your phone) doesn't need that.

Personally? I'd love conversational AI-based interfaces instead of rigid algorithmic ones that can't deal with any input that's not precisely structured.
Quote from: meekGee on 11/04/2025 09:52 pm
[...] Personally? I'd love conversational AI-based interfaces instead of rigid algorithmic ones that can't deal with any input that's not precisely structured.

IDK. For computers / smartphones, maybe*. But for household objects like doors and coffee makers? I see zero advantage.

I feel like this is kind of a lose-lose situation. If it IS a bubble, the economy is messed up. If it's NOT a bubble, a lot of jobs get lost with no clear replacement, and people get more and more used to getting quick, easy answers from very "black box" tech whose hidden biases & filters are not at all visible to the user and can be changed by the company controlling it. I don't see what benefit outweighing all this (to people in general, not to the company's valuation) is supposed to exist.

*And even then, there's a *huge* risk of "too human appearing" conversational tech messing up younger or psychologically vulnerable people. I personally tend to think this issue makes conversational AI a net negative in terms of human well-being, even if it's a money maker.

But this is probably well off topic. My point was about a possible bubble burst destroying the funds for Mars, not really about the value of AI in general. (A technology can be very useful and *still* have a bubble early on; the Internet did.)
Quote from: thespacecow on 11/04/2025 01:16 am
Starlink is already aiming for 1 Gbps for consumers, and in LEO the latency is comparable to a terrestrial network.

Just one quick comment, and then I'm going to read the document you linked to. I'm talking about the latency inside the data center, since that determines how long it takes parallel operations to synchronize. Even inference amounts to performing a monstrous matrix multiplication for each layer of the net. To do that efficiently in parallel, you want all the processors involved to be as close to each other as possible.

But that raises another question: if you're going to need 100 TW just for inference, does that mean you also need 1,000 TW to train the system? Or is the idea that AI would somehow be finished, so no one would need to train new networks?
Finally, all that expense isn't going to get you the "magical AI" that people seem to be imagining. No more than if you tried to make a rocket ship by scaling up a truck to the size of a city.
There are different projections on the future of AI, but this whole idea is based on the premise that "thinking interfaces" will be everywhere. Every happy door, every smart elevator, every empath coffee maker.
Quote from: meekGee on 11/04/2025 03:59 pm
There are different projections on the future of AI, but this whole idea is based on the premise that "thinking interfaces" will be everywhere. Every happy door, every smart elevator, every empath coffee maker.

Who's asking for that? This is literally a satirical bit from Douglas Adams.

"Here I am, brain the size of a planet, and they ask me to take you up to the bridge. Call that job satisfaction? 'Cause I don't."

Or: "All the doors in this spacecraft have a cheerful and sunny disposition. It is their pleasure to open for you and their satisfaction to close again with the knowledge of a job well done."

...Honestly sounds like an awful future. "You know that smart device you didn't want to buy but were forced to anyway? Well, now you need to maintain a complex emotional relationship with it, too."
Quote from: Vultur on 11/05/2025 04:04 am
[...] But this is probably well off topic. My point was about a possible bubble burst destroying the funds for Mars, not really about the value of AI in general. (A technology can be very useful and *still* have a bubble early on; the Internet did.)

A lot of people here are dumping on AI since it failed them as a search engine.
Quote from: meekGee on 11/05/2025 03:15 pm
A lot of people here are dumping on AI since it failed them as a search engine.

That's not my main position. (I do hate Google's integration of AI into search, and do think it is part of why Google today is a significantly worse product than it was 5-6 years ago; but a bad use of a technology is not in itself an argument that the technology as a whole is bad.) My comment about "conversational" computers/technology was that it's very likely *bad for people*, at least young people and psychologically vulnerable people.

We're already seeing this problem with existing chat bots; if conversational intelligence gets into everything, it'd be far more prevalent (and less controllable). "Revolutionize computer-human interaction" is not necessarily a good thing if the new form of interaction is unhealthy. The level of investment in some AI companies is also troubling economically.

But this is probably off topic; I just wanted to clarify what I was (and wasn't) saying.
Quote from: Vultur on 11/05/2025 07:16 pm
[...] "Revolutionize computer-human interaction" is not necessarily a good thing if the new form of interaction is unhealthy. The level of investment in some AI companies is also troubling economically. But this is probably off topic, just wanted to clarify what I was (and wasn't) saying.

It's a good conversation, and in the few weeks remaining before the AIpocalypse (!!), why not...

Clearly I'm the optimist in the room. My kid (hell, myself too) had all manner of dumbshit friends that were bad for us. We lived.
Anyway, I don't think it's stoppable any more than the Internet, television, or the industrial revolution were.
I have to admit the last couple of weeks caught me unprepared. Last time I checked, AI was hard-limited by available power and that was that. I didn't think orbital AI would make any sense or provide a solution, but clearly I was wrong.
I was hoping that AI would boost adoption of nuclear power, and I think that's still happening.
The topic of this thread, that's very much open. I am still not clear what timeline Musk sees before he's doing any of that. I expect some new slides in the next stateX presentation, and I betcha it'll be the most-watched one ever...
Quote from: meekGee on 11/05/2025 07:28 pm
I have to admit the last couple of weeks caught me unprepared. Last time I checked, AI was hard-limited by available power, and that was that. I didn't think orbital AI would make any sense or provide a solution, but clearly I was wrong.

Eh, it could be yet ANOTHER bubble technology. Just like Elon Musk "leaning" into the Moon could be a bubble issue, since it only came after he/SpaceX was (wrongly, in my opinion) dumped on by the person currently running NASA. Once someone sane starts running NASA, maybe Musk goes back to not being too interested in the Moon beyond his current NASA contracts.

Data centers in space could be yet another technology wishlist item, like fusion power or He3 mining on the Moon. Power and scalability are the hot topics, but only because of the potentially irrational plans AI companies + investors have. If those plans don't pan out, then power and scalability are less of a problem, and potentially solvable by scaling up solar & wind and installing more storage. But color me skeptical that Musk will have a long-term interest in our Moon...

Quote
I was hoping that AI would boost adoption of nuclear power, and I think that's still happening.

Nuclear power has had such an interesting history, but even though it now has more support from people who previously did not support it, I don't think the fundamental issues with nuclear power construction have been solved. I know there are new, funded technologies trying to address this (e.g. small modular reactors), but it is still too early to know if they truly solve the scalability and cost issues.

Quote
The topic of this thread, that's very much open. I am still not clear what timeline Musk sees before he's doing any of that. I expect some new slides in the next stateX presentation, and I betcha it'll be the most watched one ever...

Yeah, as I previously stated, this may be a short-term interest for Musk, at least through the rest of the Trump term in office. After that, not so sure.
Quote from: Greg Hullender on 11/04/2025 03:30 pm
[...] I'm talking about the latency inside the data center, since that determines how long it takes parallel operations to synchronize. Even inference amounts to performing a monstrous matrix multiplication for each layer of the net. To do that efficiently in parallel, you want all the processors involved to be as close to each other as possible.

But that raises another question: if you're going to need 100 TW just for inference, does that mean you also need 1,000 TW to train the system? Or is the idea that AI would somehow be finished, so no one would need to train new networks?

@hplan already answered some of this regarding inference: a single inference run needs far, far less compute than a training run, so it can be localized inside a single satellite. At the same time, the total compute demand for inference is a lot more than for training (I mean, how else can AI companies make money? Running inference is how they serve customers); the current estimate is that 80-90% of compute demand is for inference: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/. But it's possible Elon sees much more demand for inference than this, since he envisions AI running everything, like creating the GUI on your phone or the graphics for a game.

Also see Google's new in-space AI infrastructure paper, which covers terabit laser links between satellites, rad tolerance of TPUs, and economic feasibility studies: https://services.google.com/fh/files/misc/suncatcher_paper.pdf
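For what the 80-90% split implies about the "1,000 TW to train" worry upthread: if inference really is the large majority of compute demand, the training fleet comes out an order of magnitude *smaller* than the inference fleet, not ten times larger. Rough arithmetic, taking the cited 80-90% range at face value and the thread's 100 TW figure as given:

```python
# Back-of-envelope: training power implied by an inference-dominated fleet,
# assuming the cited estimate that 80-90% of AI compute demand is inference.
inference_power_tw = 100.0  # the thread's 100 TW inference figure

for inference_share in (0.80, 0.85, 0.90):
    total_tw = inference_power_tw / inference_share
    training_tw = total_tw - inference_power_tw
    print(f"inference share {inference_share:.0%}: training ~{training_tw:.0f} TW")
```

So roughly 11-25 TW of training demand alongside 100 TW of inference, under this estimate; a race to ever-bigger frontier models could of course change the ratio.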