Elon has spoken about inference tasks, with the reason being that a short delay is acceptable
The idea, clearly, is that AI training tasks let you beam the value back to Earth (ie the finished trained model) without beaming the BTUs back to Earth (and vaporizing the planet, in the limiting growth case).
Quote from: leovinus on 12/01/2025 06:44 pm
Sounds like the 1930s predictions of “gazillions of airships and zeppelins”. As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct

Yep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?
Sounds like the 1930s predictions of “gazillions of airships and zeppelins”. As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct
Quote from: DigitalMan on 12/01/2025 06:35 pm
Elon has spoken about inference tasks, with the reason being that a short delay is acceptable

Other way around. Inference requires low latency. Training (which can take months) does not.
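For a sense of the actual numbers behind this latency debate, a quick light-travel sketch. The 650 km altitude matches the constellation altitude discussed later in the thread; the horizon geometry is just standard spherical-Earth arithmetic, not a figure from any post here:

```python
import math

C = 299_792.458   # speed of light, km/s
R = 6378.137      # Earth equatorial radius, km
h = 650.0         # satellite altitude, km (altitude discussed later in the thread)

def one_way_delay_ms(distance_km):
    """One-way light-travel time in milliseconds."""
    return distance_km / C * 1000.0

overhead = h                              # satellite directly overhead
horizon = math.sqrt((R + h)**2 - R**2)    # slant range to the geometric horizon

print(f"overhead: {one_way_delay_ms(overhead):.1f} ms one way")
print(f"near horizon: {one_way_delay_ms(horizon):.1f} ms one way")
# Either way it is single-digit milliseconds: negligible against a training
# run measured in weeks, and small even for interactive inference.
```

Propagation delay to LEO is milliseconds either way, so the real latency cost of orbital compute would come from routing, queuing, and ground-segment hops rather than from the physics of the up/down link.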
Quote from: DanClemmensen on 12/01/2025 06:55 pm
Quote from: leovinus on 12/01/2025 06:44 pm
Sounds like the 1930s predictions of “gazillions of airships and zeppelins”. As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct
Yep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?

The flip side is that miniaturization and the diminishing power draw of compute did not make the semiconductor industry shrink, but instead enabled more applications. I have no doubt that the cost of a single inference will go down with time. The question is just how pervasive AI will get.
Quote from: meekGee on 12/01/2025 10:17 pm
Quote from: DanClemmensen on 12/01/2025 06:55 pm
Quote from: leovinus on 12/01/2025 06:44 pm
Sounds like the 1930s predictions of “gazillions of airships and zeppelins”. As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct
Yep. Remember the huge render farms that were needed in the year 2000 to create CGI that we would now consider barely adequate?
The flip side is that miniaturization and the diminishing power draw of compute did not make the semiconductor industry shrink, but instead enabled more applications. I have no doubt that the cost of a single inference will go down with time. The question is just how pervasive AI will get.

I personally believe there is a fairly hard upper limit on demand. There are only so many people in the world, expected to peak somewhere in the general neighborhood of 10B sometime in the second half of this century, and there is no immediate prospect of getting everyone on Earth fully connected into the information economy either. I tend to think that 20-30 years from now the "wave of the future" will be in stuff that has very little to do with computers or IT. Nearly every technology follows a sort of S-curve: slow start, rapid growth, plateau.
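The growth pattern named at the end of the post above (slow start, rapid growth, plateau) is the standard logistic S-curve. A minimal sketch, with arbitrary illustrative steepness and midpoint parameters:

```python
import math

def logistic(t, k=1.0, t0=0.0, cap=1.0):
    """Classic S-curve: slow start, rapid growth around t0, plateau at `cap`."""
    return cap / (1.0 + math.exp(-k * (t - t0)))

# Early on growth is nearly flat, around t0 it is fastest, then it saturates.
for t in (-6, 0, 6):
    print(t, round(logistic(t), 3))
```

The "hard upper limit on demand" argument is essentially a claim about where `cap` sits, independent of how steep the current growth phase looks.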
While talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.
Quote from: leovinus on 12/01/2025 06:44 pm
Sounds like the 1930s predictions of “gazillions of airships and zeppelins”. As someone who trained and optimized AI/ML models which are in the hands of millions, my money is on better algorithms and hardware which will make a trillion in market cap go “poof”. Just my 2ct

That will be fun to watch... Billions in investment under the assumption that inference cannot be performed more efficiently. At some point somebody will break through and "poof".
Quote from: steveleach on 12/02/2025 07:41 am
While talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.

Exactly. AI is not just LLMs and video monitoring. Just a few years ago what you see today was strictly Sci-Fi, and already people think they can see all the way to the far wall... "The world market can accommodate at least 5 computers", was it?
Quote from: meekGee on 12/02/2025 08:12 am
Quote from: steveleach on 12/02/2025 07:41 am
While talented people are using AI to do amazing things, the overwhelming majority of current AI output is low quality garbage. I can't believe that the post-hype future of AI is going to be just a scaled-up version of what we have now.
Exactly. AI is not just LLMs and video monitoring. Just a few years ago what you see today was strictly Sci-Fi, and already people think they can see all the way to the far wall... "The world market can accommodate at least 5 computers", was it?

I don't think it is analogous to that at all. At the time that quote was said, computers were a super extreme niche thing. AI is already all over the Internet (e.g. every Google search pulls up an AI Overview). Also, even if it were analogous, there's an upper limit on the use of computers too. You could maybe get 10x the number of computerized devices we have now (if every person on Earth had a smartphone, smart watch, smart home, etc.) but not 100x (there just aren't going to be enough people to use that many devices).

I am not claiming that current AI use is near peak (though it could be, if much of the current use is "artificial" demand, like AI Overviews on every Google search, which wouldn't be used once the true costs are passed on). I am only claiming that the peak is probably well below terawatt-level power use.
Quote from: meekGee on 12/03/2025 12:40 am
So fast forward to AI. You think the type of tasks it does is exhausted,

No, that's not what I'm saying at all, here. I'm saying that even if tons of new applications are invented, there's still a fairly hard upper limit, which is ultimately more or less set by the number of "technologically connected" people. And that hard limit, at least in the next century or so, is probably below terawatt-level use. Especially with efficiency improvements. And I think fairly dramatic efficiency improvements are ultimately necessary to make widespread use of AI cost-effective if/when something closer to the true cost is passed on to the user, which currently often isn't the case.

Quote
I'm pretty sure "terawatts" was not thrown around without regard.

Oh, I think there's thought behind it. But I think it's a hypothetical of what can be done assuming unbounded demand (as well as not hitting any limits in scaling up manufacturing, etc.). It's perfectly valid in that context; I just don't think that's the most likely scenario.
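For scale, the two numbers in this exchange (a ~10B population peak and terawatt-level power) combine into a simple per-capita figure. The comparison wattages in the comments are rough common reference points, not figures from the thread:

```python
# Per-capita sanity check on terawatt-scale AI power demand, using the
# thread's own numbers: ~10 billion people at peak, 1 terawatt of AI compute.
population = 10e9   # ~peak population, from the posts above
ai_power_w = 1e12   # 1 terawatt

per_person_w = ai_power_w / population
print(f"{per_person_w:.0f} W of continuous AI compute per person")
# For comparison: an old incandescent bulb draws ~100 W, and average
# worldwide electricity generation is very roughly a few terawatts total.
```

So terawatt-scale AI implies on the order of 100 W of dedicated compute running continuously for every person on Earth, which is the crux of the "hard upper limit" argument either way: plausible if AI becomes as pervasive as electricity itself, implausible if demand saturates earlier.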
I think Musk is talking about inference, not training, for the initial build-out. Training data centers need to be colocated, with extremely high-speed interconnects between nodes. That requires on-orbit assembly.
At the altitude of our planned constellation, the non-sphericity of Earth's gravitational field, and potentially atmospheric drag, are the dominant non-Keplerian effects impacting satellite orbital dynamics. In the figure below, we show trajectories (over one full orbit) for an illustrative 81-satellite constellation configuration in the orbital plane, at a mean cluster altitude of 650 km. The cluster radius is R = 1 km, with the distance between next-nearest-neighbor satellites oscillating between ~100–200 m under the influence of Earth's gravity.

The models show that, with satellites positioned just hundreds of meters apart, we will likely only require modest station-keeping maneuvers to maintain stable constellations within our desired sun-synchronous orbit.
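The factor-of-two "breathing" between ~100 m and ~200 m is what basic relative-orbit geometry predicts. A minimal Clohessy-Wiltshire sketch for a 100 m radial offset in a closed relative orbit; note that CW ignores the J2 and drag effects the post says dominate, so this is only an illustration of the geometry, not a reproduction of their model:

```python
import math

# Clohessy-Wiltshire (Hill) relative motion about a circular 650 km
# reference orbit, for one satellite offset from the cluster reference point.
MU = 398600.4418          # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.137        # km
a = R_EARTH + 650.0       # reference orbit radius, km
n = math.sqrt(MU / a**3)  # mean motion, rad/s

x0 = 0.100  # km: 100 m initial radial offset
# Choosing the along-track rate ydot0 = -2*n*x0 closes the relative orbit
# (no secular drift), giving the classic 2:1 along-track/radial ellipse:
#   x(t) = x0*cos(n*t),  y(t) = -2*x0*sin(n*t)
def separation_km(t):
    x = x0 * math.cos(n * t)         # radial component
    y = -2.0 * x0 * math.sin(n * t)  # along-track component
    return math.hypot(x, y)

period = 2.0 * math.pi / n
seps = [separation_km(k * period / 100) for k in range(101)]
print(f"min {min(seps)*1000:.0f} m, max {max(seps)*1000:.0f} m over one orbit")
```

Because a bounded relative orbit is a 2:1 ellipse, any drift-free cluster member naturally sees its separation oscillate by roughly a factor of two per orbit, consistent with the ~100–200 m range the post reports from its J2-inclusive models.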