Quote from: Star One on 10/17/2024 02:09 pm

The use of small modular reactors to power AI is obviously relevant to spaceflight.

Quote:
On Monday, Google announced a landmark agreement with nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards of 500 megawatts "of new 24/7 carbon-free power to US electricity grids" — that is, over a decade from now, when Kairos promises the reactors will be built.

…

Quote:
In the announcement, Google and Kairos claim that the first of the modular nuclear power terminals will be up and running by 2030, with all modules completed by 2035.

https://futurism.com/the-byte/google-nuclear-power

Google press release:
https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/

Amazon is now backing X-energy, another nuclear startup:
https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero
Quote from: Asteroza on 10/17/2024 10:59 pm

[quote trimmed]

Amazon is now backing X-energy, another nuclear startup:
https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero

Hey, if big tech wants to advance the state-of-the-art in modular nuclear while also completely losing their shirt pursuing exciting investment opportunities, who am I to complain?
Quote from: Twark_Main on 10/17/2024 11:11 pm

Hey, if big tech wants to advance the state-of-the-art in modular nuclear while also completely losing their shirt pursuing exciting investment opportunities, who am I to complain?

Certainly the pace of datacenter construction is accelerated by AI datacenter investments outpacing grid capacity, but the grid-capacity problem would have occurred even without the AI datacenters, just a bit more slowly at the existing pace of general cloud datacenter expansion. Those servers need watts.

The point to keep an eye on is when datacenters will be colocated with SMRs, particularly the high-temperature ones that can do dry cooling and also drive heat-driven chillers for the compute.

We might not be as far from the crazy "datacenters in space" idea as we thought, though, if the restrictions on power for terrestrial compute don't change much.
Quote:
Is AI everything that it's made out to be? Not according to Linus Torvalds, the creator of Linux and its enduring chief spokesperson: in his view, the tech is "90 percent marketing and ten percent reality." Ouch.

…

Quote:
"I think AI is really interesting and I think it is going to change the world," Torvalds said in a portion of the interview which recently went viral. "And at the same time, I hate the hype cycle so much that I really don't want to go there."

"So my approach to AI right now is I will basically ignore it," he continued, "because I think the whole tech industry around AI is in a very bad position and it's 90 percent marketing and ten percent reality."

https://futurism.com/the-byte/creator-of-linux-trashes-ai-hype
Wasn’t this always a highly likely outcome? It makes you wonder whether there really is much of a future in the current way of doing things in the field.

Quote:
Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in December 2022.

This slowdown seems to test the core belief at the center of the argument for AI scaling: that as long as there's ever more data and computing power to feed the models — which is a big "if," given that firms have already run out of training data and are eating up electricity at unprecedented rates — those models will continue to grow or "scale" at a consistent rate.

Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had "reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data."

https://futurism.com/the-byte/openai-diminishing-returns
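For context on what "scaling at a consistent rate" means quantitatively: the empirical scaling-law literature (e.g. the Hoffmann et al. "Chinchilla" paper, not cited in the article) fits model loss as a power law in parameter count and training tokens. A minimal sketch using that published parametric form shows why each order-of-magnitude jump buys less than the last:

```python
# Chinchilla-style parametric scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# N = parameter count, D = training tokens; constants are the published fits.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted training loss for a model of given size and data budget."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scale params 10x at a time, keeping the compute-optimal ~20 tokens/param:
for scale in (1e9, 1e10, 1e11, 1e12):
    print(f"{scale:.0e} params: predicted loss {loss(scale, 20 * scale):.3f}")
```

The absolute loss improvement shrinks with every 10x step, and the curve can never cross the irreducible floor E, which is one way to read the "wall of diminishing returns" claim.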
Inside Elon Musk's Colossus Supercomputer!
https://youtube.com/watch?v=Tw696JVSxJQ
Oct 28, 2024

We FINALLY get to show the largest AI supercomputer in the world, xAI Colossus. This is the 100,000 (at the time we filmed this) GPU cluster in Memphis, Tennessee that has been on the news a lot. This video has been five months in the making, and finally Elon Musk gave us the green light to not just film, but also show everyone the Supermicro side of the cluster.

This video is being sponsored by Supermicro and that is why we are only showing the Supermicro side, which is the more advanced side. Unlike our normal content creation, this video had to be reviewed by Elon and his team before going live and we were asked to blur out portions at their request.

Timestamps:
00:00 Introduction
01:44 Inside a xAI Colossus Data Hall
02:22 Supermicro Liquid Cooled GPU Racks
08:30 Supermicro Compute and Storage Racks
08:54 Networking is Awesome
11:51 Supermicro Storage Servers
12:26 Data Center Liquid Cooling
13:33 Tesla Megapacks for Power
14:08 Future Expansion and Thank Yous
Quote from: Star One on 11/12/2024 09:27 pm

Wasn’t this always a highly likely outcome? It makes you wonder whether there really is much of a future in the current way of doing things in the field.

[quote trimmed]

https://futurism.com/the-byte/openai-diminishing-returns

Much like the "death of Moore's Law," I expect this will trigger a renaissance in AI design.
When scaling had better returns than innovation, all we got was scaling: you just needed capital for additional compute, and so OpenAI sprinted down that path. Now that scaling is nearing its limits, there is an opening for truly innovative AI architectures to pull ahead.
Quote from: catdlr on 11/13/2024 10:09 pm

Inside Elon Musk's Colossus Supercomputer!

You can't say that Elon Musk doesn't have a weird sense of humor, especially when naming things. For instance, have you ever seen this movie?

Colossus: The Forbin Project
Quote from: Twark_Main on 11/12/2024 10:21 pm

[quote trimmed]

Much like the "death of Moore's Law," I expect this will trigger a renaissance in AI design.

Moore's Law was an observation, not a mandate, of course. All we are observing is that keeping up the same pace of doubling the components on a semiconductor chip every two years requires breakthroughs that don't have the same ROI as previous breakthroughs. Interesting, but not a crisis.

BTW, I highly encourage reading this article to understand how we got to where we are today with modern A.I. systems, since it was a combination of factors that no one thought was promising at the time:

How a stubborn computer scientist accidentally launched the deep learning boom - Ars Technica

And it is important to also remember that this is the 3rd wave of A.I., so A.I. hype has been around for a while...
Quote:
When scaling had better returns than innovation, all we got was scaling. You just needed capital for additional compute and therefore OpenAI sprinted down that path. Now that scaling is nearing its limits, this provides an opportunity for truly innovative AI architectures to pull ahead.

The A.I.s that get the most attention these days are Large Language Models (LLMs), of which there are a LOT. And one of the issues is certainly running out of training data for them:

Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data

But regardless, LLMs are still just really good at guessing what the next word is, not at truly understanding anything. And LLMs not only hallucinate, but there is no way to understand HOW they came to the decisions they make, which limits their ability to replace human decision-making; human decisions can be flawed, but at least they can be explained.

So for space, sure, A.I. can be used. Image analysis is an area where A.I. excels, and discovering new molecules and proteins is making a HUGE impact on material science.

But we are still a ways from a HAL 9000, which, though misguided, was able to reason.
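To make the "guessing what the next word is" point concrete, here is a toy sketch (purely illustrative; the hard-coded bigram table stands in for a real model's billions of learned weights): at bottom, a language model is a function from context to a probability distribution over the next token, sampled repeatedly.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model": fixed next-word scores, standing in for learned weights.
BIGRAMS = {
    "the": {"rocket": 2.0, "engine": 1.5, "pad": 0.5},
    "rocket": {"engine": 2.0, "launched": 1.0},
    "engine": {"ignited": 2.0},
}

def next_token(context, rng):
    """Sample the next word given the last word of the context."""
    table = BIGRAMS.get(context[-1], {"<eos>": 1.0})
    tokens = list(table)
    probs = softmax(list(table.values()))
    return rng.choices(tokens, weights=probs, k=1)[0]

rng = random.Random(0)
text = ["the"]
for _ in range(3):
    tok = next_token(text, rng)
    if tok == "<eos>":
        break
    text.append(tok)
print(" ".join(text))
```

Everything a chatbot emits is generated by this loop at scale; nothing in it requires, or provides, understanding of what the words mean.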
NASA has built a custom artificial intelligence-powered copilot, called Earth Copilot, using Microsoft's Azure OpenAI Service, the tech giant announced on Thursday. The copilot aims to make data collected by the space agency, such as information on wildfires and climate change, more accessible to the general public, scientists, educators, and policymakers. The new system lets users ask questions about NASA's satellite data in plain English, similar to chatting with a virtual assistant.
Throwing more cores at the problem makes Number Go Up for Gustafson's Law scaling tasks, but the vast majority of tasks scale with Amdahl's Law instead.
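For anyone who hasn't seen the two laws side by side, a quick sketch (illustrative, not from the post): Amdahl's Law caps the speedup of a fixed-size problem by its serial fraction, while Gustafson's Law assumes the parallel part of the workload grows with the core count.

```python
def amdahl(p, n):
    """Speedup on a FIXED-size problem with parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p, n):
    """Scaled speedup when the parallel workload grows with core count."""
    return (1.0 - p) + p * n

# Even a 95%-parallel fixed-size task is capped at 1/0.05 = 20x speedup,
# no matter how many cores you throw at it; the scaled version keeps growing.
for n in (8, 64, 1024):
    print(f"{n:5d} cores: Amdahl {amdahl(0.95, n):6.1f}x, "
          f"Gustafson {gustafson(0.95, n):7.1f}x")
```

This is why "just add GPUs" works for training runs (the data-parallel part scales with the cluster) but not for most ordinary workloads.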
'Moore's Law' may not be the example you're thinking it is. Hitting the gate oxide limit...[snipped a bunch of implementation details about chips that nobody but us understands]
As for 'AI', the current 'Deep Learning' boom is just multi-layer and single-layer neural networks (MLNNs and SLNNs) from many decades ago with enough compute thrown at them to overcome all the performance deficits that were why research moved on from them in the first place.
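The classic example of those old deficits is the XOR problem: a single-layer (linear threshold) unit provably can't classify XOR, which is part of why single-layer research stalled until multi-layer networks plus compute revived the field. A brute-force sketch (illustrative, hypothetical parameter grid) makes the ceiling visible:

```python
from itertools import product

# XOR truth table: (inputs, label)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def accuracy(w1, w2, b):
    """Fraction of XOR cases a single linear threshold unit gets right."""
    hits = 0
    for (x1, x2), y in XOR:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        hits += (pred == y)
    return hits / 4

# Sweep a grid of weights/bias in [-2, 2]; no setting reaches 4/4.
grid = [i / 4 for i in range(-8, 9)]
best = max(accuracy(w1, w2, b) for w1, w2, b in product(grid, repeat=3))
print(best)
```

No single linear unit exceeds 3/4 on XOR (the classes aren't linearly separable); adding one hidden layer fixes it, which is exactly the MLNN-vs-SLNN distinction above.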
LLMs are less likely to be replaced by some whizz-bang new model (or another decades-old model resuscitated with a new name slapped on) and more likely to be dumped because nobody can figure out how to make any money with them. We're very much in the pre-dot-com-bust era...
LLMs fundamentally can't reason, are comically atrocious at knowledge retrieval (more likely to make data up than return it), and are barely adequate even at basic flowchart-following tier-0 support tasks (by the time you've fettled them with enough filters and rules to prevent them drifting off-script and offering random invalid solutions, they're no better than ELIZA), etc.
However, some AI researchers are on the hunt for signs of reaching the singularity, measured by AI progress approaching skills and abilities comparable to a human's.

One such metric, defined by Translated, a Rome-based translation company, is an AI's ability to translate speech with the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).
Although this is a novel approach to quantifying how close humanity is to the singularity, this definition runs into problems similar to those of identifying AGI more broadly. And while perfecting human speech is certainly a frontier in AI research, that impressive skill doesn't necessarily make a machine intelligent (not to mention that many researchers don't even agree on what "intelligence" is).