Author Topic: How Can AI Be Used for Space Applications?  (Read 95618 times)

Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4117
  • Technically we ALL live in space
  • Liked: 2208
  • Likes Given: 1332
Re: How Can AI Be Used for Space Applications?
« Reply #420 on: 10/17/2024 11:11 pm »
The use of small modular reactors to power AI is obviously relevant to spaceflight.

Quote
On Monday, Google announced a landmark agreement with nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards of 500 megawatts "of new 24/7 carbon-free power to US electricity grids" — that is, over a decade from now when Kairos promises the reactors will be built.



Quote
In the announcement, Google and Kairos claim that the first of the modular nuclear power terminals will be up and running by 2030, with all modules completed by 2035.

https://futurism.com/the-byte/google-nuclear-power

Google press release:

https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/

Amazon is now backing X-energy, another nuclear startup

https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero

Hey, if big tech wants to advance the state-of-the-art in modular nuclear while also completely losing their shirt pursuing exciting investment opportunities, who am I to complain?    ;D
« Last Edit: 10/17/2024 11:16 pm by Twark_Main »

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 3027
  • Liked: 1171
  • Likes Given: 33
Re: How Can AI Be Used for Space Applications?
« Reply #421 on: 10/18/2024 12:35 am »
The use of small modular reactors to power AI is obviously relevant to spaceflight.

Quote
On Monday, Google announced a landmark agreement with nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards of 500 megawatts "of new 24/7 carbon-free power to US electricity grids" — that is, over a decade from now when Kairos promises the reactors will be built.



Quote
In the announcement, Google and Kairos claim that the first of the modular nuclear power terminals will be up and running by 2030, with all modules completed by 2035.

https://futurism.com/the-byte/google-nuclear-power

Google press release:

https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/

Amazon is now backing X-energy, another nuclear startup

https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero

Hey, if big tech wants to advance the state-of-the-art in modular nuclear while also completely losing their shirt pursuing exciting investment opportunities, who am I to complain?    ;D

Certainly the pace of datacenter construction is accelerated by AI datacenter investments outpacing grid capacity, but the grid capacity issue would have occurred even without the AI datacenter part, just a bit slower with the existing pace of general cloud datacenter expansion. Those servers need watts.

The point to keep an eye on is when datacenters will be colocated with SMRs, particularly the high-temperature ones that can do dry cooling, which also drives heat-driven chillers for the compute.

We might not be so far from the crazy datacenters in space idea as we thought though, if the restrictions regarding power for terrestrial compute don't change much.

Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #422 on: 10/18/2024 08:50 am »
I wonder if all this renewed interest in the US nuclear industry might lead someone to consider resurrecting research into fast breeder reactors.

Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4117
  • Technically we ALL live in space
  • Liked: 2208
  • Likes Given: 1332
Re: How Can AI Be Used for Space Applications?
« Reply #423 on: 10/18/2024 02:54 pm »
The use of small modular reactors to power AI is obviously relevant to spaceflight.

Quote
On Monday, Google announced a landmark agreement with nuclear startup Kairos Power to purchase energy produced by seven yet-to-be-built small modular nuclear reactors. The companies claim the deal aims to add upwards of 500 megawatts "of new 24/7 carbon-free power to US electricity grids" — that is, over a decade from now when Kairos promises the reactors will be built.

…

Quote
In the announcement, Google and Kairos claim that the first of the modular nuclear power terminals will be up and running by 2030, with all modules completed by 2035.

https://futurism.com/the-byte/google-nuclear-power

Google press release:

https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/

Amazon is now backing X-energy, another nuclear startup

https://www.aboutamazon.com/news/sustainability/amazon-nuclear-small-modular-reactor-net-carbon-zero

Hey, if big tech wants to advance the state-of-the-art in modular nuclear while also completely losing their shirt pursuing exciting investment opportunities, who am I to complain?    ;D

Certainly the pace of datacenter construction is accelerated by AI datacenter investments outpacing grid capacity, but the grid capacity issue would have occurred even without the AI datacenter part, just a bit slower with the existing pace of general cloud datacenter expansion. Those servers need watts.

The point to keep an eye on is when datacenters will be colocated with SMRs, particularly the high-temperature ones that can do dry cooling, which also drives heat-driven chillers for the compute.

We might not be so far from the crazy datacenters in space idea as we thought though, if the restrictions regarding power for terrestrial compute don't change much.

Yes, I understood the pitch.

I don't doubt the demand side. Based on long historical experience, I doubt the profitability of execution on the supply end. Even with ultimately viable tech (e.g. Iridium), investors can still get blasted.

But hey, This Time It Will Be Different™. "Past performance is no blah blah blah."  So keep that sweet money flowing, folks...    8)
« Last Edit: 10/18/2024 02:58 pm by Twark_Main »

Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #425 on: 10/31/2024 08:18 am »
Quote
Is AI everything that it's made out to be? Not according to Linus Torvalds, the creator of Linux and its enduring chief spokesperson: in his view, the tech is "90 percent marketing and ten percent reality." Ouch.



Quote
"I think AI is really interesting and I think it is going to change the world," Torvalds said in a portion of the interview which recently went viral. "And at the same time, I hate the hype cycle so much that I really don't want to go there."
"So my approach to AI right now is I will basically ignore it," he continued, "because I think the whole tech industry around AI is in a very bad position and it's 90 percent marketing and ten percent reality."

https://futurism.com/the-byte/creator-of-linux-trashes-ai-hype

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 2760
  • Seattle
  • Liked: 2127
  • Likes Given: 3481
Re: How Can AI Be Used for Space Applications?
« Reply #426 on: 10/31/2024 10:27 pm »
Quote
Is AI everything that it's made out to be? Not according to Linus Torvalds, the creator of Linux and its enduring chief spokesperson: in his view, the tech is "90 percent marketing and ten percent reality." Ouch.



Quote
"I think AI is really interesting and I think it is going to change the world," Torvalds said in a portion of the interview which recently went viral. "And at the same time, I hate the hype cycle so much that I really don't want to go there."
"So my approach to AI right now is I will basically ignore it," he continued, "because I think the whole tech industry around AI is in a very bad position and it's 90 percent marketing and ten percent reality."

https://futurism.com/the-byte/creator-of-linux-trashes-ai-hype

I'm learning Rust right now with AI assist. It's a language with a steep learning curve, with the classic problem of "I know *what* I want to do, just not *how* to do it," and the AI knows *how*. It's working really well. And I am about the same age as Linus. Old dog, new tricks problem.

And these aren't trivial Rust examples; it's a full-on tokio-based threaded program.

It is like having a second programmer with Rust experience looking over my shoulder giving suggestions.

Linus probably forgot how to learn new stuff years ago. I know I did, until I attempted this, and I need help doing it.

I have found that the more numerous the examples, the better the AI's code. There is so much Linux kernel code floating about that an AI can probably program almost anything needed in the kernel these days. Probably enough to replace the Russian programmers he just fired.

Getting this back to space travel, it means there doesn't need to be a technical expert for every bit of equipment on board. A properly trained AI can allow the pilot to adjust the program of the flight computer, if that level of desperation is needed (more likely the environmental systems, which have a lot more unknowns than interplanetary flight).

So AI in space applications easily means "more robots, fewer domain experts" on Mars colonies and their transport ships. Well within the current AI trajectory.

« Last Edit: 10/31/2024 10:28 pm by InterestedEngineer »

Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #427 on: 11/12/2024 09:27 pm »
Wasn’t this always a highly likely outcome? And it makes you wonder if there really is that much future in the current way of doing things in the field.

Quote
Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in December 2022.
This slowdown seems to test the core belief at the center of the argument for AI scaling: that as long as there's ever more data and computing power to feed the models — which is a big "if," given that firms have already run out of training data and are eating up electricity at unprecedented rates — those models will continue to grow or "scale" at a consistent rate.
Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had "reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data."

https://futurism.com/the-byte/openai-diminishing-returns

Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4117
  • Technically we ALL live in space
  • Liked: 2208
  • Likes Given: 1332
Re: How Can AI Be Used for Space Applications?
« Reply #428 on: 11/12/2024 10:21 pm »
Wasn’t this always a highly likely outcome? And it makes you wonder if there really is that much future in the current way of doing things in the field.

Quote
Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in December 2022.
This slowdown seems to test the core belief at the center of the argument for AI scaling: that as long as there's ever more data and computing power to feed the models — which is a big "if," given that firms have already run out of training data and are eating up electricity at unprecedented rates — those models will continue to grow or "scale" at a consistent rate.
Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had "reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data."

https://futurism.com/the-byte/openai-diminishing-returns

Much like the "death of Moore's Law," I expect this will trigger a renaissance in AI design.

When scaling had better returns than innovation, all we got was scaling. You just needed capital for additional compute and therefore OpenAI sprinted down that path. Now that scaling is nearing its limits, this provides an opportunity for truly innovative AI architectures to pull ahead.

Exciting times!

Online catdlr

  • Member
  • Senior Member
  • *****
  • Posts: 14439
  • Enthusiast since the Redstones and Thunderbirds
  • Marina del Rey, California, USA
  • Liked: 12366
  • Likes Given: 9648
Re: How Can AI Be Used for Space Applications?
« Reply #429 on: 11/13/2024 10:09 pm »
Quote
Inside Elon Musk's Colossus Supercomputer!

https://youtube.com/watch?v=Tw696JVSxJQ

For those seeking a more detailed explanation of the data center infrastructure of the GPU racks and hallways, please refer to the following:



Quote
Oct 28, 2024
We FINALLY get to show the largest AI supercomputer in the world, xAI Colossus. This is the 100,000 (at the time we filmed this) GPU cluster in Memphis Tennessee that has been on the news a lot. This video has been five months in the making, and finally Elon Musk gave us the green light to not just film, but also show everyone the Supermicro side of the cluster.

This video is being sponsored by Supermicro and that is why we are only showing the Supermicro side, which is the more advanced side. Unlike our normal content creation, this video had to be reviewed by Elon and his team before going live and we were asked to blur out portions at their request.

----------------------------------------------------------------------
Timestamps
----------------------------------------------------------------------
00:00 Introduction
01:44 Inside a xAI Colossus Data Hall
02:22 Supermicro Liquid Cooled GPU Racks
08:30 Supermicro Compute and Storage Racks
08:54 Networking is Awesome
11:51 Supermicro Storage Servers
12:26 Data Center Liquid Cooling
13:33 Tesla Megapacks for Power
14:08 Future Expansion and Thank Yous
« Last Edit: 11/14/2024 12:23 am by catdlr »
It's Tony De La Rosa, ...I don't create this stuff, I just report it.

Offline Coastal Ron

  • Senior Member
  • *****
  • Posts: 9180
  • I live... along the coast
  • Liked: 10625
  • Likes Given: 12241
Re: How Can AI Be Used for Space Applications?
« Reply #430 on: 11/13/2024 10:57 pm »
Wasn’t this always a highly likely outcome? And it makes you wonder if there really is that much future in the current way of doing things in the field.

Quote
Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in December 2022.
This slowdown seems to test the core belief at the center of the argument for AI scaling: that as long as there's ever more data and computing power to feed the models — which is a big "if," given that firms have already run out of training data and are eating up electricity at unprecedented rates — those models will continue to grow or "scale" at a consistent rate.
Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had "reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data."

https://futurism.com/the-byte/openai-diminishing-returns
Much like the "death of Moore's Law," I expect this will trigger a renaissance in AI design.

Moore's Law was an observation, not a mandate, of course. So all we are observing is that keeping up the same pace of doubling the components on a semiconductor chip every two years requires breakthroughs that don't have the same ROI as previous breakthroughs. Interesting, though not a crisis.

BTW, I highly encourage reading this article to understand how we got to where we are today with modern A.I. systems, since it came from a combination of factors that almost no one thought was worth pursuing:

How a stubborn computer scientist accidentally launched the deep learning boom - Ars Technica

And it is important to also remember that this is the 3rd wave of A.I., so A.I. hype has been around for a while...  ;)

Quote
When scaling had better returns than innovation, all we got was scaling. You just needed capital for additional compute and therefore OpenAI sprinted down that path. Now that scaling is nearing its limits, this provides an opportunity for truly innovative AI architectures to pull ahead.

The A.I.'s that get the most attention these days are Large Language Models (LLMs), of which there are a LOT. And one of the issues is certainly running out of training data for them:

Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data

But regardless, LLMs are still just really good at guessing what the next word is, not at truly understanding anything. And LLMs not only hallucinate, but there is no way to understand HOW they came to the decisions they make, which limits their ability to replace human decision making, which, though it can be flawed, can at least be explained.

So for space, sure, A.I. can be used. Image analysis is an area where A.I. excels, and discovering new molecules and proteins is making a HUGE impact on material science.

But we are still a ways from a HAL 9000, which though misguided, was able to reason.
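To make the "guessing the next word" point concrete, here is a toy bigram counter in Python. It is an illustrative sketch only (the corpus is made up); real LLMs learn vastly richer statistics over long token contexts, but the training objective is the same next-token guess.

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a tiny
# made-up corpus, then predict the most frequent follower.
corpus = "the crew checked the hatch and the crew sealed the hatch and the crew slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Note: raises IndexError for words never seen in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))    # "crew" follows "the" most often here
print(predict_next("hatch"))  # "and"
```

Swap in a bigger corpus and the same two functions keep working; what an LLM adds is conditioning on context far beyond the single preceding word.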
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Offline Coastal Ron

  • Senior Member
  • *****
  • Posts: 9180
  • I live... along the coast
  • Liked: 10625
  • Likes Given: 12241
Re: How Can AI Be Used for Space Applications?
« Reply #431 on: 11/13/2024 11:03 pm »
Inside Elon Musk's Colossus Supercomputer!

You can't say that Elon Musk doesn't have a weird sense of humor, especially when naming things. For instance, have you ever seen this movie?

Colossus: The Forbin Project
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Online Robotbeat

  • Senior Member
  • *****
  • Posts: 39454
  • Minnesota
  • Liked: 25565
  • Likes Given: 12232
Re: How Can AI Be Used for Space Applications?
« Reply #432 on: 11/13/2024 11:05 pm »
Inside Elon Musk's Colossus Supercomputer!

You can't say that Elon Musk doesn't have a weird sense of humor, especially when naming things. For instance, have you ever seen this movie?

Colossus: The Forbin Project
It's named Colossus after the first programmable digital computer, not after some random movie. https://en.wikipedia.org/wiki/Colossus_computer
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4117
  • Technically we ALL live in space
  • Liked: 2208
  • Likes Given: 1332
Re: How Can AI Be Used for Space Applications?
« Reply #433 on: 11/14/2024 06:27 am »
Wasn’t this always a highly likely outcome? And it makes you wonder if there really is that much future in the current way of doing things in the field.

Quote
Over the weekend, The Information reported that with each new flagship model, OpenAI is seeing a slowdown in the sort of "leaps" users have come to expect in the wake of its game-changing ChatGPT release in December 2022.
This slowdown seems to test the core belief at the center of the argument for AI scaling: that as long as there's ever more data and computing power to feed the models — which is a big "if," given that firms have already run out of training data and are eating up electricity at unprecedented rates — those models will continue to grow or "scale" at a consistent rate.
Responding to this latest news from The Information, data scientist Yam Peleg teased on X that another cutting-edge AI firm had "reached an unexpected HUGE wall of diminishing returns trying to brute-force better results by training longer & using more and more data."

https://futurism.com/the-byte/openai-diminishing-returns
Much like the "death of Moore's Law," I expect this will trigger a renaissance in AI design.

Moore's Law was an observation, not a mandate, of course. So all we are observing is that keeping up the same pace of doubling the components on a semiconductor chip every two years requires breakthroughs that don't have the same ROI as previous breakthroughs. Interesting, though not a crisis.

BTW, I highly encourage reading this article to understand how we got to where we are today with modern A.I. systems, since it came from a combination of factors that almost no one thought was worth pursuing:

How a stubborn computer scientist accidentally launched the deep learning boom - Ars Technica

And it is important to also remember that this is the 3rd wave of A.I., so A.I. hype has been around for a while...  ;)

Quote
When scaling had better returns than innovation, all we got was scaling. You just needed capital for additional compute and therefore OpenAI sprinted down that path. Now that scaling is nearing its limits, this provides an opportunity for truly innovative AI architectures to pull ahead.

The A.I.'s that get the most attention these days are Large Language Models (LLMs), of which there are a LOT. And one of the issues is certainly running out of training data for them:

Will We Run Out of Data? Limits of LLM Scaling Based on Human-Generated Data

But regardless, LLMs are still just really good at guessing what the next word is, not at truly understanding anything. And LLMs not only hallucinate, but there is no way to understand HOW they came to the decisions they make, which limits their ability to replace human decision making, which, though it can be flawed, can at least be explained.

So for space, sure, A.I. can be used. Image analysis is an area where A.I. excels, and discovering new molecules and proteins is making a HUGE impact on material science.

But we are still a ways from a HAL 9000, which though misguided, was able to reason.

It's bemusing if you thought this was telling me things I didn't already know (or changed the observations I make above), but in either case good to catch up the rest of the class!  ;)

A good summary.  Thanks for posting, truly.


Historical experience (such as mentioned) tells us the industry runs down the easy path until they can't do it any longer. When Moore's Law was king we saw a general stagnation in 'novel' computer architectures. They still existed of course, but nobody poured (sufficiently) enormous sums of money into them. It was generally understood to be a smarter bet to focus on shrinking the process (or wait for others to shrink it), and maybe throw in a few architectural tweaks around the edges. Big, novel architecture bets were considered (with some justification) a recipe for bankruptcy.

I expect the up-till-now "ease" of LLM compute scaling has had a similar damping effect on novel machine learning architectures. So while novel ML architectures certainly "exist," I expect we'll see (especially in hindsight) that till now they've been comparatively resource-starved in this latest AI boom.

If these novel architectures succeed, we'll see a new breakthrough era in AI. To the extent these novel architectures fail, that determines the length and depth of a hypothetical Third AI Winter.

Get ready for a wild ride ahead!  :o
« Last Edit: 11/14/2024 07:27 am by Twark_Main »

Offline edzieba

  • Virtual Realist
  • Senior Member
  • *****
  • Posts: 6832
  • United Kingdom
  • Liked: 10454
  • Likes Given: 48
Re: How Can AI Be Used for Space Applications?
« Reply #434 on: 11/14/2024 03:26 pm »
'Moore's Law' may not be the example you're thinking it is. Hitting the gate oxide limit and the end of practical transistor size scaling (and more importantly, the end of gate voltage reduction) meant that the universally applicable thread performance scaling enjoyed until then came to a screeching halt. Whilst implementing SMT and throwing more cores at the problem makes Number Go Up for Gustafson's Law scaling tasks, the vast majority of tasks scale with Amdahl's Law instead, so gain little to no performance as you add cores. And often lose performance from the overhead of additional cores (e.g. latency hits from the expanded cache hierarchies and required internal busses and dealing with inter-core contention). Power budgets have also ballooned due to the voltage scaling wall, which has other knock-on effects (e.g. needing more transistors per cell for the demanded current, further limiting practical areal scaling).
The upshot has been that the 'throw more cores at it' era only lasted a brief few years and has been superseded by adding a bunch of fixed function blocks to dies to handle more and more complex tasks with specialised silicon (video decode and encode being some of the first). This works as long as the task you happen to be performing is one implemented in an FFB, but is worthless if it is not.
What we're getting is not new and clever innovative scaling, but a teetering pile of awkward hacks growing ponderously to try and keep up with performance demand that are not otherwise being addressed. Which is where we are today, with fragmented availability of FFBs across vendors and architectures, and random inconsistent snippets of instruction sets (go look at the state of AVX512 support as an example), that needs to be danced around across the install base in order to actually achieve any performance gains from your application. If any, maybe, in some scenarios, for some vendors, on some SKUs.


As for 'AI', the current 'Deep Learning' boom is just MLNNs and SLNNs from many decades ago with enough compute thrown at them to overcome the performance deficits that were why research moved on from them in the first place. LLMs are less likely to be replaced by some whizz-bang new model (or another decades-old model resuscitated with a new name slapped on) and more likely to be dumped because nobody can figure out how to make any money with them. We're very much in the pre-dot-com-bust era of "We need a website LLM because websites LLMs are the future!", where everyone is in such a rush not to be 'left behind' that nobody is paying much thought as to what they are actually going to do with their LLMs, which fundamentally can't reason, are comically atrocious at knowledge retrieval (more likely to make data up than return it), and are barely adequate at flowchart-following basic 0-line support tasks (by the time you've fettled them with enough filters and rules to prevent them drifting off-script and offering random invalid solutions, they're no better than ELIZA), etc.

Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #435 on: 11/14/2024 06:33 pm »
Microsoft and NASA have a new AI tool to put satellite data at your fingertips

Quote
The space agency has built a custom artificial intelligence-powered copilot, called Earth Copilot, using Microsoft’s Azure OpenAI Service, the tech giant announced on Thursday. NASA’s copilot aims to make data collected by the space agency, such as information on wildfires and climate change, more accessible to the general public, scientists and educators, and policymakers. The new system lets users ask questions about NASA’s satellite data in plain English, similar to chatting with a virtual assistant.

https://qz.com/microsoft-nasa-ai-copilot-earth-data-science-openai-1851698371

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 2760
  • Seattle
  • Liked: 2127
  • Likes Given: 3481
Re: How Can AI Be Used for Space Applications?
« Reply #436 on: 11/19/2024 08:15 pm »
throwing more cores at the problem makes Number Go Up for Gustafson's Law scaling tasks, the vast majority of tasks scale with Amdahl's Law instead,

One of the great innovations in AI in the last 10-15 years is figuring out how to get neural net training from Amdahl's Law limited to Gustafson's Law.

Saw an AI seminar from Facebook in 2018, when the first real fruits of that scaling were coming online.  It's accelerated since then (and probably plateaued)
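For readers who haven't met the two laws: Amdahl's Law fixes the problem size, so the serial fraction caps the speedup no matter how many cores you add, while Gustafson's Law lets the problem grow with the core count, which is what neural-net training managed to do. A quick Python sketch with hypothetical numbers (the 5% serial fraction is illustrative, not from the post):

```python
def amdahl_speedup(s: float, n: int) -> float:
    """Fixed problem size: speedup = 1 / (s + (1 - s) / n)."""
    return 1.0 / (s + (1.0 - s) / n)

def gustafson_speedup(s: float, n: int) -> float:
    """Problem grows with core count: speedup = s + (1 - s) * n."""
    return s + (1.0 - s) * n

# With a 5% serial fraction, Amdahl caps out below 20x regardless of
# core count, while Gustafson keeps scaling almost linearly.
for n in (8, 64, 1024):
    print(n, amdahl_speedup(0.05, n), gustafson_speedup(0.05, n))
```

Moving training from the Amdahl regime to the Gustafson regime, e.g. via data parallelism where each added GPU works on its own slice of an ever-larger batch, is what made the "just add compute" era possible.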


Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4117
  • Technically we ALL live in space
  • Liked: 2208
  • Likes Given: 1332
Re: How Can AI Be Used for Space Applications?
« Reply #437 on: 11/19/2024 10:58 pm »
'Moore's Law' may not be the example you're thinking it is. Hitting the gate oxide limit...

[snipped a bunch of implementation details about chips that nobody but us understands]

Obviously if you're going to go deep into the weeds, things won't necessarily be analogous anymore. But it's quite clear that these details are specific to chip scaling, not AI.

The point is that we didn't have a reason to even explore those limits while Moore's Law was still working. Now, with AI hitting its scaling limits, we get to explore those details and see how it works out in that particular case. That history is not yet written.

As for 'AI', the current 'Deep Learning' boom is just MLNNs and SLNNs from many decades ago with enough compute thrown at them to overcome all the performance deficits that where why research moved on from them in the first place.

"Research moved on" precisely because it was non-obvious that "just" throwing compute at it would work. This, in itself, was a novel result.

You're also ignoring the impact of attention, which is what enabled scaling to work in the first place. First we needed to modify "NNs from decades ago" so that they actually do scale. This was also a novel result.
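For the curious, attention is essentially a weighted average whose mixing weights are computed from the data itself. A toy pure-Python sketch of scaled dot-product attention (illustrative shapes and values only, not any production implementation):

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Toy single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        # How similar is this query to each key, scaled by sqrt(d)?
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # data-dependent mixing weights
        # Output row = weighted average of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# Each query mixes the values, weighted by which keys it "attends" to.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(Q, K, V)
```

The key property is that the weights are recomputed per input rather than fixed at training time, which is what lets the same weights handle arbitrary contexts and, crucially, parallelize across the whole sequence.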

LLMs are less likely to be replaced by some whizz-bang new model (or another decades old model resuscitated and with a new name slapped on) and more likely to be dumped because nobody can figure out how to make any money with them. We're very much in the pre-dot-com-bust era...

Now you veer into prophesying out of your backside, using the very clever "argument by me using the word likely in a sentence."   ;)

LLMs fundamentally can't reason

It remains to be seen whether humans "fundamentally" reason (whatever "fundamentally" means). If it works it works, and all the rest is post-hoc hand waving.

Dan Dennett's work is instructive in this domain. Just replace the word "consciousness" with your "fundamental [sic] reasoning":



are comically atrocious at knowledge retrieval (more likely to make data up than return it), barely even adequate at flowchart-following basics 0-line support tasks (by the time you've fettled them with enough filters and rules to prevent them drifting off-script and offering random invalid solutions they're no better than ELIZA), etc.

These are precisely the areas where LLMs are still making rapid progress, branching out into techniques beyond simple compute scaling. It will be interesting to see where it goes from here, but continued progress remains an open question.

One possibility is, this turns out like the famous "neat trick, but computers will never beat a chess grandmaster" argument. We shall see!  8)
« Last Edit: 11/19/2024 11:28 pm by Twark_Main »

Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #438 on: 11/20/2024 04:07 pm »
NASA Tests Swimming Robots for Exploring Oceans on Icy Moons:


Offline Star One

  • Senior Member
  • *****
  • Posts: 14374
  • UK
  • Liked: 4137
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #439 on: 12/02/2024 06:23 am »
As the article itself says, using this one metric seems enormously problematic.

Quote
However, some AI researchers are on the hunt for signs of reaching singularity measured by AI progress approaching the skills and ability comparable to a human.

One such metric, defined by Translated, a Rome-based translation company, is an AI’s ability to translate speech at the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).



Quote
Although this is a novel approach to quantifying how close humanity is to approaching singularity, this definition of singularity runs into similar problems of identifying AGI more broadly. And while perfecting human speech is certainly a frontier in AI research, the impressive skill doesn’t necessarily make a machine intelligent (not to mention how many researchers don’t even agree on what “intelligence” is).

https://www.popularmechanics.com/technology/robots/a63057078/when-the-singularity-will-happen/
