Author Topic: How Can AI Be Used for Space Applications?  (Read 131539 times)

Online sanman

  • Senior Member
  • *****
  • Posts: 6502
  • Liked: 1554
  • Likes Given: 20
Re: How Can AI Be Used for Space Applications?
« Reply #500 on: 03/30/2025 07:52 pm »
Sorry, just switching discussion over to this thread, rather than derailing the humanoid robots discussion thread.

Recently I was reading an article that advocated that computer "brains" would NOT evolve to mimic human brains, since human brains evolved because of our unique organic needs, but computer brains don't need to follow (and probably can't) that method of evolving.

Okay, I wasn't trying to take my brain analogy that far -- I only meant that our brains keep processing and memory close together and intertwined. That necessity is driven by physics: you want the processing and the information it's acting on to be close together. That applies whether the system is organic or inorganic.

The need for fast access to memory is what drove Nvidia to create its newer NVLink rather than continuing to rely on InfiniBand.

We can now even see the newest generation of consumer products reflecting the voracious data/memory needs of AI.
GPUs are now running off system RAM instead of being confined to local video memory. But when you step off the processor and onto the system bus to access your RAM, you fall off the clock multiplier into a slower speed regime than the on-chip local cache. Newer motherboards are compensating for this with wider data paths to the processor.
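The cache-versus-RAM point is easy to demonstrate: the same computation run in a memory-friendly order versus a cache-hostile order can differ in runtime, purely because of locality. A minimal Python sketch (pure illustration, not tied to NVLink or any particular bus; in CPython the effect is muted by interpreter overhead, and it is far more pronounced in C or with NumPy arrays):

```python
import time

def sum_rows(matrix):
    """Traverse in storage order: the inner loop walks one contiguous row."""
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total

def sum_cols(matrix):
    """Traverse against storage order: the inner loop jumps between rows."""
    n, m = len(matrix), len(matrix[0])
    total = 0
    for j in range(m):
        for i in range(n):
            total += matrix[i][j]
    return total

n = 1000
matrix = [[1] * n for _ in range(n)]

t0 = time.perf_counter()
by_rows = sum_rows(matrix)
t1 = time.perf_counter()
by_cols = sum_cols(matrix)
t2 = time.perf_counter()

# Both orders compute the identical sum; the row-order walk is usually
# faster because it touches memory that is already in cache.
print(f"rows: {t1 - t0:.4f}s  cols: {t2 - t1:.4f}s  equal: {by_rows == by_cols}")
```

Same answer either way; the only variable is how far the data sits from the compute.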

So we can see industry coming up with new solutions, and hardware is not staying still, or stagnating.


Quote
Regardless, look at the velocity of progress overall, over the past decade, and you'll see that in the real world we experience when we walk around, AI systems have not made much of a difference. Robotic systems even less. Certainly my 2021 Tesla Model Y hasn't evolved to be any smarter, and people buying a 2025 Tesla Model Y don't get much more than what I experience. Explain that.

How can you say that? Look at how Tesla's FSD has improved from the latest V13 upgrade onward. Ordinary users are really impressed with the improvements.

Quote
I have never advocated that progress was not being made. I've lived through Moore's Law, which many think may be reaching its natural end (i.e. you can't shrink atoms), but that just means humanity will discover some other method of evolving computing systems. The question I'm debating is HOW QUICKLY computing systems will evolve that can control humanoid robotic systems, and based on what can be seen - not predicted, but seen - they are not yet evolving very quickly.

The motherboard improvements I mentioned are one example. Another example is AMD's expansion of L3 cache on chip.
These are examples of positive trends. Memory and processors are trending toward merging together. That may take some time, given the architecture we've started out with and are adaptively migrating away from.
They'll certainly benefit Edge Computing, which means they'll also benefit space applications, including robotics.


Quote
The question is "when". Next year, next decade? When?

I'm not one to believe PR hype, especially since so many "experts" such as Elon Musk have been so wrong with their predictions of what will happen with AI. If you had a friend that was always over exaggerating, you would tend to discount anything they predicted, right? That is where I am at with the whole realm of so-called "AI experts", and why I'm now in the mode of "show me, don't tell me".  ;)

The AI improvement curve is skyrocketing - like exponentially. That's why all the Big Brain people are betting on it.
AIs are now teaching other AIs how to be better AIs. There's a feedback loop going on here, and it's also going to have ripple effects all across every other domain and discipline.
It's like we're loosening a knot. Every place we loosen the knot helps to create a little more give/latitude to loosen other parts of the knot - until we've freed up everything - and then we're only limited by the (known) laws of physics themselves.
Then we'd have to see about loosening those.
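The "AIs teaching AIs" claim is a feedback loop, and the difference a loop makes can be sketched with a toy growth model. All numbers below (rates, coupling constant, step count) are invented purely to show the shape of the curves, not to predict anything:

```python
# Fixed-rate growth vs growth where the improvement rate itself rises
# with capability (a crude stand-in for AIs improving other AIs).

def plain_exponential(c0, r, steps):
    """Capability compounds at a fixed rate r per generation."""
    c = c0
    for _ in range(steps):
        c *= 1 + r
    return c

def with_feedback(c0, r, k, steps):
    """Same, but each generation's rate rises with current capability."""
    c, rate = c0, r
    for _ in range(steps):
        c *= 1 + rate
        rate += k * c  # hypothetical coupling strength k
    return c

print(plain_exponential(1.0, 0.1, 20))   # ordinary exponential
print(with_feedback(1.0, 0.1, 0.01, 20)) # super-exponential
```

With any positive coupling the feedback curve pulls ahead of the plain exponential, which is the "knot loosening" intuition in miniature.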

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
How Can AI Be Used for Space Applications?
« Reply #501 on: 04/01/2025 04:46 pm »
No regrets - What happens to AI Beyond Generative?

Quote
Discussing ideas of what happens after Generative AI plateaus, Dr Jakob Foerster is based at the University of Oxford.



As the comments underneath point out, this idea of simulating RL to progress AI brings its own set of issues to the table. Interesting that he's writing about cutting-edge AI on good old-fashioned dot-matrix paper. I remember getting swamped in the office with that stuff back in the day.
« Last Edit: 04/01/2025 04:55 pm by Star One »

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3122
  • Seattle
  • Liked: 2337
  • Likes Given: 3880
Re: How Can AI Be Used for Space Applications?
« Reply #502 on: 04/02/2025 02:14 am »
No regrets - What happens to AI Beyond Generative?

Quote
Discussing ideas of what happens after Generative AI plateaus, Dr Jakob Foerster is based at the University of Oxford.



As the comments underneath point out, this idea of simulating RL to progress AI brings its own set of issues to the table. Interesting that he's writing about cutting-edge AI on good old-fashioned dot-matrix paper. I remember getting swamped in the office with that stuff back in the day.

Quote
Training on a full 2D physics model

I want to know what level the AI can get to on Asteroids

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #503 on: 04/02/2025 06:36 pm »
AI will try to cheat & escape:

Quote
As Large Language Models improve, the tokens they predict form ever more complicated and nuanced outcomes. Rob Miles and Ryan Greenblatt discuss "Alignment Faking" a paper Ryan's team created - ideas about which Rob made a series of videos on Computerphile in 2017.


Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #504 on: 04/05/2025 07:01 pm »
Yann LeCun, Pioneer of AI, Thinks Today's LLMs Are Nearly Obsolete

Quote
Ask Yann LeCun—Meta's chief AI scientist, Turing Award winner, NYU data scientist and one of the pioneers of artificial intelligence—about the future of large language models (LLMs) like OpenAI's ChatGPT, Google's Gemini, Meta's Llama and Anthropic's Claude, and his answer might startle you: He believes LLMs will be largely obsolete within five years.

"The path that my colleagues and I are on at [Facebook AI Research] and NYU, if we can make this work within three to five years, we'll have a much better paradigm for systems that can reason and plan," LeCun explains in the latest installment in Newsweek's AI Impact interview series with Marcus Weldon, describing his team's recent work on their Joint Embedding Predictive Architecture (JEPA). He hopes this approach will make current LLM-based approaches to AI outdated, as these new systems will include genuine representations of the world and, he says, be "controllable in the sense that you can give them goals, and by construction, the only thing they can do is accomplish those goals."

His belief is so strong that, at a conference last year, he advised young developers, "Don't work on LLMs. [These models are] in the hands of large companies, there's nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs."

https://www.newsweek.com/ai-impact-interview-yann-lecun-artificial-intelligence-2054237

Offline BN

  • Full Member
  • *
  • Posts: 157
  • Earth
  • Liked: 67
  • Likes Given: 13
Re: How Can AI Be Used for Space Applications?
« Reply #505 on: 04/08/2025 04:49 am »
yes, multimodal is here. llms are not necessarily obsolete, they're just narrow. just like a vision-only AI is narrow.

models will not only train on all of our language, but also train on the near-infinite data of the physical world.

we need to send out von neumann probes so we'll have more diverse data to train models on later.
« Last Edit: 04/08/2025 04:51 am by BN »

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #506 on: 04/08/2025 11:50 am »
yes, multimodal is here. llms are not necessarily obsolete, they're just narrow. just like a vision-only AI is narrow.

models will not only train on all of our language, but also train on the near-infinite data of the physical world.

we need to send out von neumann probes so we'll have more diverse data to train models on later.
This comes back to the old debate of whether you need physical bodies to create true AGI/ASI, or whether it can be achieved without them.

Online sanman

  • Senior Member
  • *****
  • Posts: 6502
  • Liked: 1554
  • Likes Given: 20
Re: How Can AI Be Used for Space Applications?
« Reply #507 on: 04/08/2025 04:55 pm »
New analytical technique exposes the fallacy underlying various claims of "AI reasoning"



Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #508 on: 04/09/2025 11:04 am »
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’

Quote
Google DeepMind says in a new research paper that human-level AI could plausibly arrive by 2030 and "permanently destroy humanity."

In a discussion of the spectrum of risks posed by Artificial General Intelligence, or AGI, the paper states, "existential risks ... that permanently destroy humanity are clear examples of severe harm. In between these ends of the spectrum, the question of whether a given harm is severe isn’t a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm. Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm."



Quote
While researchers at top AI labs have predicted AGI could arrive in the next five years, many other computer and cognitive scientists remain skeptical that the standard is even achievable with current methods.

For instance, Gary Marcus, an emeritus professor of cognitive science at New York University who has emerged as a leading skeptic of today's approaches to AI, has written that today's AI based on large language models is incapable of matching human-level intelligence across all domains, especially when one considers aspects of human intelligence such as the ability to learn from relatively few examples and common sense reasoning.

https://www.yahoo.com/news/google-deepmind-145-page-paper-160724450.html?

Online sanman

  • Senior Member
  • *****
  • Posts: 6502
  • Liked: 1554
  • Likes Given: 20
Re: How Can AI Be Used for Space Applications?
« Reply #509 on: 04/26/2025 05:07 pm »
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’

What if some future iteration of AlphaFold gets used to build a virus to wipe out humanity, crops, fish, etc? No need for a SkyNet that sends time-traveling robots to wipe out humans, when sending the right virus will do the job much more efficiently.

Going multiplanetary or off-world could at least provide some insulation against viral threats. Unless of course said threat gets hurled across the interplanetary / intercolony distances. Actually, being multi-planetary AND multi-colony would be a safer mix.

Musk seems to now be talking about pursuing "sustainable abundance", and will probably enlist AI for that, so maybe that can help to mitigate human-on-human conflict reasons that could lead to destructive misuse of AI.

Beyond that, we'd have to worry about uncontrolled AI with weak or insufficient guardrails.
The idea of "constitutional AI", with clearly defined higher principles enforced as guardrails through reinforcement learning, seems to be a promising approach. Using a distinct, independent agent (or agents) to audit and reform the main AI is key.
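The audit-and-reform idea can be sketched as a loop: a main model drafts an answer, an independent auditor checks the draft against a short "constitution", and the draft is revised until it passes. Everything below is a stand-in: the constitution entries, the draft text, and the revision rules are all invented for illustration, not taken from any real constitutional-AI system.

```python
# Toy audit/reform loop in the spirit of constitutional AI.
# The "model" and "auditor" are plain functions, not real AI systems.

CONSTITUTION = [
    ("no absolute certainty", lambda text: "guaranteed" not in text),
    ("must cite a caveat", lambda text: "may" in text or "might" in text),
]

def draft_answer():
    """Stand-in for the main model's first attempt."""
    return "This maneuver is guaranteed to succeed."

def audit(text):
    """Independent check of a draft against each constitutional principle."""
    return [name for name, ok in CONSTITUTION if not ok(text)]

def revise(text, violations):
    """Crude 'reform' step: patch the draft until the auditor is satisfied."""
    if "no absolute certainty" in violations:
        text = text.replace("guaranteed", "likely")
    if "must cite a caveat" in violations:
        text += " Outcomes may vary."
    return text

answer = draft_answer()
for _ in range(3):  # bounded audit/reform loop
    violations = audit(answer)
    if not violations:
        break
    answer = revise(answer, violations)

print(answer)
```

The key structural property is that the auditor never generates text itself; it only vetoes, which is why independence between the two agents matters.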

Online DanClemmensen

  • Senior Member
  • *****
  • Posts: 7957
  • Earth (currently)
  • Liked: 6425
  • Likes Given: 2733
Re: How Can AI Be Used for Space Applications?
« Reply #510 on: 04/26/2025 05:47 pm »
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’

What if some future iteration of AlphaFold gets used to build a virus to wipe out humanity, crops, fish, etc? No need for a SkyNet that sends time-traveling robots to wipe out humans, when sending the right virus will do the job much more efficiently.

Going multiplanetary or off-world could at least provide some insulation against viral threats. Unless of course said threat gets hurled across the interplanetary / intercolony distances. Actually, being multi-planetary AND multi-colony would be a safer mix.

Musk seems to now be talking about pursuing "sustainable abundance", and will probably enlist AI for that, so maybe that can help to mitigate human-on-human conflict reasons that could lead to destructive misuse of AI.

Beyond that, we'd have to worry about uncontrolled AI with weak or insufficient guardrails.
The idea of "constitutional AI", with clearly defined higher principles enforced as guardrails through reinforcement learning, seems to be a promising approach. Using a distinct, independent agent (or agents) to audit and reform the main AI is key.
AI safety is a well-established field of academic study. You can find some of the leaders in the field on the page
   https://en.wikipedia.org/wiki/Technological_singularity
In particular, Eliezer Yudkowsky has been working on this for more than 30 years, since he was a teenager.
  https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
He founded
  https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute



Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #511 on: 05/05/2025 08:44 pm »
It seems the more powerful LLMs get, the more prone to hallucinating they are.

https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #512 on: 05/08/2025 09:21 am »
Quote
Earlier this year, scientists discovered a peculiar term appearing in published papers: “vegetative electron microscopy”.

This phrase, which sounds technical but is actually nonsense, has become a “digital fossil” – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories.

Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem.

The case of vegetative electron microscopy offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.

https://theconversation.com/a-weird-phrase-is-plaguing-scientific-papers-and-we-traced-it-back-to-a-glitch-in-ai-training-data-254463

Offline tea monster

  • Full Member
  • ****
  • Posts: 677
  • Across the Universe
    • My ArtStation Portfolio
  • Liked: 921
  • Likes Given: 217
Re: How Can AI Be Used for Space Applications?
« Reply #513 on: 05/10/2025 09:45 am »
What a time we live in. Completely made up technobabble from Star Trek is now a real thing!

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #514 on: 05/15/2025 06:58 pm »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
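Narayanan's rule of thumb has a simple expected-time reading: using the model costs you the verification time, plus the full research time whenever the answer turns out to be wrong. A sketch of that comparison, with all times and probabilities hypothetical:

```python
def should_use_model(t_verify, t_research, p_wrong):
    """True if asking the model and fact-checking it beats doing the
    research directly. When the answer is wrong you fall back to doing
    the research anyway, so that cost is paid with probability p_wrong."""
    expected_with_model = t_verify + p_wrong * t_research
    return expected_with_model < t_research

# Hypothetical numbers, in minutes:
print(should_use_model(5, 60, 0.2))   # 5 + 0.2*60 = 17 < 60 -> True
print(should_use_model(50, 60, 0.5))  # 50 + 0.5*60 = 80 >= 60 -> False
```

The second case is the trap the article warns about: when verification is nearly as expensive as the research itself, an unreliable model adds nothing.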

Online sanman

  • Senior Member
  • *****
  • Posts: 6502
  • Liked: 1554
  • Likes Given: 20
Re: How Can AI Be Used for Space Applications?
« Reply #515 on: 05/16/2025 08:49 am »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

So they're vulnerable to the same mistakes as human minds.
(I think we're currently debating some of these things in our Entertainment board threads.)

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #516 on: 05/16/2025 11:35 am »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

So they're vulnerable to the same mistakes as human minds.
(I think we're currently debating some of these things in our Entertainment board threads.)
I cannot help wondering if LLMs are a dead end for achieving AGI. You certainly would question one making critical decisions in human spaceflight.

Offline edzieba

  • Virtual Realist
  • Senior Member
  • *****
  • Posts: 6998
  • United Kingdom
  • Liked: 10681
  • Likes Given: 50
Re: How Can AI Be Used for Space Applications?
« Reply #517 on: 05/16/2025 01:43 pm »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

So they're vulnerable to the same mistakes as human minds.
(I think we're currently debating some of these things in our Entertainment board threads.)
I cannot help wondering if LLMs are a dead end for achieving AGI. You certainly would question one making critical decisions in human spaceflight.
A statistical sentence-completer (what an LLM fundamentally is) trained on massive volumes of internet text will asymptotically approach the output of an average internet text writer.
If you can ask a question to the YouTube comments section and get a useful answer, an LLM is a great fit for that task (e.g. "how many cats are in this photo?") - albeit probably a computationally inefficient one - but the further you stray from that, the more likely you are to get nonsense rather than a useful answer. Even if you perform supplementary training (e.g. feed it physics textbooks), that supplementary text is so vastly smaller in volume than the main training set that you'll get YouTube-commenter output bleeding through without warning. You can't just compensate by weighting your supplementary training higher, as that only works for responses that match closely to the training set, and anything that strays from that will then be more likely to produce garbage output (and the problem gets worse as you increase the weight).
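The dilution problem described above can be made concrete with some toy accounting: if the supplementary corpus is tiny relative to the main one, even aggressive up-weighting leaves it a small fraction of the effective training signal. The corpus sizes below are invented for illustration; real training pipelines are more involved than a single ratio.

```python
def supplement_share(main_tokens, supp_tokens, weight):
    """Fraction of the effective training signal contributed by a
    supplementary corpus whose examples are up-weighted by `weight`.
    Toy accounting only."""
    return (weight * supp_tokens) / (weight * supp_tokens + main_tokens)

main = 10_000_000_000  # ~10B tokens of general web text (invented figure)
supp = 10_000_000      # ~10M tokens of physics textbooks (invented figure)

for w in (1, 10, 100):
    print(f"weight {w:>3}: supplement share = {supplement_share(main, supp, w):.1%}")
# weight 1 -> 0.1%, weight 10 -> 1.0%, weight 100 -> 9.1%
```

Even a 100x up-weight leaves the supplement under 10% of the signal here, and, as the post notes, cranking the weight further raises the odds of garbage output on prompts that don't match the up-weighted set.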

Offline Star One

  • Senior Member
  • *****
  • Posts: 14571
  • UK
  • Liked: 4176
  • Likes Given: 220
Re: How Can AI Be Used for Space Applications?
« Reply #518 on: 05/16/2025 04:35 pm »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

So they're vulnerable to the same mistakes as human minds.
(I think we're currently debating some of these things in our Entertainment board threads.)
I cannot help wondering if LLMs are a dead end for achieving AGI. You certainly would question one making critical decisions in human spaceflight.
A statistical sentence-completer (what an LLM fundamentally is) trained on massive volumes of internet text will asymptotically approach the output of an average internet text writer.
If you can ask a question to the youtube comments section and get a useful answer, an LLM is a great fit for that task (e.g. "how many cats are in this photo?") - albeit probably a computationally inefficient one - but the further you stray from that the more likely you are to get nonsense rather than a useful answer. Even if you perform supplementary training (e.g. feed it physics textbooks) that supplementary text is so vastly smaller in volume than the main training set that you'll get youtube-commentor output bleeding through without warning. You can't just compensate by weighting your supplementary training higher, as that only works for responses that can match closely to the training set, and anything that strays from that will then be more likely to produce garbage output (and the problem gets worse as you increase the weight).
I think it goes back to what has been discussed on here: that true AGI needs to come via a physical body, and that disembodied AI cannot achieve that leap.

Offline Twark_Main

  • Senior Member
  • *****
  • Posts: 4594
  • Technically we ALL live in space
  • Liked: 2458
  • Likes Given: 1425
Re: How Can AI Be Used for Space Applications?
« Reply #519 on: 05/16/2025 06:37 pm »
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

So they're vulnerable to the same mistakes as human minds.
(I think we're currently debating some of these things in our Entertainment board threads.)
I cannot help wondering if LLMs are a dead end for achieving AGI. You certainly would question one making critical decisions in human spaceflight.
A statistical sentence-completer (what an LLM fundamentally is) trained on massive volumes of internet text will asymptotically approach the output of an average internet text writer.
If you can ask a question to the youtube comments section and get a useful answer, an LLM is a great fit for that task (e.g. "how many cats are in this photo?") - albeit probably a computationally inefficient one - but the further you stray from that the more likely you are to get nonsense rather than a useful answer. Even if you perform supplementary training (e.g. feed it physics textbooks) that supplementary text is so vastly smaller in volume than the main training set that you'll get youtube-commentor output bleeding through without warning. You can't just compensate by weighting your supplementary training higher, as that only works for responses that can match closely to the training set, and anything that strays from that will then be more likely to produce garbage output (and the problem gets worse as you increase the weight).
I think it goes back to what has been discussed on here: that true AGI needs to come via a physical body, and that disembodied AI cannot achieve that leap.

If you buy edzieba's argument, then you shouldn't expect that adding a statistical electric motor on/off switcher will be a big improvement.

If you think LLMs are just fancy autocomplete, giving one a corporeal body will only turn it from a YouTube commenter into a YouTube commenter who won't shut up about creatine.   ;D

If you think LLMs have some deeper generalization capabilities beyond just fancy autocomplete then it might help to give it a physical body, but of course if you view LLMs that way then you probably didn't buy edzieba's argument anyway.
« Last Edit: 05/16/2025 06:41 pm by Twark_Main »
