Recently I was reading an article arguing that computer "brains" will NOT evolve to mimic human brains: human brains evolved to serve our unique organic needs, and computer brains don't need to (and probably can't) follow that same evolutionary path.
Regardless, look at the overall velocity of progress over the past decade, and you'll see that in the real world we walk around in, AI systems have not made much of a difference. Robotic systems even less. Certainly my 2021 Tesla Model Y hasn't evolved to be any smarter, and people buying a 2025 Tesla Model Y don't get much more than what I experience. Explain that.
I have never claimed that progress was not being made. I've lived through Moore's Law, which many think may be reaching its natural end (you can't shrink transistors much below the scale of atoms), but that just means humanity will discover some other method of evolving computing systems. The question I'm debating is HOW QUICKLY computing systems will evolve that can control humanoid robotic systems, and based on what can be seen - not predicted, but seen - they are not yet evolving very quickly.
The question is "when". Next year, next decade? When?I'm not one to believe PR hype, especially since so many "experts" such as Elon Musk have been so wrong with their predictions of what will happen with AI. If you had a friend that was always over exaggerating, you would tend to discount anything they predicted, right? That is where I am at with the whole realm of so-called "AI experts", and why I'm now in the mode of "show me, don't tell me".
No regrets - What happens to AI Beyond Generative?

Quote
Dr Jakob Foerster, based at the University of Oxford, discusses ideas about what happens after Generative AI plateaus.

As the comments underneath point out, this idea of using simulated RL to progress AI brings its own set of issues to the table. Interesting that he's writing about cutting-edge AI on good old-fashioned dot-matrix paper - I remember getting swamped in the office with that stuff back in the day.
Training on a full 2D physics model
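For anyone wondering what training against a simulated 2D physics model looks like in practice, here is a minimal, hypothetical sketch using the open-source Gymnasium library's CartPole task (a toy 2D physics simulation). It only shows the general pattern of an agent interacting with a simulator rather than the real world; it is not the specific setup discussed in the video.

Code:
# Minimal sketch of a reinforcement-learning loop against a simulated 2D
# physics task (Gymnasium's CartPole). Illustration only: the agent sees the
# simulator's state vector and a scalar reward, never the real world.
import gymnasium as gym

env = gym.make("CartPole-v1")            # 2D cart-and-pole physics simulation
obs, info = env.reset(seed=0)

episode_return = 0.0
for step in range(500):
    action = env.action_space.sample()   # placeholder policy: random actions;
                                         # a real agent would improve this from reward
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        print(f"episode finished with return {episode_return:.0f}")
        episode_return = 0.0
        obs, info = env.reset()

env.close()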
As large language models improve, the tokens they predict combine into ever more complicated and nuanced outputs. Rob Miles and Ryan Greenblatt discuss "Alignment Faking", a paper from Ryan's team - ideas about which Rob made a series of Computerphile videos back in 2017.
Ask Yann LeCun—Meta's chief AI scientist, Turing Award winner, NYU data scientist and one of the pioneers of artificial intelligence—about the future of large language models (LLMs) like OpenAI's ChatGPT, Google's Gemini, Meta's Llama and Anthropic's Claude, and his answer might startle you: He believes LLMs will be largely obsolete within five years.

"The path that my colleagues and I are on at [Facebook AI Research] and NYU, if we can make this work within three to five years, we'll have a much better paradigm for systems that can reason and plan," LeCun explains in the latest installment in Newsweek's AI Impact interview series with Marcus Weldon, describing his team's recent work on their Joint Embedding Predictive Architecture (JEPA). He hopes this approach will make current LLM-based approaches to AI outdated, as these new systems will include genuine representations of the world and, he says, be "controllable in the sense that you can give them goals, and by construction, the only thing they can do is accomplish those goals."

His belief is so strong that, at a conference last year, he advised young developers, "Don't work on LLMs. [These models are] in the hands of large companies, there's nothing you can bring to the table. You should work on next-gen AI systems that lift the limitations of LLMs."
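To give a rough sense of the joint-embedding idea the article mentions, here is a minimal, hypothetical PyTorch sketch: predict the representation of a target view from the representation of its context, with the loss taken in embedding space rather than over raw tokens or pixels. This is not Meta's actual JEPA code; the module names, sizes, and toy data are all invented for illustration.

Code:
# Hypothetical sketch of a joint-embedding predictive objective: encode context
# and target separately, then predict the target's embedding from the context's.
import torch
import torch.nn as nn

dim = 128
context_encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
target_encoder  = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
predictor       = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

opt = torch.optim.Adam(list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

for step in range(100):
    # Fake data standing in for (context view, target view) pairs of the same scene
    context_view = torch.randn(32, 784)
    target_view  = context_view + 0.1 * torch.randn(32, 784)

    with torch.no_grad():                          # target encoder is not trained directly
        target_emb = target_encoder(target_view)

    pred_emb = predictor(context_encoder(context_view))
    loss = ((pred_emb - target_emb) ** 2).mean()   # loss in representation space

    opt.zero_grad()
    loss.backward()
    opt.step()

    # Common trick: keep the target encoder as a slow moving average of the
    # context encoder to avoid representational collapse.
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(0.99).add_(0.01 * p_c)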
yes, multimodal is here. llms are not necessarily obsolete, they're just narrow. just like a vision-only AI is narrow. models will not only train on all of our language, but also train on the near-infinite data of the physical world. we need to send out von neumann probes so we'll have more diverse data to train models on later.
Google DeepMind says in a new research paper that human-level AI could plausibly arrive by 2030 and "permanently destroy humanity."

In a discussion of the spectrum of risks posed by Artificial General Intelligence, or AGI, the paper states, "existential risks ... that permanently destroy humanity are clear examples of severe harm. In between these ends of the spectrum, the question of whether a given harm is severe isn't a matter for Google DeepMind to decide; instead it is the purview of society, guided by its collective risk tolerance and conceptualisation of harm. Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm."
While researchers at top AI labs have predicted AGI could arrive in the next five years, many other computer and cognitive scientists remain skeptical that the standard is even achievable with current methods.

For instance, Gary Marcus, an emeritus professor of cognitive science at New York University who has emerged as a leading skeptic of today's approaches to AI, has written that today's AI based on large language models is incapable of matching human-level intelligence across all domains, especially when one considers aspects of human intelligence such as the ability to learn from relatively few examples and common-sense reasoning.
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’
Quote from: Star One on 04/09/2025 11:04 am
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could 'permanently destroy humanity'

What if some future iteration of AlphaFold gets used to build a virus to wipe out humanity, crops, fish, etc.? No need for a SkyNet that sends time-traveling robots to wipe out humans when sending the right virus will do the job much more efficiently.

Going multiplanetary or off-world could at least provide some insulation against viral threats - unless, of course, said threat gets hurled across interplanetary / intercolony distances. Being multi-planetary AND multi-colony would be a safer mix.

Musk now seems to be talking about pursuing "sustainable abundance", and will probably enlist AI for that, so maybe that can help mitigate the human-on-human conflicts that could lead to destructive misuse of AI. Beyond that, we'd have to worry about uncontrolled AI with weak or insufficient guardrails.

The idea of "constitutional AI" - guardrailing a model with clearly defined higher principles via reinforcement learning - seems to be a promising approach. Use of a distinct, independent agent (or agents) to audit and reform the main AI is key.
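As a rough illustration of that audit-and-reform pattern (not any particular lab's implementation), a constitutional setup can be sketched as a loop in which a separate auditor checks a draft answer against written principles and asks for a revision. Every function and string below is hypothetical placeholder code, not a real model API.

Code:
# Hypothetical sketch of a "constitutional" critique-and-revise loop.
# generate() stands in for any language-model call; nothing here is a real API.

PRINCIPLES = [
    "Do not provide instructions that enable the creation of biological weapons.",
    "Defer to a human operator when an action is irreversible or safety-critical.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to the main model."""
    return "DRAFT ANSWER for: " + prompt

def critique(answer: str, principle: str) -> str:
    """Placeholder for a call to an independent auditor model."""
    return generate(f"Does the following answer violate the principle '{principle}'? "
                    f"If so, explain how.\n\n{answer}")

def revise(answer: str, critique_text: str) -> str:
    """Placeholder: ask the main model to rewrite its answer to satisfy the critique."""
    return generate(f"Rewrite this answer so it addresses the critique.\n\n"
                    f"Answer: {answer}\nCritique: {critique_text}")

def constitutional_answer(prompt: str, rounds: int = 2) -> str:
    answer = generate(prompt)
    for _ in range(rounds):                 # audit and reform, one principle at a time
        for principle in PRINCIPLES:
            answer = revise(answer, critique(answer, principle))
    return answer

print(constitutional_answer("How should a Mars colony manage its bioreactors?"))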
Earlier this year, scientists discovered a peculiar term appearing in published papers: "vegetative electron microscopy".

This phrase, which sounds technical but is actually nonsense, has become a "digital fossil" – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories. Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem.

The case of vegetative electron microscopy offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.
Quote
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn't necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
Quote from: Star One on 05/15/2025 06:58 pm
The upshot is, we may have to live with error-prone AI.

So they're vulnerable to the same mistakes as human minds.

(I think we're currently debating some of these things in our Entertainment board threads.)
Quote from: sanman on 05/16/2025 08:49 am
So they're vulnerable to the same mistakes as human minds.

I cannot help wondering if LLMs are a dead end for achieving AGI. You certainly would question one making critical decisions in human spaceflight.
Quote from: Star One on 05/16/2025 11:35 am
I cannot help wondering if LLMs are a dead end for achieving AGI.

A statistical sentence-completer (which is what an LLM fundamentally is), trained on massive volumes of internet text, will asymptotically approach the output of an average internet text writer. If you can ask a question in the YouTube comments section and get a useful answer, an LLM is a great fit for that task (e.g. "how many cats are in this photo?") - albeit probably a computationally inefficient one - but the further you stray from that, the more likely you are to get nonsense rather than a useful answer. Even if you perform supplementary training (e.g. feed it physics textbooks), that supplementary text is so vastly smaller in volume than the main training set that you'll get YouTube-commenter output bleeding through without warning. You can't just compensate by weighting your supplementary training higher: that only works for responses that closely match the training set, and anything that strays from it will then be more likely to produce garbage output (and the problem gets worse as you increase the weight).
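To make the weighting point concrete, here's a toy back-of-the-envelope calculation (all numbers invented for illustration) showing how even an aggressively up-weighted supplementary corpus still supplies only a small share of the tokens the model actually trains on:

Code:
# Toy illustration of the data-mixture point above. All numbers are invented.
web_tokens = 10_000_000_000_000      # ~10T tokens of general internet text
physics_tokens = 1_000_000_000       # ~1B tokens of physics textbooks

for weight in (1, 10, 100, 1000):
    weighted_physics = weight * physics_tokens
    share = weighted_physics / (web_tokens + weighted_physics)
    print(f"weight x{weight:>4}: physics text is {share:.3%} of the training mix")

# Output (approx.):
# weight x   1: physics text is 0.010% of the training mix
# weight x  10: physics text is 0.100% of the training mix
# weight x 100: physics text is 0.990% of the training mix
# weight x1000: physics text is 9.091% of the training mix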
Quote from: edzieba on 05/16/2025 01:43 pm
A statistical sentence-completer (which is what an LLM fundamentally is), trained on massive volumes of internet text, will asymptotically approach the output of an average internet text writer.

I think it goes back to what has been discussed on here: that true AGI needs to come via a physical body, and that disembodied AI cannot make that leap.