Quote from: Coastal Ron on 03/24/2025 12:57 am
Also, computer technology isn't advancing that quickly. GPUs are, but CPUs certainly are not, and CPUs are critical components for running the operating systems of robots. AI is making fast advances, but that only applies to part of the software needed for bipedal robots, and the hardware for robots is not advancing that fast; it is still pretty rudimentary compared to what a human can do.

The hardware will evolve to look more like our brain, where processing and memory are more co-located / co-resident. Our brain has memory and processing distributed and intertwined with each other.
We can see that GPUs, whether from Nvidia or competitors like AMD, are accruing more on-chip cache memory over time... This is clearly a harbinger of where things are going.
Quote
Which is why I think human-assisted or tele-robotic systems will be of the most use for space exploration.

For the near future, yes. But over the longer term, autonomy will naturally be more useful and advantageous. AI will allow our tele-robotic commands to be given at a higher level for execution, rather than requiring low-level control and micro-management.
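Here's a minimal sketch of what I mean by that higher level of command. All the names here (JointCommand, TaskCommand, expand_task) are hypothetical; the real interface would depend on the robot:

Code:
from dataclasses import dataclass

@dataclass
class JointCommand:              # low-level: the operator drives every joint
    joint_id: int
    angle_rad: float

@dataclass
class TaskCommand:               # high-level: the operator states intent only
    verb: str                    # e.g. "pick_up"
    target: str                  # e.g. "sample_bag_3"

def expand_task(cmd: TaskCommand) -> list[JointCommand]:
    """Onboard planner turns one uplinked intent into many joint commands,
    so the Earth-side operator never micro-manages under light-speed lag."""
    # Placeholder trajectory; a real planner would use perception and IK.
    return [JointCommand(joint_id=i, angle_rad=0.1 * i) for i in range(6)]

plan = expand_task(TaskCommand(verb="pick_up", target="sample_bag_3"))
print(f"1 uplinked intent -> {len(plan)} local joint commands")

One uplink per intent instead of a continuous joystick stream is exactly what makes multi-second light lag tolerable.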
Recently I was reading an article that argued that computer "brains" would NOT evolve to mimic human brains: human brains evolved to meet our unique organic needs, and computer brains don't need to (and probably can't) follow that evolutionary path.
Quote from: Coastal Ron on 03/29/2025 02:49 pm
Recently I was reading an article that argued that computer "brains" would NOT evolve to mimic human brains: human brains evolved to meet our unique organic needs, and computer brains don't need to (and probably can't) follow that evolutionary path.

There's a significant counterexample that casts this claim into doubt: Tesla recently transitioned the FSD suite from a rules-based framework that used some auxiliary neural-network modules to a neural-network-based framework that uses some auxiliary rules. IOW, they're becoming more brain-like, not less.

We could obviously have a long discussion where we bicker about what it means to be "brain-like"; let's not. But the key distinction between rules-based systems and neural networks is important.

Given Tesla's current software-architecture proclivities, I'd bet that Optimus will also become more neural-network-based.

What would this mean for hybridizing teleoperation and true autonomous operation?

I can imagine a humanoid robot that starts out with a basic set of rough-surface locomotion skills, then trains itself on specific fine-motor tasks based on the experience provided by teleoperators performing those tasks. When it's confident enough in its actions, the tasks can be offloaded from the teleoperators and proceed autonomously.

That's a big step from where we are now, but it's consistent with how machine learning is changing.

Update: Here's a Boston Dynamics blog post about using reinforcement learning with Spot.
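To make that teleoperation-to-autonomy handoff concrete, here's a toy sketch. The confidence proxy and the names (TaskPolicy, CONFIDENCE_THRESHOLD) are my own invention, not anything Tesla or Boston Dynamics has published:

Code:
CONFIDENCE_THRESHOLD = 0.95

class TaskPolicy:
    """Stand-in for a learned skill that improves with demonstrations."""
    def __init__(self, task: str):
        self.task = task
        self.demos = 0

    def learn_from_demo(self) -> None:
        self.demos += 1                  # imitation-learning update, abstracted away

    @property
    def confidence(self) -> float:
        # Toy proxy: confidence grows with demonstration count, capped below 1.
        return min(0.99, 1.0 - 1.0 / (1.0 + 0.1 * self.demos))

def act(policy: TaskPolicy) -> str:
    """Gate: the robot only takes over once its confidence clears the bar."""
    if policy.confidence >= CONFIDENCE_THRESHOLD:
        return "autonomous execution"
    policy.learn_from_demo()             # human performs the task; robot watches
    return "teleoperated (training)"

policy = TaskPolicy("connect_power_cable")
for _ in range(400):
    mode = act(policy)
print(policy.task, round(policy.confidence, 2), mode)

The important property is that the gate is per-task, so locomotion can go autonomous long before fine-motor work does.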
The question I'm debating is HOW QUICKLY computing systems capable of controlling humanoid robots will evolve, and based on what can be seen - not predicted, but seen - they are not yet evolving very quickly.
The neural networks being run at scale today are VERY, VERY far from being "brain-like" at the architectural level, let alone at the hardware level. There have been many attempts at brain-like physical chip architectures for neural-network computation, and every single one has fallen flat on its face, proving dramatically slower and less efficient than running on serialised processors. In-memory computation (IMC) has not fared much better in actual implementations: by the time sufficient operations are built into the IMC die to actually be useful for NNs, what you end up with is a large matrix-acceleration unit with a really terrible memory architecture that is actively hostile to cross-chip memory access and to initial data ingest (i.e. any speedups from having more local memory are more than lost to the added delays in getting data in and out).

Even at a fundamental level, brains are timing-based systems, operating asynchronously and modulating mostly in the frequency domain (pulse-train modulation). All current practical ANN implementations are synchronously clocked with value-based modulation. ANNs are almost entirely connectome-free (barring some partitioning in the case of clustered training, which is minimised as far as possible), whereas actual brains not only have a connectome but one that is strongly correlated with physical neuron position.
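A toy illustration of that last distinction (my own stand-in code, not any real chip or framework): a clocked ANN unit passes one value per tick, while a leaky integrate-and-fire neuron encodes its input in spike rate.

Code:
def ann_unit(x: float, w: float = 1.5) -> float:
    """Synchronous, value-modulated: one multiply, one nonlinearity per clock tick."""
    return max(0.0, w * x)                       # ReLU

def lif_spike_count(input_current: float, steps: int = 1000, dt: float = 1e-3,
                    tau: float = 0.02, threshold: float = 1.0) -> int:
    """Frequency-modulated: stronger input -> faster membrane charging ->
    a higher spike rate (pulse-train modulation)."""
    v, spikes = 0.0, 0
    for _ in range(steps):                       # 1 simulated second
        v += dt * (input_current - v / tau)      # leaky integration
        if v >= threshold:                       # fire and reset
            spikes += 1
            v = 0.0
    return spikes

print("ANN output:", ann_unit(0.8))
print("LIF spikes/s, weak vs strong input:",
      lif_spike_count(60.0), lif_spike_count(120.0))

Note the LIF loop carries hidden state (the membrane potential v) between time steps; that statefulness and timing sensitivity is exactly what clocked, stateless matrix hardware is not built for.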
Apr 2, 2025
Boston Dynamics' Atlas is taking on a new role—this time, behind the camera. In a collaboration with WPP, Canon, and NVIDIA, Atlas has been tested as a robotic camera operator, demonstrating how humanoid robots could assist in filmmaking. With its ability to hold heavy equipment, execute precise movements, and train in virtual environments, Atlas represents a potential shift in how robotics integrate into creative and industrial fields.
Y'all are thinking at the circuit level. CMOS circuit design is far different from wetware; it's going to be a different architecture.

Think instead at the macro level. The human brain isn't one big neural net. It's a bunch of smaller, specialized NNs cooperating: there's specialization for hearing, seeing, emotions, higher coordination functions, lower coordination functions, etc. Neuroscientists have mapped this all out over the last 100+ years. Some of those brain functions may be more like built-in logic (expert systems) than NNs.

I have yet to see this "cooperating neural net" approach taken on various systems; it's mostly just centralized NNs. The closest I've seen is training an NN to take over some expert system's function (per the article above).
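For what it's worth, the cooperating-specialists idea is easy to sketch. Everything here (the module names, the weights, the override rule) is made up purely for illustration:

Code:
from typing import Callable

# Stand-ins for small trained networks; each maps a raw sense reading to a score.
specialists: dict[str, Callable[[float], float]] = {
    "vision":  lambda x: 0.9 * x,    # e.g. obstacle likelihood
    "hearing": lambda x: 0.5 * x,    # e.g. anomaly detection
    "balance": lambda x: 1.2 * x,    # e.g. tilt estimate
}

def coordinator(inputs: dict[str, float]) -> str:
    """Higher-level module that fuses specialist outputs into one behavior.
    Per the post above, this layer could be an expert system or another NN."""
    scores = {name: net(inputs[name]) for name, net in specialists.items()}
    if scores["balance"] > 1.0:      # reflex-level override wins first
        return "recover_posture"
    if scores["vision"] > 0.5:
        return "avoid_obstacle"
    return "continue_task"

# Tilting hard while walking: the balance specialist overrides everything else.
print(coordinator({"vision": 0.2, "hearing": 0.1, "balance": 0.95}))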
https://twitter.com/GITAI_HQ/status/1907387602249957700
Quote
Revolutionizing Lunar Operations: Robots at Work 🌕🤖
Open Positions ➡️ https://job-boards.greenhouse.io/gitai
Check out our #GITAI Lunar Rover autonomously assembling lunar infrastructure and deploying solar panels in a simulated lunar environment.

https://twitter.com/GITAI_HQ/status/1905213247944372275
Here's a company/team that's claiming a different approach (how different it actually is, I can't really tell yet):
So not relevant to this thread then.
Quote from: daedalus1 on 04/13/2025 11:01 pm
So not relevant to this thread then.

Do we really need a separate robot-horse thread, though? This concept could be useful on the irregular surfaces of the Moon and Mars, for navigating where dune buggies can't. Run autonomously, it could still be used as a mule to carry payloads.
Here's a more anatomically accurate humanoid robot, one that more closely imitates the human skeletal musculature. Maybe something like this could be put in a spacesuit, to better stress-test the suit across many hours of human-like movements.
Quote from: sanman on 04/14/2025 11:59 pm
Here's a more anatomically accurate humanoid robot, one that more closely imitates the human skeletal musculature. Maybe something like this could be put in a spacesuit, to better stress-test the suit across many hours of human-like movements.

Visions of Mr. Data from Star Trek: TNG, or Cylons from Battlestar Galactica, or the Borg.