Humanoid robots have a data problem. There isn't enough to feed reinforcement learning. Many of these companies are using Nvidia Cosmos to generate that data through hundreds of millions of physics-based simulations. I can't help but wonder how that works for Clone. Modelling and simulating a robot with tens of actuators is one thing. Modelling hundreds of squishy and wobbly hydraulic 'muscles' sounds like quite another. Even if Cosmos handles it like a champ, there's presumably a higher computational overhead, making training slower and more expensive.
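The overhead point can be made concrete with a toy model. Compliant muscle-like elements are stiff dynamical systems, and explicit integrators need smaller timesteps as stiffness rises, so simulation cost grows with both actuator count and stiffness. This is a minimal sketch with invented numbers, not anything from Cosmos or Clone:

```python
# Toy model (illustrative only): why hundreds of compliant "muscles" cost
# more to simulate than tens of rigid joint actuators. Each element relaxes
# toward a commanded target; stiffer elements force smaller timesteps to
# keep explicit Euler integration stable.

def simulate(n_actuators, stiffness, duration=1.0):
    """Count per-element updates for an explicit-Euler toy simulation."""
    # Stability for x' = k*(target - x) under explicit Euler needs dt < 2/k,
    # so the step count scales with stiffness.
    steps = int(duration * stiffness)
    dt = duration / steps
    state = [0.0] * n_actuators
    for _ in range(steps):
        for i in range(n_actuators):
            state[i] += dt * stiffness * (1.0 - state[i])  # relax toward 1.0
    return steps * n_actuators  # total element updates performed

rigid = simulate(n_actuators=30, stiffness=100.0)      # ~30 joint motors
squishy = simulate(n_actuators=300, stiffness=1000.0)  # ~hundreds of stiff muscles
print(squishy / rigid)  # → 100.0: two orders of magnitude more work
```

The 10x actuator count and 10x stiffness multiply, which is the intuition behind "slower and more expensive" training for a muscle-driven design.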
Quote from: Twark_Main on 03/24/2025 03:02 pm
Quote from: Coastal Ron on 03/24/2025 02:44 pm
Quote from: Twark_Main on 03/24/2025 06:27 am
https://en.wikipedia.org/wiki/Experience_curve_effects
More things made = better things.
Well, thanks for coming up with your own answer to the question I posed to member sanman, but what you reference applies to the COST of something being produced, not the capabilities of the product. So not relevant.
You're being purposely obtuse.

So no, building more of something does not mean that something will gain new and better capabilities. In fact, you don't even have to build something in order for the design to iterate and get better - this happens all the time in engineering departments.

Also, you are assuming that the mechanical parts of a humanoid robot are able to iterate quickly, despite decades of robotic design and manufacturing pointing to the opposite.
You can have the best end effectors, but unless you have a control system that can use them to accomplish a task, they will be useless. And that is what we have NOT seen in any humanoid robot: the ability to do work that comes close to what a human can do.
Quote from: Cheapchips on 04/15/2025 10:06 pm
Humanoid robots have a data problem. There isn't enough to feed reinforcement learning. Many of these companies are using Nvidia Cosmos to generate that data through hundreds of millions of physics-based simulations. I can't help but wonder how that works for Clone. Modelling and simulating a robot with tens of actuators is one thing. Modelling hundreds of squishy and wobbly hydraulic 'muscles' sounds like quite another. Even if Cosmos handles it like a champ, there's presumably a higher computational overhead, making training slower and more expensive.

ASICs can handle it. There's a reason Broadcom stock started shooting up in the fall of 2022 alongside Nvidia. Broadcom is the biggest implementer of custom ASIC technology and has the most industrial expertise in it. Whatever you can do in software, you can hard-code in hardware through ASICs.
simple and proven
Quote from: sanman on 04/16/2025 02:06 am
Quote from: Cheapchips on 04/15/2025 10:06 pm
Humanoid robots have a data problem. There isn't enough to feed reinforcement learning. Many of these companies are using Nvidia Cosmos to generate that data through hundreds of millions of physics-based simulations. I can't help but wonder how that works for Clone. Modelling and simulating a robot with tens of actuators is one thing. Modelling hundreds of squishy and wobbly hydraulic 'muscles' sounds like quite another. Even if Cosmos handles it like a champ, there's presumably a higher computational overhead, making training slower and more expensive.
ASICs can handle it. There's a reason Broadcom stock started shooting up in the fall of 2022 alongside Nvidia. Broadcom is the biggest implementer of custom ASIC technology and has the most industrial expertise in it. Whatever you can do in software, you can hard-code in hardware through ASICs.

Tensor processing units, like the ones Broadcom make for Google, are not 'hard-coded' for AI.
Quote from: Cheapchips on 04/17/2025 11:10 am
Quote from: sanman on 04/16/2025 02:06 am
Quote from: Cheapchips on 04/15/2025 10:06 pm
Humanoid robots have a data problem. There isn't enough to feed reinforcement learning. Many of these companies are using Nvidia Cosmos to generate that data through hundreds of millions of physics-based simulations. I can't help but wonder how that works for Clone. Modelling and simulating a robot with tens of actuators is one thing. Modelling hundreds of squishy and wobbly hydraulic 'muscles' sounds like quite another. Even if Cosmos handles it like a champ, there's presumably a higher computational overhead, making training slower and more expensive.
ASICs can handle it. There's a reason Broadcom stock started shooting up in the fall of 2022 alongside Nvidia. Broadcom is the biggest implementer of custom ASIC technology and has the most industrial expertise in it. Whatever you can do in software, you can hard-code in hardware through ASICs.
Tensor processing units, like the ones Broadcom make for Google, are not 'hard-coded' for AI.

Nobody mentioned TPUs. Sanman mentioned ASICs (Application-Specific Integrated Circuits), which is a general term that does include chips "hard-coded" for AI. I'm guessing the mention of Broadcom (as an illustrative example of the recent ascendancy of ASICs) caused some confusion.
Lol. There will soon be no reason for humans on Mars. The few that are there will be looked after by humanoid robots. Much cheaper, and only a one-way trip required.
Quote from: BN on 04/17/2025 07:31 am
simple and proven

And in direct competition with a single driven front wheel and a steering linkage, to go with the two existing back wheels.
Quote from: BN on 04/17/2025 07:31 am
simple and proven

That picture assumes that China arrived first on Mars and the robot is Chinese-made.
Quote from: sanman on 05/14/2025 04:03 pm
"Freeze All Motor Functions! I said Freeze All Motor Functions!"

...Those who danced were thought to be quite insane by those who couldn't hear the music.
Quote from: Nomadd on 05/14/2025 04:45 pm
Quote from: sanman on 05/14/2025 04:03 pm
"Freeze All Motor Functions! I said Freeze All Motor Functions!"
...Those who danced were thought to be quite insane by those who couldn't hear the music.

The most recently leaked footage from the H1 incident is shedding new light on the root cause investigation. See attached.

We've all been there, haven't we?
Quote from: Twark_Main on 05/14/2025 09:01 pm
Quote from: Nomadd on 05/14/2025 04:45 pm
Quote from: sanman on 05/14/2025 04:03 pm
"Freeze All Motor Functions! I said Freeze All Motor Functions!"
...Those who danced were thought to be quite insane by those who couldn't hear the music.
The most recently leaked footage from the H1 incident is shedding new light on the root cause investigation. See attached.
We've all been there, haven't we?

wat
What is this?
1. Someone told Optimus to do a particular dance.
2. It was a preprogrammed sequence.
3. Someone was wearing sensors and the robot copied their dance move.
If it's not the first one, then it's not really significant.
Tesla aren't the first, since we've seen Unitree, Boston Dynamics, etc. do similarly complex full-body motion. What is significant is the 'how it's done' of all these recent demos.

You can't just map full-body mocap of an arbitrary action onto a robot and have it perform it. Even with inverse kinematics, physics takes over and you end up with a heap of robot on the floor.

With reinforcement learning, the mocap or animation data is fed into a simulation where the simulated robot tries to perform that action hundreds of millions of times. Natural selection from failure eventually produces a policy which can execute the moves successfully across a range of scenarios, including the natural variability in a robot's own actuation. It's not animation playback.

Tesla and others are currently keen to point out that you RL-train the action, drop the policy into the robot, and it just works.

The great white hope is that this same methodology works for complex full-body manipulation. The real killer demo is going to be a robot picking something up in one hand and opening a door with the other.
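The selection-from-failure loop described above can be sketched in miniature. This is a toy hill-climb, not Tesla's actual pipeline (real systems use large-scale RL such as PPO over millions of parallel rollouts); the reference motion, the one-parameter "policy", and the actuator-strength randomization are all invented for illustration:

```python
import random

# Toy sketch of RL-from-mocap: score a policy by how well a simulated robot
# tracks a reference motion under randomized actuator strength, and keep
# only mutations that track better. The surviving policy works across the
# whole randomization range, mirroring sim-to-real domain randomization.

random.seed(0)

REFERENCE = [0.0, 0.5, 1.0, 0.5, 0.0]   # made-up mocap target positions
STRENGTHS = [0.7, 0.85, 1.0, 1.15, 1.3]  # randomized actuator strengths

def rollout(gain, strength):
    """Track the reference with a proportional controller; return sq. error."""
    pos, error = 0.0, 0.0
    for target in REFERENCE:
        pos += strength * gain * (target - pos)  # one control step
        error += (target - pos) ** 2
    return error

def fitness(gain):
    """Average tracking error across the actuator-strength variations."""
    return sum(rollout(gain, s) for s in STRENGTHS) / len(STRENGTHS)

# Selection loop: mutate the policy, keep it only when it tracks better.
gain = 0.1
for _ in range(200):
    candidate = gain + random.gauss(0.0, 0.05)
    if fitness(candidate) < fitness(gain):
        gain = candidate

print(fitness(gain) < fitness(0.1))  # → True: the surviving policy tracks better
```

The point of averaging over `STRENGTHS` is the "natural variability in a robot's own actuation" mentioned above: a policy that only works at one exact strength gets selected against, so what survives transfers to hardware that never matches the simulator exactly.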