The "current state of the art in AI" is to keep throwing more and more compute at the problem, using techniques (bog-standard multilayer neural networks) that were dismissed as dead ends decades ago, and to keep improving in capability as a result of that increased available compute.
Turing machines (which the majority of current computers effectively are) work by being very dumb very quickly, and it's turned out that 'AI' can work very well by being very dumb in parallel, sufficient to be just as useful as our UTMs. Of course, as has happened every single time an AI technique has been adopted into production, it will be dismissed in short order as "not real/true AI", as if the sole goal of AI is to replicate human-level intelligence while ignoring the existence and utility of the huge range of other useful intelligences.
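The "very dumb very quickly" point can be made concrete with a toy sketch. A Turing machine is nothing but a lookup table applied over and over; the rule table below (a hypothetical example, any small table works the same way) increments a binary number one carry at a time:

```python
# A minimal Turing machine: the whole "computer" is a lookup table of
# (state, symbol) -> (symbol to write, head move, next state). Each step
# is trivially dumb; usefulness comes purely from doing many steps.
def run_tm(rules, tape, state, head=0, halt="HALT", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape, blank = '_'
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(head, "_")
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Rules for incrementing a binary number (head starts on the last bit):
# turn trailing 1s into 0s moving left, then the first 0 (or blank) into 1.
INC = {
    ("inc", "1"): ("0", "L", "inc"),
    ("inc", "0"): ("1", "L", "HALT"),
    ("inc", "_"): ("1", "L", "HALT"),
}

print(run_tm(INC, "1011", "inc", head=3))  # 1011 + 1 = 1100
```

Nothing in any single step "understands" binary arithmetic; the behavior lives entirely in the table plus repetition, which is the same trick that makes both conventional computers and parallel neural nets useful.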
Quote from: JohnFornaro on 12/06/2022 10:55 am
"Trust me on this" -- While you may assert that we [or you?] know the full extent of human intelligence, and the computing power and complexity of the human brain, you and we do not.

I'm simply saying that scaling up existing systems isn't going to produce human intelligence or anything like it, for the same reasons that scaling up a truck or a rocket isn't going to. ...
"Trust me on this" -- While you may assert that we [or you?] know the full extent of human intelligence, and the computing power and complexity of the human brain, you and we do not.
Lack of computing power is not what stops current (or foreseeable) AI technology from having human-level intelligence.
"Lack of computing power" is one of the things stopping "foreseeable" AI from having human-level intelligence.
Quote from: Greg Hullender on 12/03/2022 07:06 pm
Quote from: ppnl on 12/02/2022 10:19 pm
But do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging, there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....

I'm not arguing that intelligence is supernatural--just that we don't have the foggiest idea how to engineer such a thing. Nor is it reasonable to suppose that if we just make our computing systems bigger they'll somehow magically become intelligent.

I remember back in the late 80s, a company I was working at that had LOTS of PhDs thought that neural networks would solve the A.I. hardware challenge. That was also about the time when "fuzzy logic" was thought to be the next revolution in consumer appliances, for making "smart washers" and such.

Needless to say, the hype did not live up to reality, though no doubt we learned more about what we didn't know than what we did know.

Fast forward to fairly recently and we saw a similar boom and bust cycle with A.I. and its various subcategories. Though it looks like the bust cycle is not so bad with A.I., as we have found plenty of applications that can use its limited abilities.

But to your point, it does not yet appear that we understand how to make truly intelligent artificial intelligence, as opposed to smart tools.

So from that standpoint, of course "smart tools" like A.I. can be used for space applications. But I don't think they will be able to solve the challenges holding us back from expanding humanity out into space.
Quote from: ppnl on 12/02/2022 10:19 pm
But do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging, there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....

I'm not arguing that intelligence is supernatural--just that we don't have the foggiest idea how to engineer such a thing. Nor is it reasonable to suppose that if we just make our computing systems bigger they'll somehow magically become intelligent.
But do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging, there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....
I think there is some serious fuzziness in how people are thinking about this. Let me try to clarify.

First, universal Turing machines are just that: universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed, any problem that one can solve any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature, they should be able to simulate it efficiently in the big-O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church-Turing thesis there is no way around this conclusion. People are free to reject the CT thesis, but they should say so up front to avoid confusion.

So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a handful of orders of magnitude. That is challenging but is the easy part. After that, all that is left is the software and algorithms. That may be the hard part.

Now, does anyone disagree with the truth and clarity of the above?
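The "emulate the function of a neuron" claim can at least be illustrated in a few lines. Below is the textbook leaky integrate-and-fire model, a deliberate toy: real neurons are enormously richer, and the parameter values here are arbitrary illustration, not biology.

```python
# A leaky integrate-and-fire neuron: a crude stand-in for the claim that
# neuron *function* can be emulated by ordinary code on a digital machine.
def simulate_lif(inputs, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # membrane potential leaks toward rest and integrates input current
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: spike, then reset
            spikes.append(t)
            v = v_reset
    return spikes

# Constant drive produces a regular spike train; stronger drive spikes faster.
weak   = simulate_lif([0.12] * 100)
strong = simulate_lif([0.30] * 100)
print(len(weak), len(strong))
```

This is exactly the "no new physics, just engineering" point: the hard part is not that a neuron's input-output behavior resists digital emulation, it is the scale (10^11 of these, densely connected) and knowing which abstraction level suffices.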
Quote from: ppnl on 12/06/2022 09:41 pm
I think there is some serious fuzziness in how people are thinking about this. Let me try to clarify. First, universal Turing machines are just that. Universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed any problem that one can solve any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature they should be able to simulate efficiently in the big O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church-Turing thesis there is no way around this conclusion. People are free to reject the CT thesis but they should say so up front to avoid confusion.

So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a handful of orders of magnitude. That is challenging but is the easy part. After that all that is left is the software and algorithms. That may be the hard part.

Now, does anyone disagree with the truth and clarity of the above?

A Turing machine is a clocked digital system. Digital computers are a subset of all possible computers, so a UTM cannot simulate all possible computers. The two notable extensions are analog computers and non-clocked digital systems. Biological systems do not appear to be clocked, and they appear to incorporate analog components. It is not clear that a UTM can simulate a system with analog components in polynomial time.

I personally believe that "intelligence" will end up getting implemented in "traditional" computers (i.e., clocked digital logic). By "intelligence" I mean a system that passes an extended version of the Turing test. However, if this does not happen, you don't need to invoke quantum theory. You can add analog elements instead.
I came across this interesting article about how AI is being used to conjure up completely new proteins very quickly:

https://www.nature.com/articles/d41586-022-02947-7

It occurred to me that this could enable all sorts of designer organisms for ISRU purposes, and maybe even terraforming. Perhaps we could have organisms designed to survive the Martian day/night cycle, which would come alive during the day to perform useful conversion of natural resources, like through Sabatier or whatever.

Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?
Tell me more, with specifics on the engineering involved.
I'm pretty sure ChatGPT could pass the Turing test if implemented and judged by most people. The most obviously machine-like aspect of ChatGPT is that the responses are much faster than a human's.

ChatGPT simulates a dumb human (but one really good at English class assignments) really well.
Quote from: DanClemmensen on 12/06/2022 10:00 pm
Quote from: ppnl on 12/06/2022 09:41 pm
I think there is some serious fuzziness in how people are thinking about this. Let me try to clarify. First, universal Turing machines are just that. Universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed any problem that one can solve any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature they should be able to simulate efficiently in the big O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church-Turing thesis there is no way around this conclusion. People are free to reject the CT thesis but they should say so up front to avoid confusion.

So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a handful of orders of magnitude. That is challenging but is the easy part. After that all that is left is the software and algorithms. That may be the hard part.

Now, does anyone disagree with the truth and clarity of the above?

A Turing machine is a clocked digital system. Digital computers are a subset of all possible computers, so a UTM cannot simulate all possible computers. The two notable extensions are analog computers and non-clocked digital systems. Biological systems do not appear to be clocked, and they appear to incorporate analog components. It is not clear that a UTM can simulate a system with analog components in polynomial time.

I personally believe that "intelligence" will end up getting implemented in "traditional" computers (i.e., clocked digital logic). By "intelligence" I mean a system that passes an extended version of the Turing test. However, if this does not happen, you don't need to invoke quantum theory. You can add analog elements instead.

Asynchronous digital systems have the same computational power as clocked digital systems. This becomes obvious when you consider that they both use the same universal logic gates and the same Boolean algebra. Any asynchronous digital circuit can immediately be implemented as a clocked digital circuit, or simply programmed as a computer program.

Clock signals are really just a kludge to solve some engineering problems with asynchronous circuits. The problem is that the data paths from input to output may be of very different lengths. This creates race conditions, and for a large circuit the output may never be valid. A timing signal can latch the output to a valid state until the next state is valid. A few decades ago they started experimenting with asynchronous sections in microprocessors to make them a little faster and use less energy. I think all modern processors have asynchronous sections. But it is just an engineering kludge that gets a little more performance.

The problem with analog computers is they really suck. There is no way to control the noise, so the complexity of the calculations is severely limited. As a practical matter you will never calculate pi to a thousand digits, for example. And a digital computer can do anything they can do, faster and better. Floating-point arithmetic is a thing on digital computers, and it can have as many digits of accuracy as you want. I don't know if any analog computer is still in use anywhere in the world today. They are obsolete exactly because a digital computer can do everything that they can do.

Quantum computers arguably could offer an exponential speedup on certain limited types of computations. But it is hard to see how a warm wet brain could be a quantum computer.

I expect algorithmic progress decade on decade, so that in 70 or so years we will look back on today the same way that we today look back on the invention of the transistor.
Future computers used in AI may not look much like our computers, but they will be universal Turing machines.
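The "same gates, same Boolean algebra" argument above can be demonstrated in miniature: a combinational circuit, clocked or not, is just a boolean expression, so any gate network can be evaluated as an ordinary program. Here is a standard 9-gate full adder built only from NAND (the classic universal gate), checked exhaustively against integer arithmetic:

```python
# A combinational circuit is just boolean algebra, so a gate network
# (asynchronous or clocked) can be run as a plain program.
def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    # the standard 9-NAND full adder
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    s1 = nand(t2, t3)       # s1 = a XOR b
    t4 = nand(s1, cin)
    t5 = nand(s1, t4)
    t6 = nand(cin, t4)
    total = nand(t5, t6)    # sum = (a XOR b) XOR cin
    carry = nand(t1, t4)    # carry = ab OR (a XOR b)cin
    return total, carry

# Exhaustive check against integer addition for all 8 input combinations:
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
print("full adder matches arithmetic for all 8 input cases")
```

In real hardware the nine gates settle at different times (the race-condition problem the post describes, which clocking papers over); evaluated as sequential code, each intermediate value is simply computed before it is used, so the clock is unnecessary. That is the equivalence in a nutshell.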
Quote from: ppnl on 12/06/2022 09:41 pm
So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a handful of orders of magnitude. That is challenging but is the easy part.
Interact with chat.OpenAI.com for a few hours, see what others are doing, and then tell me how confident you are that a sufficiently large model, with tweaks in the next few years or decades, couldn’t approximate the kind of general intelligence that humans and animals exhibit.
I was not attempting to claim that the alternatives (async digital and analog) are useful. I'm just pointing out that the flat assertion that a UTM can do any computing task is not strictly true in theory. Furthermore, biological intelligence appears to use both. Note that async digital is continuous in the time domain, so it is basically analog in the time domain.
Quote from: ppnl on 12/06/2022 09:41 pm
So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a handful of orders of magnitude. That is challenging but is the easy part.

If you're talking about simulating a brain on the atomic level, then you need a lot more than a "handful of orders of magnitude," unless your hands are unusually large. :-) It's not clear to me that we could set up such a simulation anyway--simulating single protein molecules is challenging at the moment.
If you're imagining that we understand the brain well enough to set up a simulation of it at something other than the atomic level, I think you're seriously misinformed.
I'm loath to draw any distinction between 'real' intelligence and a tottering pile of hacks...
Now, does anyone disagree with the truth and clarity of the above?
But it is hard to see how a warm wet brain could be a quantum computer.
How many light bulbs does it take to screw in a chicken?