Garbage in, garbage out. The low-resolution images are being made to look like high-resolution images; actual new information is not being created. I admit that the better-looking image might be easier to interpret and can be a help, but the information simply isn't there. Scientific uses would be a lot more limited than uses in other fields (like movie CGI). The people who want to sell these kinds of things will claim they can do things they cannot really do.
Also, keep in mind that none of this is artificial intelligence, any more than solving integrals, playing chess, natural language translation or image identification was. The tendency for AI investigators is to identify something that they find difficult to do as a benchmark of intelligence, develop an "AI" system that accomplishes the task, and then realize that the product is performing with no intelligence at all. So they go back to their terminals, move the goalposts and try again. In the meantime, entrepreneurs productize these idiot savant creations and the popular press announces the imminent Rise of the Machines, again. This has been going on since the 1950s, at least.
AI is, fundamentally, a marketing term. It has no consistent technical definition.
I once heard a tongue-in-cheek suggestion of "any program which contains at least one branching instruction," and based on real-world corporate usage that definition seems about right actually...
It occurred to me that this could enable all sorts of designer bugs for ISRU purposes, and maybe even terraforming....Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?
The current iteration of AI is a deep learning system that, instead of being explicitly programmed to do a specific task, learns how to do it by being fed enormous amounts of training data, guessing at what the right answer is, and having the guess evaluated. The quality of the current guess is used to adjust how future guesses are made until they are consistently good enough. Then, whenever new data is presented, the deep learning system will make a guess based on its experience with the training data sets. This is the principle behind the current generation of "AI" systems, minus the mathematics and implementation details.
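To make that guess-evaluate-adjust loop concrete, here is a minimal sketch in Python with NumPy. The data and the single-layer "model" are made up for illustration; real systems use far larger networks and training sets, but the loop has the same shape.
Code:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))                                     # toy training inputs
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=256)   # the "right answers"

w = np.zeros(3)      # the model's adjustable parameters
lr = 0.1             # how strongly each evaluation adjusts future guesses

for epoch in range(200):
    guess = X @ w                 # guess the answer for every training example
    error = guess - y             # evaluate the guesses against the known answers
    grad = X.T @ error / len(y)   # direction that would reduce the error
    w -= lr * grad                # adjust so future guesses are better

new_data = rng.normal(size=(1, 3))
print("learned parameters:", w)
print("guess for unseen data:", new_data @ w)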
Well, there also appear to be more recently explored approaches like Generative Adversarial Networks, which pit a generator network against a discriminator in a Darwinian-style competition, rapidly iterating to create new content.
As for classifiers, for a lot of routine Earthly applications, that's all you need. If your AI is looking through satellite imagery to spot a tank on the ground, then as long as it has already been trained by looking at enough tanks, it should be able to meet the need. If it's looking for debris of a crashed space probe on the surface of the Moon, then it would have had to train by looking at enough debris fields.
If you're trying to upscale a person's face from low-res to high-res, then having trained on a bunch of faces won't be enough to know if a mole should be visible in the high-res image that wasn't apparent in the low-res one. But for generalizations, it should be okay to interpolate/extrapolate. Rather than creating new information, you're "transferring" (inferring) it from the training data.
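To make the interpolation point concrete, here is a tiny sketch using SciPy's zoom: upscaling produces more pixels, but every new pixel is computed from the ones that were already there. A learned super-resolver swaps the fixed interpolation rule for one inferred from training images (the "transfer" described above), but it still cannot recover a detail the low-res image never contained. The 2x2 array here is just a stand-in for an image.
Code:
import numpy as np
from scipy.ndimage import zoom

low_res = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
high_res = zoom(low_res, 4, order=1)   # 2x2 -> 8x8 via linear interpolation
print(high_res.shape)                  # (8, 8): more pixels, no new information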
The joke-that's-not-a-joke within the field is that "AI is the set of all software problems we can't solve yet." Once we solve a given AI problem (e.g., defeating a chess grandmaster), it immediately gets its own specialized name ("two-player game playing") and thus it no longer falls under "AI." A nice catch-22!
Quote from: Twark_Main on 09/26/2022 11:52 pmAI is, fundamentally, a marketing term. It has no consistent technical definition.I agree.Quote I once heard a tongue-in-cheek suggestion of "any program which contains at least one branching instruction," and based on real-world corporate usage that definition seems about right actually... I bet the max(0, x) in deep neural network ReLU units is typically computed using instructions for max rather than branches. If that's true then deep neural networks don't actually include any branches in the core code. So modern AI code may actually execute fewer branches per second than many non-AI tasks such as sorting.
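For what it's worth, the point is easy to see at the array level: ReLU is just an elementwise max, with no per-element if/else anywhere in the code. Whether the compiled library underneath uses vector max instructions or branches depends on the implementation, so take this as an illustration of the idea rather than proof about the machine code.
Code:
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(0.0, x)   # elementwise max(0, x): the whole ReLU, no branching in this code
print(relu)                 # [0.  0.  0.  1.5 3. ]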
Autonomous onboard navigation...
Quote from: sanman on 09/19/2022 12:03 amIt occurred to me that this could enable all sorts of designer bugs for ISRU purposes, and maybe even terraforming....Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?As far as I know, a human team would be unable to create "designer bugs," so AI can't do it either. There is no magic here.
Quote from: jimvela on 11/15/2022 03:32 pmAutonomous onboard navigation......Adding an "Autonomous" level is an additional level of complexity beyond that. Hence, first get the basics right, and then we can talk about AI additions.
Isn't this exactly the kind of stuff a genetic evolutionary approach could tackle? To try to synthesize viable biochemical paths working in the Martian environment? Btw, there've been things like genetically evolved antennas done by NASA: https://en.wikipedia.org/wiki/Evolved_antenna (As a side note - as you've been in this field for a long time - do genetic evolution algos usually belong under the AI moniker?)
Quote from: JayWee on 11/15/2022 08:13 pmIsn't this exactly the kind of stuff a genetic evolutionary approach could tackle? To try to synthesize viable biochemical paths working in the Martian environment? Btw, there've been things like genetically evolved antennas done by NASA: https://en.wikipedia.org/wiki/Evolved_antenna (As a side note - as you've been in this field for a long time - do genetic evolution algos usually belong under the AI moniker?)Yes, evolutionary algorithms are definitely part of "AI". And, when you can do a generation every six seconds or so (give or take), you at least have a chance of getting a useful result from them. The question, though, is whether there's actually a simpler, cheaper way to get that same result (or better). I'm not familiar with the evolved antenna, but I suspect that some other, less fancy search algorithm would work at least as well with less complexity. In other words, evolutionary algorithms always appear to be a solution in search of a problem, and the places that actually use them seem to have viewed using that algorithm as a goal ("Yes! We used evolutionary algorithms!") rather than trying to find the best solution to the underlying problem. I should confess here that part of my negativity comes from a particular group I had the misfortune of interacting with when I worked at Amazon. They kept coming up with "solutions" for hard problems in different areas, but their software almost never worked very well--it was generally way too slow and nowhere near as accurate as they claimed it to be. However, they had friends in high places, and they consistently tried to use politics to force people to adopt their software when they couldn't win on the merits. Obviously, it's not fair to blame everyone working on EA for what one group of people did. However, gripes against evolutionary algorithms are common in the field. Go to just about any ML conference and ask some of the experts their opinions on them. I'm far from the only person who's down on them. As for applying them to biological evolution on Mars, the fact that the algorithms are inspired by evolution doesn't mean they're really all that well suited to solving problems involving the real thing. In this case, the challenge is that a) I don't think we can engineer new organisms all that accurately in the first place and b) I suspect we can't really model the Martian environment all that well either. We might get there someday, of course, but even then I suspect there will be better approaches.
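For readers who haven't met these before, here is a minimal sketch of the select/crossover/mutate loop being discussed. The bit-string "design" and its fitness function are purely hypothetical stand-ins; a real application (like the evolved antenna) would score each candidate in a simulator instead, which is where the time goes.
Code:
import random

random.seed(0)
TARGET = [1] * 32                          # hypothetical "ideal" design, encoded as bits
POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 60, 0.02

def fitness(individual):
    # Stand-in scoring function; a real application would run a simulation here.
    return sum(bit == goal for bit, goal in zip(individual, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                   # selection: keep the fitter half
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]                          # single-point crossover
        child = [bit ^ (random.random() < MUTATION_RATE)   # occasional mutation
                 for bit in child]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", len(TARGET))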
"If it's looking for debris of a crashed space probe on the surface of the Moon, then it would have had to train by looking at enough debris fields. "Not necessarily. I many cases you compare your new image with a pre-impact image. Example: the Chang'e 1 orbiter impact... not present in an Apollo 16 panoramic camera image but present in LRO images. Almost any future impact site will be findable that way. Older impacts like (for instance) the Apollo 16 LM ascent stage are much trickier. But if you think AI can do it better than a human, think again.
Quote from: leovinus on 11/15/2022 05:31 pmQuote from: jimvela on 11/15/2022 03:32 pmAutonomous onboard navigation......Adding an "Autonomous" level is additional level of complexity beyond that. Hence, first get the basics right and then we can talk about AI additions Well, the Mars rovers do have some autonomy. They do absolutely need it - you can't teleoperate them directly from Earth. I absolutely expect Tesla's FSD-derived autonomy on Mars/Moon one day.
Somewhere in the early 90s, during the genetic search period, we had a few "beer heavy" discussions when we saw Kangaroo search in a newsgroup. Was it sci.physics or something? Not sure. In a nutshell, there are various ways to find a best solution for your models, and dropping kangaroos on Earth to find the highest point is one example solution to one optimization problem. There is a readable discussion here at http://processcontrol4dummies.blogspot.com/2013/06/optimization-technique-kangaroo-analogy.html Enjoy.
Quote from: Phil Stooke on 11/15/2022 08:27 pm"If it's looking for debris of a crashed space probe on the surface of the Moon, then it would have had to train by looking at enough debris fields. "Not necessarily. In many cases you compare your new image with a pre-impact image. Example: the Chang'e 1 orbiter impact... not present in an Apollo 16 panoramic camera image but present in LRO images. Almost any future impact site will be findable that way. Older impacts like (for instance) the Apollo 16 LM ascent stage are much trickier. But if you think AI can do it better than a human, think again.In other words, looking for a "change" or "deviation" in a static landscape is almost a binary classification problem, i.e. easier than collecting millions of "debris" pictures for a fancy classifier. As always, problem formulation is key. Plus a well-chosen algorithmic solution.
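A minimal sketch of that before/after framing, assuming two co-registered grayscale images of the same site held as NumPy arrays. The images and the threshold are made up for illustration; real change detection against LRO imagery has to be far more careful about registration, lighting, and noise.
Code:
import numpy as np

rng = np.random.default_rng(0)
before = rng.random((512, 512))            # placeholder for the pre-impact image
after = before.copy()
after[300:305, 200:206] += 0.8             # hypothetical fresh crater brightens a small patch

diff = np.abs(after - before)
mask = diff > 0.5                          # flag pixels that changed significantly
ys, xs = np.nonzero(mask)
print("changed pixels:", int(mask.sum()), "centered near row", int(ys.mean()), "col", int(xs.mean()))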
Wasn't there some case of NASA designing a communication antenna using machine learning? How did that work?
Others here have provided good answers. One aspect of machine learning, whether via neural networks or genetic algorithms, is that for either, you need a high-fidelity simulation of reality in which to test/train, or to run trials via tournament competitions, in order to be able to rank and compare which alternative to proceed with (or to adjust weights in the neural net). My information is old - from around 1990 - but I worked with both. One solution my colleague had success with, using genetic algorithms, was the design of parameters for a fuzzy controller. The genetic algorithm did a better job of finding precise results when the resolution of the controller was more coarse than humans provided - they tried to get closer and closer until the controller could no longer step in small enough steps to get better. The genetic algorithm moved the control outside the close range, and was then able to take larger steps to get closer. It was a solution human engineers hadn't considered and was measurably better. Chances are that once the genetic algorithm taught the humans a better approach, they no longer needed the algorithm.
Just an anecdote.
Haha, unless the genetic algorithm is able to find even better tricks later on.
It is worth noting that we and our superpower evolved.
Quote from: ppnl on 11/22/2022 04:57 pmIt is worth noting that we and our superpower evolved.Indeed. Via real evolution, which had quadrillions of individuals evolving in parallel for billions of years. (Or millions for millions of years, if you just want to count hominids.) No existing GA system has anything remotely comparable.
But I think it is clear that GAs will never directly lead to human level intelligence. For that you will probably need neural nets. GAs might be useful for developing those neural nets.
Quote from: sanman on 11/22/2022 07:34 amHaha, unless the genetic algorithm is able to find even better tricks later on.Despite the name, a genetic algorithm can't really find "new tricks" in the way you're probably thinking.
Quote from: Greg Hullender on 11/22/2022 03:54 pmQuote from: sanman on 11/22/2022 07:34 amHaha, unless the genetic algorithm is able to find even better tricks later on.Despite the name, a genetic algorithm can't really find "new tricks" in the way you're probably thinking. Check this story out: https://hackaday.com/2018/11/12/how-to-evolve-a-radio/
Sensor evolution research typically uses evolutionary algorithms (EAs) to generate sensors that near optimally satisfy large numbers of constraints. This is qualitatively different from the phylogenetic process found in nature that has resulted, for example, in the mammalian auditory ossicles evolving from the jaw bones of amphibians and reptiles, that in turn had previously acted as gill arches in fish. This paper describes an evolvable hardware experiment that resulted in a network of transistors sensing and utilising the radio waves emanating from nearby PCs. We argue that this evolved ‘radio’ is only the second device ever whose sensors were constructed in a way that in key aspects is analogous to that found in nature. We highlight the advantages and disadvantages of this approach and show why it is practically impossible to implement a similar process in simulation.
Quote from: ppnl on 11/22/2022 07:28 pmBut I think it is clear that GAs will never directly lead to human level intelligence. For that you will probably need neural nets. GAs might be useful for developing those neural nets.I think you're being misled by the term "neural nets." Neural nets are an excellent tool, but I suspect you imagine they do something rather different from what they really do. They're probably best described as "loosely inspired by an obsolete understanding of how neurons in the brain work." What they really are is a method to try to find a function that describes a particular set of inputs.
To clarify what I mean by that, consider an expression like y = f(x). If we know that f(x) = x^2 and I give you x and ask you for y, that's function evaluation, and the answer is x^2. If I give you y and ask you for x, that's function inversion, and the answer is sqrt(y) (or the negative of that). But neural nets are designed to solve the problem when I give you lots of (x,y) pairs and ask you to find f. In general, x and y are both vectors. (Neural nets are not the only way to try to do this, by the way.) I think you can see how this could be very useful, but it should also be clear that it has no tangible connection to human intelligence.
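Here is a minimal sketch of that "find f from (x, y) pairs" idea: a tiny one-hidden-layer network fit to samples of f(x) = x^2 using plain NumPy. The network size, learning rate, and step count are arbitrary toy choices for illustration, not a recipe.
Code:
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = x ** 2                                   # the unknown f we want the network to recover

W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
lr = 0.01

for step in range(5000):
    h = np.maximum(0.0, x @ W1 + b1)         # hidden layer with ReLU
    pred = h @ W2 + b2                       # the network's current guess at f(x)
    err = pred - y
    # backpropagate the error to adjust the weights
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

test = np.array([[1.5]])
approx = (np.maximum(0.0, test @ W1 + b1) @ W2 + b2).item()
print("f(1.5) is 2.25; the fitted network gives", round(approx, 3))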
Well yes but what is the alternative?
Quote from: ppnl on 11/23/2022 12:42 amWell yes but what is the alternative? Remember that we're talking about space applications here. For that purpose, there are lots of alternatives. I'm afraid a deep discussion over how the brain really works is off-topic.
Yeah, probably off topic a bit. But we would have to agree on what could be achieved with AI in order to decide on how it could be used in space. Maybe someday AIs will be discussing how we can be used in space applications. I live in fear of their conclusion.
Quote from: ppnl on 11/23/2022 12:56 amYeah, probably off topic a bit. But we would have to agree on what could be achieved with AI in order to decide on how it could be used in space. Maybe someday AIs will be discussing how we can be used in space applications. I live in fear of their conclusion.Nothing based on any existing technology has any hope of ever evolving into something like what you're imagining. An effort to develop human intelligence using current AI technology is like trying to get to the moon using ancient Babylonian technology (i.e. building a tower tall enough to reach the moon). You simply cannot scale it up that far. Nowhere near.
Quote from: Greg Hullender on 11/23/2022 01:18 amNothing based on any existing technology has any hope of ever evolving into something like what you're imagining. An effort to develop human intelligence using current AI technology is like trying to get to the moon using ancient Babylonian technology...The Babylonian technology that matters is agriculture. That lets you expand the population, so you have enough one-in-a-million geniuses (or a million non-geniuses) that you can refine copper, steel, silicon, ... . Five millennia later you're on the Moon. Time scales may vary.
Nothing based on any existing technology has any hope of ever evolving into something like what you're imagining. An effort to develop human intelligence using current AI technology is like trying to get to the moon using ancient Babylonian technology...
Whatever the disadvantages... machines don't automatically need food, water, and oxygen. But how can we bootstrap such applications here on Earth, in order to eventually extend them into space?
Quote from: Greg Hullender on 11/23/2022 01:18 am. . . An effort to develop human intelligence using current AI technology is like trying to get to the moon using ancient Babylonian technology (i.e. building a tower tall enough to reach the moon). You simply cannot scale it up that far. Nowhere near.Until maybe 5-10 years ago I'd have agreed, but the recent 'Deep Learning' boom has shown that techniques long disregarded as woefully inefficient and inelegant can be perfectly adequate at performing real-world tasks if you can throw enough hardware at the problem. Human consciousness arises from a massively parallel array of incredibly stupid individual neurons working kinda-sorta in tandem and sometimes producing the 'correct' result, so I rather suspect any human-like AI will arise from a similarly messy pile of hacks and excessive use of compute resources rather than some elegant new simulation technique.
. . . An effort to develop human intelligence using current AI technology is like trying to get to the moon using ancient Babylonian technology (i.e. building a tower tall enough to reach the moon). You simply cannot scale it up that far. Nowhere near.
Where I disagree is with the idea that intelligence will "arise" from a sufficiently large system. I see that as no more likely than new life arising because a kid mixed all the chemicals in his chemistry set together. I know the notion of "emergent behavior" in software systems is popular with a lot of people, but I pretty much reject it outright.
Quote from: sanman on 12/01/2022 05:51 amWhatever the disadvantages l... machines don't automatically need food, water, and oxygen.But how can we bootstrap such applications here on Earth, in order to eventually extend them into space?Pretty sure they "automatically" need electricity. As to "bootstrapping", that implies sentience.
Quote from: Greg Hullender on 12/02/2022 03:01 pmWhere I disagree is with the idea that intelligence will "arise" from a sufficiently large system. But do you have an argument for such an opinion?
Quote from: JohnFornaro on 12/01/2022 10:40 amQuote from: sanman on 12/01/2022 05:51 amWhatever the disadvantages l... machines don't automatically need food, water, and oxygen.But how can we bootstrap such applications here on Earth, in order to eventually extend them into space?Pretty sure they "automatically" need electricity.Sure they need electricity, and there's plenty to be had ...
Quote from: sanman on 12/01/2022 05:51 amWhatever the disadvantages l... machines don't automatically need food, water, and oxygen.But how can we bootstrap such applications here on Earth, in order to eventually extend them into space?Pretty sure they "automatically" need electricity.
But wait. There's also plenty of "food, water, and oxygen" to be had. What point were you making?
But do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging, there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem. The transistor was invented in 1947. It was around the size of a matchbox. In the span of one lifetime we have learned how to put over five billion of them on a little chip of silicon. Before 1947 such a thing would have consumed a large fraction of the electrical output of the entire US and the waste heat would have torched an entire city. I have a dozen of these things operating in my house now. I see no reason that A.I. cannot proceed similarly. A.I. will be embedded in just about every piece of technology we produce. Asking how it can be used in space is like asking how transistors can be used in space.
Quote from: ppnl on 12/02/2022 10:19 pmBut do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....I'm not arguing that intelligence is supernatural--just that we don't have the foggiest idea how to engineer such a thing. Nor is it reasonable to suppose that if we just make our computing systems bigger they'll somehow magically become intelligent.
But do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....
Trust me on this--lack of computing power is not what stops current (or foreseeable) AI technology from having human-level intelligence.
It's easy to say it's an engineering problem, but it's a problem where it's clear that something critical is missing from our current systems, and we have no idea what that is.
"Trust me on this" -- While you may assert that we [or you?] know the full extent of human intelligence, and the computing power and complexity of the human brain, you and we do not.
I don't believe you have to know "the full extent of human intelligence" to see this. You just need a broad understanding of the current state of the art in AI.
The "current state of the art in AI" is to continue to throw more and more compute at the problem, using techniques (bog standard MLNNs) that were dismissed as dead ends decades ago, and to continue to improve in capability as a result of that increased available compute.
Turing Machines (the majority of current computers) work by being very dumb very quickly, and it's turned out that 'AI' can work very well by being very dumb in parallel, sufficient to be just as useful as our UTMs. Of course, as has happened every single time an AI technique has been adopted into production, it will be dismissed in short order as "not real/true AI", as if the sole goal of AI is to replicate human-level intelligence and ignore the existence and utility of the huge range of other useful intelligences.
Quote from: JohnFornaro on 12/06/2022 10:55 am"Trust me on this" -- While you may assert that we [or you?] know the full extent of human intelligence, and the computing power and complexity of the human brain, you and we do not. I'm simply saying that scaling up existing systems isn't going to produce human intelligence or anything like it, for the same reasons that scaling up a truck or a rocket isn't going to. ...
Lack of computing power is not what stops current (or foreseeable) AI technology from having human-level intelligence.
"Lack of computing power" is one of the things stopping "forseeable" AI from having human level intelligence.
Quote from: Greg Hullender on 12/03/2022 07:06 pmQuote from: ppnl on 12/02/2022 10:19 pmBut do you have an argument for such an opinion? A neuron is just a physical object that obeys physical laws. The Church-Turing thesis suggests that a computer program should be able to emulate the function of a neuron. While emulating a hundred billion neurons with a thousand trillion interconnections is challenging there is no new physics here as far as we can tell. Therefore it seems to be mostly an engineering problem....I'm not arguing that intelligence is supernatural--just that we don't have the foggiest idea how to engineer such a thing. Nor is it reasonable to suppose that if we just make our computing systems bigger they'll somehow magically become intelligent.I remember back in the late '80s a company I was working at, which had LOTS of PhDs, thought that neural networks would solve the A.I. hardware challenge. That was also about the time when "fuzzy logic" was thought to be the next revolution in consumer appliances, for making "smart washers" and such. Needless to say, the hype did not live up to reality, though no doubt we learned more about what we didn't know than what we did know. Fast forward to fairly recently and we saw a similar boom-and-bust cycle with A.I. and its various subcategories. Though it looks like the bust cycle is not so bad with A.I., as we have found plenty of applications that can use its limited abilities. But to your point, it does not yet appear that we understand how to make truly intelligent artificial intelligence, as opposed to smart tools. So from that standpoint, of course "smart tools" like A.I. can be used for space applications. But I don't think they will be able to solve the challenges we have holding us back from expanding humanity out into space.
I think there is some serious fuzziness in how people are thinking about this. Let me try to clarify. First, universal Turing machines are just that. Universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed, any problem that one can solve, any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature, they should be able to simulate it efficiently in the big O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church-Turing thesis, there is no way around this conclusion. People are free to reject the CT thesis, but they should say so up front to avoid confusion. So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a hand full of orders of magnitude. That is challenging but is the easy part. After that, all that is left is the software and algorithms. That may be the hard part. Now, does anyone disagree with the truth and clarity of the above?
Quote from: ppnl on 12/06/2022 09:41 pmI think there is some serious fuzziness in how people are thinking about this. Let me try to clarify. First, universal Turing machines are just that. Universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed any problem that one can solve any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature they should be able to simulate efficiently in the big O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church/Turing thesis there is no way around this conclusion. People are free to reject the CT thesis but they should say so up front to avoid confusion.So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a hand full of orders of magnitude. That is challenging but is the easy part. After that all that is left is the software and algorithms. That may be the hard part.Now, does anyone disagree with the truth and clarity of the above?A Turing machine is a clocked digital system. Digital computers are a subset of all possible computers, so a UTM cannot simulate all possible computers. The two notable extensions are analog computers and non-clocked digital systems. Biological systems do not appear to be clocked, and they appear to incorporate analog components. It is not clear that a UTM can simulate a system with analog components in polynomial time.I personally believe that "intelligence" will end up getting implemented in "traditional" computers (i.e., clocked digital logic). By "intelligence" I mean a system that passes an extended version of the Turing test. However, if this does not happen, you don't need to invoke quantum theory. You can add analog elements instead.
I came across this interesting article about how AI is being used to conjure up completely new proteins very quickly: https://www.nature.com/articles/d41586-022-02947-7 It occurred to me that this could enable all sorts of designer bugs/organisms for ISRU purposes, and maybe even terraforming. Perhaps we could have bugs/organisms designed to survive the Martian day/night cycle, which would come alive during the day, to perform useful conversion of natural resources, like through Sabatier or whatever. Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?
Tell me more, with specifics on the engineering involved.
I’m pretty sure ChatGPT could pass the Turing test if implemented and judged by most people. The most obviously machine aspect of ChatGPT is that the responses are much faster than a human.ChatGPT simulates a dumb human (but really good at English class assignments) really well.
Quote from: DanClemmensen on 12/06/2022 10:00 pmQuote from: ppnl on 12/06/2022 09:41 pmI think there is some serious fuzziness in how people are thinking about this. Let me try to clarify. First, universal Turing machines are just that. Universal. That means all computers are the same to within a polynomial time complexity. Neglecting memory size and processing speed any problem that one can solve any other can solve. They should also be able to simulate any object or process that exists in the universe. As long as the process isn't quantum in nature they should be able to simulate efficiently in the big O time complexity sense. That means we should be able to simulate brains. Short of trashing the Church/Turing thesis there is no way around this conclusion. People are free to reject the CT thesis but they should say so up front to avoid confusion.So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a hand full of orders of magnitude. That is challenging but is the easy part. After that all that is left is the software and algorithms. That may be the hard part.Now, does anyone disagree with the truth and clarity of the above?A Turing machine is a clocked digital system. Digital computers are a subset of all possible computers, so a UTM cannot simulate all possible computers. The two notable extensions are analog computers and non-clocked digital systems. Biological systems do not appear to be clocked, and they appear to incorporate analog components. It is not clear that a UTM can simulate a system with analog components in polynomial time.I personally believe that "intelligence" will end up getting implemented in "traditional" computers (i.e., clocked digital logic). By "intelligence" I mean a system that passes an extended version of the Turing test. However, if this does not happen, you don't need to invoke quantum theory. You can add analog elements instead.Asynchronous digital systems have the same computational power as clocked digital systems. This becomes obvious when you consider that they both use the same universal logic gates and the same Boolean algebra. Any asynchronous digital circuit can immediately be implemented as a clocked digital circuit or simply programmed as a computer program. Clock signals are really just a kludge to solve some engineering problems with asynchronous circuits. The problem is the data paths from input to output may be of very different lengths. This creates race conditions, and for a large circuit the output may never be valid. A timing signal can latch the output to a valid state until the next state is valid. A few decades ago they started experimenting with asynchronous sections in microprocessors to make them a little faster and use less energy. I think all modern processors have asynchronous sections. But it is just an engineering kludge that gets a little more performance. The problem with analog computers is they really suck. There is no way to control the noise, so the complexity of the calculations is severely limited. As a practical matter, you will never calculate Pi to a thousand digits, for example. And a digital computer can do anything that they can do faster and better. Floating point arithmetic is a thing on digital computers and they can have as many digits of accuracy as you want. I don't know if any analog computer is still in use anywhere in the world today.
They are obsolete exactly because a digital computer can do everything that they can do. Quantum computers arguably could offer an exponential speedup on certain limited types of computations. But it is hard to see how a warm wet brain could be a quantum computer. I expect algorithmic progress decade on decade, so that in 70 or so years we will look back on today the same way that we today look back on the invention of the transistor. Future computers used in AI may not look much like our computers but they will be universal Turing machines.
So what do we need to simulate a brain? It is possible that we will need an increase in computing power of a hand full of orders of magnitude. That is challenging but is the easy part.
Interact with chat.OpenAI.com for a few hours, see what others are doing, and then tell me how confident you are that a sufficiently large model, with tweaks in the next few years or decades, couldn’t approximate the kind of general intelligence that humans and animals exhibit.
I was not attempting to claim that the alternatives (async digital and analog) are useful. I'm just pointing out that the flat assertion that a UTM can do any computing task is not strictly true in theory. Furthermore, biological intelligence appears to use both. Note that async digital is continuous in the time domain, so it is basically analog in the time domain.
Quote from: ppnl on 12/06/2022 09:41 pmSo what do we need to simulate a brain? It is possible that we will need an increase in computing power of a hand full of orders of magnitude. That is challenging but is the easy part. If you're talking about simulating a brain on the atomic level, then you need a lot more than a "handful of orders of magnitude," unless your hands are unusually large. :-) It's not clear to me that we could set up such a simulation anyway--simulating single protein molecules is challenging at the moment.
If you're imagining that we understand the brain well enough to set up a simulation of it at something other than the atomic level, I think you're seriously misinformed.
I'm loath to draw any distinction between 'real' intelligence and a tottering pile of hacks...
Now, does anyone disagree with the truth and clarity of the above?
But it is hard to see how a warm wet brain could be a quantum computer.
How many light bulbs does it take to screw in a chicken?
Quote from: DanClemmensen on 12/07/2022 02:19 amI was not attempting to claim that the alternatives (async digital and analog) are useful. I'm just pointing out that the flat assertion that a UTM can do any computing task is not strictly true in theory. Furthermore, biological intelligence appears to use both. Note that async digital is continuous in the time domain, so it is basically analog in the time domain.And I am flatly asserting that a UTM can do any computational task that is doable. It can simulate asynchronous or analog systems to any level of fidelity you want. It can almost always do it cheaper and easier than the analog or asynchronous system that it is replacing. But unless you also claim that this is relevant to the power of AI then it might also be off topic.
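A trivial illustration of that claim: a few lines of ordinary digital code emulating an "analog" integrator solving dy/dt = -y, where shrinking the step size buys whatever fidelity you want, at the cost of more steps.
Code:
import math

def simulate(dt, t_end=5.0):
    # Digital emulation of an analog integrator solving dy/dt = -y, with y(0) = 1.
    y, t = 1.0, 0.0
    while t < t_end:
        y += dt * (-y)     # one Euler step
        t += dt
    return y

for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: simulated {simulate(dt):.6f}, exact {math.exp(-5.0):.6f}")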
QuoteWe also have the existence-proof that humans can sole Halting problems that UTMs are fundamentally incapable of solvingNo, we don’t.
Quote from: Robotbeat on 12/07/2022 12:40 pmQuoteWe also have the existence-proof that humans can sole Halting problems that UTMs are fundamentally incapable of solvingNo, we don’t.We can, because we can conclude a problem's solution is undecidable. One of the fundamental tenets of a UTM is that it can solve any decidable problem (any algorithm that produces an output). That is not a limitation possessed by humans.
Quote from: edzieba on 12/07/2022 02:24 pmQuote from: Robotbeat on 12/07/2022 12:40 pmQuoteWe also have the existence-proof that humans can sole Halting problems that UTMs are fundamentally incapable of solvingNo, we don’t.We can, because we can conclude a problem's solution is undecidable. One of the fundamental tenets of a UTM is that it can solve any decidable problem (any algorithm that produces an output). That is not a limitation possessed by humans. There is a really deep philosophical rathole here, that I think we'd do best not to get into. We're already very far afield from the question in the original post, which actually does have some merit.
Quote from: sanman on 09/19/2022 12:03 amI came across this interesting article about how AI is being used to conjure up completely new proteins very quickly: https://www.nature.com/articles/d41586-022-02947-7 It occurred to me that this could enable all sorts of designer bugs/organisms for ISRU purposes, and maybe even terraforming. Perhaps we could have bugs/organisms designed to survive the Martian day/night cycle, which would come alive during the day, to perform useful conversion of natural resources, like through Sabatier or whatever. Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?Yes, AI could potentially be used in a variety of ways for space applications, including for ISRU (in-situ resource utilization) and terraforming. For example, AI could be used to design and optimize proteins for specific conditions, such as the extreme temperatures and radiation levels on Mars. These proteins could be used in organisms that are capable of surviving and thriving on the Martian surface, and potentially even converting natural resources into usable materials. In addition, AI could be used to design complex ecosystems of organisms that could adapt to the existing conditions on Mars and work to transform the environment over time. This could involve developing algorithms that can simulate and predict the behavior and interactions of different species, and optimize their performance in a given environment. Overall, AI has the potential to greatly accelerate and improve our ability to utilize and explore space, by enabling us to design and optimize complex systems that can survive and thrive in challenging environments.QuoteTell me more, with specifics on the engineering involved.In terms of specific engineering applications, AI could be used in a variety of ways to support space exploration and utilization. Some examples include:
* Designing and optimizing spacecraft systems and components: AI algorithms could be used to design and optimize spacecraft systems and components, such as propulsion systems, power systems, and communication systems, to improve performance and reduce costs. For example, AI could be used to model and simulate the behavior of different spacecraft components, and identify optimal configurations and designs that meet specified performance criteria.
* Planning and executing space missions: AI could be used to plan and execute space missions, by developing algorithms that can generate and evaluate potential mission plans, and identify optimal strategies for achieving mission objectives. This could involve simulating the behavior of spacecraft and other systems, and optimizing their performance to achieve desired outcomes.
* Analyzing data from space missions: AI could be used to analyze data from space missions, such as sensor data from spacecraft, to identify patterns and trends, and make predictions about future events. This could involve developing algorithms that can process and interpret large amounts of data, and extract useful insights and knowledge from it.
* Developing and deploying intelligent robots: AI could be used to develop and deploy intelligent robots that are capable of autonomously exploring and operating in space. This could involve developing algorithms that enable robots to perceive and interact with their environment, and make decisions based on that information.
For example, AI could be used to enable robots to navigate and manipulate objects, and perform tasks such as collecting samples or repairing equipment.Overall, AI has the potential to greatly enhance our ability to explore and utilize space, by enabling us to design and optimize complex systems, plan and execute missions, analyze data, and deploy intelligent robots.
This entire post (besides the quotes), by the way, is the output of ChatGPT when provided the quotes as prompts.The first one is primarily just restating the (larger than necessary) input prompt which I took from the first post in this thread, but the second half is an impressive synthesis of exactly the answer to this thread’s question. JohnFornarno “liked” it, even. I guess that counts as a kind of informal “Turing Test.”
…well, it can probably replace what SOME people do… 🤣
Quote from: Robotbeat on 12/07/2022 04:20 pm…well, it can probably replace what SOME people do… 🤣It's definitely a first-rate bullsh*t generator. It can definitely replace those people who have to produce dreary reports that seem reasonable but which no one actually reads.Now we need an AI solution that identifies this type of output. Perhaps by taking pieces and relating them to existing online documents. "Highlight the portions that are potentially original."Sigh. We may have to have that soon . . .
How would AI help you deploy anything?!
Quote from: ppnl on 12/07/2022 12:27 amBut it is hard to see how a warm wet brain could be a quantum computer.Allow me to help out by first pointing out that ALL wet brains that we know of exist in the quantum foam. Consider also the fact that the neuron is very close to the scale of quantum events. Several billion years of evolution within a quantum universe cannot be disposed of with a wave of the hand, simply because the connection is not well known.
Remember how they used to talk about "junk" DNA? I was always like, it's there for a reason; just because you don't know how it works is no excuse for calling it "junk".
Now, that we exist in a quantum universe doesn't solve the problem of simulating a sentient intelligence, but it does at least help you understand where the simulation needs to take place.
Quote from: ppnl on 12/07/2022 04:33 amQuote from: DanClemmensen on 12/07/2022 02:19 amI was not attempting to claim that the alternatives (async digital and analog) are useful. I'm just pointing out that the flat assertion that a UTM can do any computing task is not strictly true in theory. Furthermore, biological intelligence appears to use both. Note that async digital is continuous in the time domain, so it is basically analog in the time domain.And I am flatly asserting that a UTM can do any computational task that is doable. It can simulate asynchronous or analog systems to any level of fidelity you want. It can almost always do it cheaper and easier than the analog or asynchronous system that it is replacing. But unless you also claim that this is relevant to the power of AI then it might also be off topic.Remember that a UTM is a computational architecture (or at least, a reductive basis for actually reasonably implementable architectures like Von Neumann), not a collection of hardware. It can be an electronic digital computer, but it could be an optomechanical computer running up and down a stupendous strip of paper and would be computationally equivalent. A UTM can simulate other TMs, but cannot simulate all possible computer architectures (digital or otherwise). Likewise, the same electronics components that can be used to build a Turing-Complete computer can also be used to build a computer for Non-Turing Computation. We also have the existence-proof that humans can solve Halting problems that UTMs are fundamentally incapable of solving, and without positing some 'inscrutable particle of consciousness' at play or similar supernatural element, that means there exist computational architectures implementable in our universe that are not UTMs.
Quote from: JohnFornaro on 12/07/2022 09:56 amQuote from: ppnl on 12/07/2022 12:27 amBut it is hard to see how a warm wet brain could be a quantum computer.Allow me to help out by first pointing out that ALL wet brains that we know of exist in the quantum foam...All objects in the universe exist in a quantum fog. The atoms, molecules and tiny conglomerate structures exist close to the scale of quantum events. That just isn't good enough.
Quote from: ppnl on 12/07/2022 12:27 amBut it is hard to see how a warm wet brain could be a quantum computer.Allow me to help out by first pointing out that ALL wet brains that we know of exist in the quantum foam...
You should look into the subject of [quantum] decoherence to see how the macro scale world arises from the quantum foam.
Remember how they used to talk about "junk" DNA? ... it's there for a reason; just because you don't know how it works is not excuse for calling it "junk".
Junk DNA is junk in the sense that it is noncoding. ... Evolution sucks.
It does not seem relevant at all as there are no large scale coherent phenomena in the brain. I really like the idea of a quantum brain and would be glad to be proven wrong. But the judgment of physics is harsh.
Quantum "supremacy" much? We simply do not know how the macro scale world arises. Go ahead and tell yourself the words you want to hear.https://en.wikipedia.org/wiki/Quantum_supremacy
Quote from: Robotbeat on 12/07/2022 03:26 pmThis entire post (besides the quotes), by the way, is the output of ChatGPT when provided the quotes as prompts. ... I guess that counts as a kind of informal “Turing Test.”Bravo! Well played!I'll confess I just skimmed over the bullet list at the bottom. They all sounded plausible, but ... my eyes glazed over, and I didn't read it very closely. So let's take a close read."Designing and optimizing spacecraft systems and components" seems really far-fetched. I'm not aware of any issues there that need more than linear or quadratic programming [LQP]--if that. (But I'd be interested to learn otherwise.)
This entire post (besides the quotes), by the way, is the output of ChatGPT when provided the quotes as prompts. ... I guess that counts as a kind of informal “Turing Test.”
Reference system: Selenocentric. Periselene altitude: 100.9 kilometers (54.5 nmi). Aposelene altitude: 122.4 kilometers (66.1 nmi). Inclination: 1.25 degrees. Period: 2 hours. Epoch: July 19, 1969, 21:44 UTC.
Same comment for "Planning and executing space missions." Neither of these is an area where I expect to see AI make an impact.
"Analyzing data from space missions" is where I'd expect to see the biggest impact. ... "Analyzing data" is definitely a problem that seems tailor-made for modern AI. But I think we all already knew that.
A surprising number of people pick problems that look to be well-suited for a particular algorithm, but they never ask whether there's any value to solving that problem.
Blue Origin and SpaceX are both private aerospace manufacturers and spaceflight services companies that were founded around the same time, with Blue Origin being founded in 2000 and SpaceX in 2002. Both companies have made significant progress in the development of their rocket technology and have conducted a number of successful test flights.However, there are a few factors that may have contributed to SpaceX's more rapid development and success in launching rockets into orbit. One key factor is funding: SpaceX has received a greater level of funding from private investors, which has allowed the company to invest more heavily in research and development and pursue more ambitious projects.Another factor is experience: SpaceX was founded by Elon Musk, who had previously co-founded PayPal and had experience in the tech industry. This background may have given him a different perspective and set of skills that have helped him navigate the complex and competitive world of rocket development.Finally, SpaceX has a clear focus on developing reusable rockets, which has the potential to significantly reduce the cost of space travel. This focus has allowed SpaceX to make rapid progress in this area and gain a competitive advantage over other companies.That being said, Blue Origin has made significant progress in its own right and has a strong vision for the future of space exploration. The company has developed the New Shepard suborbital rocket, which has conducted several successful test flights, and is also working on the larger New Glenn rocket, which is intended to be capable of launching payloads into orbit. I believe that both Blue Origin and SpaceX have the potential to make significant contributions to the field of space exploration in the coming years.
I apologize for the misinformation in my previous response. You are correct that Blue Origin had a significant amount of funding at the start, as it was founded by Jeff Bezos, the CEO of Amazon, who has a net worth in the billions of dollars. Additionally, both Blue Origin and SpaceX were focused on developing reusable rockets from the start, as this has the potential to significantly reduce the cost of space travel.It's also true that both Elon Musk and Jeff Bezos have a strong understanding of technology and have made significant contributions to the field. Both have founded successful tech companies and have demonstrated an ability to think outside the box and push the boundaries of what is possible.It's difficult to say exactly why one company has progressed faster than the other, as there are many factors at play. Both Blue Origin and SpaceX have faced challenges and setbacks in the development of their rocket technology, and both have made significant progress in their own right. It's important to recognize that developing rockets and launching them into space is an incredibly complex and challenging task that requires a high level of expertise, resources, and perseverance. Both Blue Origin and SpaceX have demonstrated these qualities and have the potential to make significant contributions to the field of space exploration in the future.
And who's "Fornarno"?
It's cute, no question. Look at what I got when I asked why SpaceX was so far ahead of Blue Origin, despite having started later. ... But when I ask the original question again, I get the exact same response--word for word--with the exact same misinformation. That is, the system doesn't really learn anything. Not in real-time, anyway.
Now, of course, this is pretty much 100% false ...
SpaceX has a clear focus on developing reusable rockets...
Quote from: JohnFornaro on 12/07/2022 07:59 pmAnd who's "Fornarno"?That's your superhero/supervillain name.
More from ChatGPT from OpenAI.
Quote from: sdsds on 12/16/2022 11:42 pmMore from ChatGPT from OpenAI.Like other GPT language models, ChatGPT clearly makes for a good politician: provides a few paragraphs of waffle without meaningfully answering the question.
Quote from: Greg
Now, of course, this is pretty much 100% false ...
It all sounded plausible to me; what do you mean "pretty much 100% false"?
Quote from: that AI
SpaceX has a clear focus on developing reusable rockets...
Which of its statements are "pretty much 100% false"?
Blue Origin and SpaceX are both private aerospace manufacturers and spaceflight services companies that were founded around the same time, with Blue Origin being founded in 2000 and SpaceX in 2002. Both companies have made significant progress in the development of their rocket technology and have conducted a number of successful test flights.
However, there are a few factors that may have contributed to SpaceX's more rapid development and success in launching rockets into orbit. One key factor is funding: SpaceX has received a greater level of funding from private investors, which has allowed the company to invest more heavily in research and development and pursue more ambitious projects.
On July 20, 2021, Blue Origin sent its first crewed mission into space via its New Shepard rocket and spaceflight system. https://en.wikipedia.org/wiki/Blue_Origin
Another factor is experience: SpaceX was founded by Elon Musk, who had [blah blah blah]. This background may have given him a different perspective and set of skills that have helped him navigate the complex and competitive world of rocket development.
Finally, SpaceX has a clear focus on developing reusable rockets, which has the potential to significantly reduce the cost of space travel. This focus has allowed SpaceX to make rapid progress in this area and gain a competitive advantage over other companies.
That being said, Blue Origin has made significant progress in its own right and has a strong vision for the future of space exploration. The company has developed the New Shepard suborbital rocket, which has conducted several successful test flights, and is also working on the larger New Glenn rocket, which is intended to be capable of launching payloads into orbit.
I believe that both Blue Origin and SpaceX have the potential to make significant contributions to the field of space exploration in the coming years.
Quote from: Twark_Main on 12/17/2022 12:11 amQuote from: JohnFornaro on 12/07/2022 07:59 pmAnd who's "Fornarno"?That's your superhero/supervillain name. Ivan Oranrof, actually. You need to do your homework. Check out my twit account.
Quote from: edzieba on 12/17/2022 12:20 pmQuote from: sdsds on 12/16/2022 11:42 pmMore from ChatGPT from OpenAI.Like other GPT language models, ChatGPT clearly makes for a good politician: provides a few paragraphs of waffle without meaningfully answering the question.Of course.There's an outcry any time an AI doesn't emit a "politically correct" answer. It's no wonder engineers train their AI to talk like a politician.
Quote from: Twark_Main on 12/18/2022 02:52 pmQuote from: edzieba on 12/17/2022 12:20 pmQuote from: sdsds on 12/16/2022 11:42 pmMore from ChatGPT from OpenAI.Like other GPT language models, ChatGPT clearly makes for a good politician: provides a few paragraphs of waffle without meaningfully answering the question.Of course.There's an outcry any time an AI doesn't emit a "politically correct" answer. It's no wonder engineers train their AI to talk like a politician.No, it's a byproduct of AI being trained to assemble coherent sentences (and ideally ones derived from information semantically related to the prompt) from a vast training set rather than forming a separate knowledge graph and constructing sentences based on that graph. ...
Quote from: edzieba on 12/19/2022 11:18 amQuote from: Twark_Main on 12/18/2022 02:52 pmQuote from: edzieba on 12/17/2022 12:20 pmQuote from: sdsds on 12/16/2022 11:42 pmMore from ChatGPT from OpenAI.Like other GPT language models, ChatGPT clearly makes for a good politician: provides a few paragraphs of waffle without meaningfully answering the question.Of course.There's an outcry any time an AI doesn't emit a "politically correct" answer. It's no wonder engineers train their AI to talk like a politician.No, it's a byproduct of AI being trained to assemble coherent sentences (and ideally ones derived from information semantically related to the prompt) from a vast training set rather than forming a separate knowledge graph and constructing sentences based on that graph. ...Why couldn't "coherent sentences" be constructed from a "vast" right leaning training set?
Can we NOT bring politics into this?
Quote from: Robotbeat on 12/19/2022 07:26 pmCan we NOT bring politics into this?Ask that of the AI trainers, who are doing exactly that.
Quote from: JohnFornaro on 12/20/2022 11:35 amQuote from: Robotbeat on 12/19/2022 07:26 pmCan we NOT bring politics into this?Ask that of the AI trainers, who are doing exactly that.Even if true, it’s completely off topic.
Quote from: JohnFornaro on 12/20/2022 11:35 amQuote from: Robotbeat on 12/19/2022 07:26 pmCan we NOT bring politics into this?Ask that of the AI trainers, who are doing exactly that.You're going to have to offer some evidence for this repeated assertion.
This thread is just fun to think about. I'm enjoying taking these wacky ideas and playing around with the OpenAI DALL·E 2 image generator. Just want to generate some concept art and blow off reality.
What exactly is neuromorphic computing? And what are the advantages of it?
The idea is to implement neurons in very small devices like memristors. The advantage is that it could be vastly more energy efficient and achieve vastly better circuit density. It should be possible to exceed the circuit density, energy efficiency and speed of the brain this way. After all, nature rarely finds anything close to an optimal solution.
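For a rough sense of what such a device would emulate, here's a minimal Python sketch of a leaky integrate-and-fire neuron, the kind of unit a memristor-based circuit might implement directly in analog hardware rather than stepping through in software. The parameter values are illustrative, not taken from any particular chip.

Code:
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, simulated in discrete time.
# A neuromorphic device would realize this dynamics in analog hardware
# (e.g. a memristor/capacitor pair) instead of stepping it in software.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the membrane voltage trace and spike times for a current trace."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus drive from the input current.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_threshold:              # fire and reset
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spikes

# Constant drive above threshold: the neuron fires at a regular rate.
current = np.full(1000, 1.5)              # 1 second of input at dt = 1 ms
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 1 s of simulated time")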
OpenAI has also come out with a new 3D-object generator that works on input from text prompts. ... ChatGPT can create a gear in AutoCAD in its native AutoLisp:
I've seen various instances where asking ChatGPT to code something produces code that doesn't quite work right. It will confidently spit out an answer, even when that answer is wrong. So it's important to test any code that it spits out, and not blindly take it on faith. I guess AI is always going to need some human supervision.
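One lightweight way to keep that supervision honest is to write the check before pasting in whatever the model produced, and refuse to use the code until it passes. A minimal sketch, using a hypothetical orbital_period() function as the stand-in for AI-generated code (the function and the test are purely illustrative):

Code:
import math

# Pretend this came from a chatbot. Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
def orbital_period(semi_major_axis_m, mu=3.986004418e14):
    return 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / mu)

# Check it against a number we already trust before relying on it:
# a roughly ISS-sized orbit (a ~ 6780 km) should take about 92-93 minutes.
period_s = orbital_period(6.78e6)
assert math.isclose(period_s, 92.5 * 60, rel_tol=0.02), period_s
print(f"ISS-like orbit period: {period_s / 60:.1f} min")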
Here’s an article on AI being used for planetary exploration and astronomy: https://www.space.com/solar-system-planetary-science-machine-learning
And here’s the summary of the article that I had ChatGPT generate:
Quote Machine learning is changing how scientists study space by allowing computers to identify patterns in data and make decisions, predictions or classifications. This can save time for scientists by identifying phenomena such as boulders on other planets, explosive volcanoes on Mercury and vortexes in Jupiter's atmosphere, among other things. In the future, artificial intelligence could be used in space probes to enable them to make real-time responses to hazards and land autonomously, rather than relying on communication with Earth that can take several hours. Machine learning and artificial intelligence could also be used to help plan future missions to bodies such as Jupiter's moon Europa.
Quote from: Robotbeat on 12/26/2022 01:24 pmHere’s an article on AI being used for planetary exploration and astronomy: https://www.space.com/solar-system-planetary-science-machine-learning And here’s the summary of the article that I had ChatGPT generate:Quote Machine learning is changing how scientists study space by allowing computers to identify patterns in data and make decisions, predictions or classifications. This can save time for scientists by identifying phenomena such as boulders on other planets, explosive volcanoes on Mercury and vortexes in Jupiter's atmosphere, among other things. In the future, artificial intelligence could be used in space probes to enable them to make real-time responses to hazards and land autonomously, rather than relying on communication with Earth that can take several hours. Machine learning and artificial intelligence could also be used to help plan future missions to bodies such as Jupiter's moon Europa.
So I'm curious - did you simply feed it that URL as part of your input, and have it generate its summary on that basis? If so, that's a pretty nifty trick - gonna have to use that more often. It would be intriguing if we users can influence ChatGPT by feeding it reading material of our own choosing.
Quote from: Twark_Main on 12/18/2022 02:52 pmQuote from: edzieba on 12/17/2022 12:20 pmQuote from: sdsds on 12/16/2022 11:42 pmMore from ChatGPT from OpenAI.Like other GPT language models, ChatGPT clearly makes for a good politician: provides a few paragraphs of waffle without meaningfully answering the question.Of course.There's an outcry any time an AI doesn't emit a "politically correct" answer. It's no wonder engineers train their AI to talk like a politician.No, it's a byproduct of AI being trained to assemble coherent sentences ... from a vast training set rather than forming a separate knowledge graph and constructing sentences based on that graph. The chatbot has no understanding of the data it is dispensing, only an understanding of how closely related the prompt is to sentences it has already observed. Political correctness is of minimal concern beyond trying to avoid poisoned data in the training datasets ...
As I understand it, the training dataset for ChatGPT is not being augmented. It was frozen in 2021. This is one reason ChatGPT knows very little about itself. I got this potentially incorrect information from a single sentence in a non-technical article on ChatGPT, so it might be wrong. I'm not sure how one of these systems will evolve if it is allowed to incorporate non-curated information indiscriminately. I suspect this is one of the areas Google is researching intensively in reaction to ChatGPT. https://mezha.media/en/2022/12/22/google-is-panicking-over-the-ability-of-the-chatgpt-bot-to-answer-user-questions/
Google is reluctant to share its technology because, like ChatGPT and similar systems, it can generate false, toxic and biased information.
Yes, I would be able to help you calculate the floor area in the rings of the AutoCAD R14 file that you have provided. In order to do this, you will need to use a tool or software that allows you to measure and calculate the area of the various components in the file. There are several ways to do this, depending on the specific needs of your project and the resources you have available.

One option is to use the built-in area measurement tools in AutoCAD R14. These tools allow you to select specific objects in the drawing and calculate their area, as well as total the area of multiple objects. You can find more information about how to use these tools in the AutoCAD R14 documentation or by searching online.

Another option is to use a third-party tool or plugin that is specifically designed for calculating area in AutoCAD drawings. These tools often offer additional features and functionality, such as the ability to automatically label the calculated areas or export the results to a spreadsheet. You can find more information about these tools by searching online or contacting the developers directly.

Regardless of which approach you choose, it is important to carefully review the results of your calculations and double-check your work to ensure that the results are accurate. If you have any specific questions or need further assistance with this task, please don't hesitate to ask.
I apologize if my previous response was not helpful. To answer your question, it is not possible for me to directly access or review the AutoCAD R14 file that you have mentioned. As an AI language model, I do not have the ability to open or view files, and I do not have access to any external resources or tools.

However, I can provide you with general guidance on how to calculate the floor area in the rings of the AutoCAD R14 file using the tools and techniques that are available in the software. To do this, you will need to use the area measurement tools in AutoCAD R14, or a third-party tool or plugin that is specifically designed for this purpose.

To use the built-in area measurement tools in AutoCAD R14, you will need to select the objects that you want to measure and then use the appropriate command or function to calculate their area. For example, you can use the "AREA" command to calculate the area of a closed polyline or region, or the "MEASUREGEOM" command to measure the length and area of various objects. You can find more information about these tools in the AutoCAD R14 documentation or by searching online.

If you prefer to use a third-party tool or plugin, you will need to install the software and follow the instructions provided by the developer to measure and calculate the area of the objects in your AutoCAD file.

Regardless of which approach you choose, it is important to carefully review the results of your calculations and double-check your work to ensure that the results are accurate. If you have any specific questions or need further assistance with this task, please don't hesitate to ask.
I apologize for misunderstanding your request in my previous response. I do not have the ability to access or review any files that are uploaded to me, as I am an AI language model and do not have the capability to open or view files. I also do not have access to any external resources or tools, so I am unable to perform tasks that require me to access or interact with external systems or data.

As an AI language model, my primary function is to provide general information and guidance on a wide range of topics. I am not able to perform specific tasks or projects on behalf of users, and I do not have the ability to access or interact with external resources or systems.

I apologize if this is not the answer you were hoping for. If you have any other questions or need further assistance with your project, please don't hesitate to ask.
I apologize, but I am unable to provide you with the contact information of any of my trainers or creators, as I am an AI language model and do not have the ability to access or share this type of information. As an AI, I do not have personal relationships or contact information for the individuals who were involved in my development or training.

I understand that you are looking for a solution to a specific problem that you are facing, and I apologize if I am unable to provide the assistance you need. While I am trained to provide general information and guidance on a wide range of topics, I do not have the ability to perform specific tasks or projects on behalf of users, or to interact with external resources or systems.

AI has the potential to be a powerful tool for solving a variety of problems, including technical and analytical challenges. However, it is important to recognize that AI has limitations and is not capable of replacing human expertise or judgment in all situations. In order to use AI effectively, it is important to understand its capabilities and limitations, and to use it appropriately in combination with other resources and tools.
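For what it's worth, the floor-area calculation ChatGPT keeps describing above boils down to the shoelace formula over the boundary vertices, with the ring area being the outer boundary's area minus the inner one's. A minimal Python sketch (not AutoLisp), assuming the ring boundaries have already been exported from the drawing as lists of (x, y) points:

Code:
# Shoelace formula: area of a closed polygon from its vertices.
def polygon_area(vertices):
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]    # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Floor area of a ring = outer boundary area minus inner boundary area.
outer = [(0, 0), (10, 0), (10, 10), (0, 10)]   # illustrative vertices, not real data
inner = [(3, 3), (7, 3), (7, 7), (3, 7)]
print(polygon_area(outer) - polygon_area(inner))   # 100 - 16 = 84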
Well, she was mentioning how ChatGPT typically speaks with 100% confidence on things that it does not necessarily have 100% confidence on. So that's one problem with an AI like ChatGPT. The manner and tone in which it speaks needs to reflect its relative confidence level in what is being said.
Quote from: sanman on 12/27/2022 07:03 pmWell, she was mentioning how ChatGPT typically speaks with 100% confidence on things that it does not necessarily have 100% confidence on. So that's one problem with an AI like ChatGPT. The manner and tone in which it speaks needs to reflect its relative confidence level in what is being said.Right. Sounds about right for emulating human behavior in the vast majority of humans.
Quote from: Robotbeat on 12/27/2022 07:51 pmRight. Sounds about right for emulating human behavior in the vast majority of humans.AI Dunning–Kruger Turing test?
Quote from: Asteroza on 12/28/2022 12:58 amQuote from: Robotbeat on 12/27/2022 07:51 pmRight. Sounds about right for emulating human behavior in the vast majority of humans.AI Dunning–Kruger Turing test?
What happens when we have enough instances of AI functioning closely with human beings? Do we need to bring in Asimov's 3 Laws in a formalized way? Do these have to be verified at all stages: design, development, implementation, operation?
I think we are conflating robotics and AI. Robots can benefit from AI but they do not require it. AI can be useful without robots. An AI that could control a spacecraft might be very useful, but most folks don't call a spacecraft a "robot". So what are we talking about here?
There is a large and active AI security academic community that has been studying this for several decades. I don't think we can summarize this work here in a few sentences. Here is an excellent overview book from 2019: https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824?fbclid=IwAR1sBo6eLJRhmollCqmDYYd18rEFFCoGvhFn2DB3HgAJ0XQJoZw_QT_gogQ
Quote from: DanClemmensen on 12/28/2022 06:56 pmThere is a large and active AI security academic community that has been studying this for several decades. I don't think we can summarize this work here in a few sentences. Here is an excellent overview book from 2019: https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824?fbclid=IwAR1sBo6eLJRhmollCqmDYYd18rEFFCoGvhFn2DB3HgAJ0XQJoZw_QT_gogQApparently, some software engineers from OpenAI have left to create their own company Anthropic AI, and have created their own new rival product called Claude.Among other things, Claude seems to make use of a new concept called a 'Constitution', which as its name suggests, is a construct meant to explicitly embody a set of moral and ethical rules which govern how the AI operates. It also seems to use some sort of AI-supervised reinforcement feedback.
Quote from: sanman on 01/31/2023 06:14 pmQuote from: DanClemmensen on 12/28/2022 06:56 pmThere is a large and active AI security academic community that has been studying this for several decades. I don't think we can summarize this work here in a few sentences. Here is an excellent overview book from 2019: https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824?fbclid=IwAR1sBo6eLJRhmollCqmDYYd18rEFFCoGvhFn2DB3HgAJ0XQJoZw_QT_gogQApparently, some software engineers from OpenAI have left to create their own company Anthropic AI, and have created their own new rival product called Claude.Among other things, Claude seems to make use of a new concept called a 'Constitution', which as its name suggests, is a construct meant to explicitly embody a set of moral and ethical rules which govern how the AI operates. It also seems to use some sort of AI-supervised reinforcement feedback.The general problem is called "AI alignment". It's hard. https://en.wikipedia.org/wiki/AI_alignment
Quote from: DanClemmensen on 01/31/2023 06:32 pmQuote from: sanman on 01/31/2023 06:14 pmQuote from: DanClemmensen on 12/28/2022 06:56 pmThere is a large and active AI security academic community that has been studying this for several decades. I don't think we can summarize this work here in a few sentences. Here is an excellent overview book from 2019: https://www.amazon.com/Artificial-Intelligence-Security-Chapman-Robotics/dp/0815369824?fbclid=IwAR1sBo6eLJRhmollCqmDYYd18rEFFCoGvhFn2DB3HgAJ0XQJoZw_QT_gogQApparently, some software engineers from OpenAI have left to create their own company Anthropic AI, and have created their own new rival product called Claude.Among other things, Claude seems to make use of a new concept called a 'Constitution', which as its name suggests, is a construct meant to explicitly embody a set of moral and ethical rules which govern how the AI operates. It also seems to use some sort of AI-supervised reinforcement feedback.The general problem is called "AI alignment". It's hard. https://en.wikipedia.org/wiki/AI_alignmentTo be fair humans are bad at this as well.<quote>AI systems can be challenging to align and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, they use easy-to-specify proxy goals that omit some desired constraints. However, AI systems exploit the resulting loopholes. As a result, they accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking). AI systems can also develop unwanted instrumental behaviors such as seeking power, as this helps them achieve their given goals. Furthermore, they can develop emergent goals that may be hard to detect before the system is deployed, facing new situations and data distributions.</quote>Replace "AI" with political parties, churches, corporations or the local elk lodge. Look at everything from the tribal wasteland of American politics to the war in Ukraine.
Quote from: ppnl on 01/31/2023 07:34 pmTo be fair humans are bad at this as well.<quote>AI systems can be challenging to align and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, they use easy-to-specify proxy goals that omit some desired constraints. However, AI systems exploit the resulting loopholes. As a result, they accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking). AI systems can also develop unwanted instrumental behaviors such as seeking power, as this helps them achieve their given goals. Furthermore, they can develop emergent goals that may be hard to detect before the system is deployed, facing new situations and data distributions.</quote>Replace "AI" with political parties, churches, corporations or the local elk lodge. Look at everything from the tribal wasteland of American politics to the war in Ukraine.
Oh yes. If our AI learns ethics by assimilating all the world's writings then we can expect its ethics to be at best as good as the average. This would be similar to ChatGPT's level as a writer. Humans cannot agree on the "correct" ethics. Why should an AI be any better?

Unfortunately this is NOT irrelevant to the use of AI in space applications. You can try to dismiss the problem that HAL 9000 faced in "2001: A Space Odyssey" as being just science fiction, but the same sort of thing may occur in real life. Space makes the problem harder, because one really major reason to use AI is the communications lag, but the communications lag means the AI must make real-time ethical decisions without the ability to discuss them with humans on Earth.
Yes, HAL9000 was likewise exactly the first thing I also thought of when reading ppnl's post.

And in the absence of communicative feedback from humanity, does the remote AI then develop an independent morality?

My analogy would be of the Dog versus the Wolf. The domesticated Dog grows up in the embrace of Man, and is our servant, companion, even family member. The Wolf however grows up separately outside of Man's embrace, having evolved as a predator and potential threat.

AI that evolves within the embrace of Man would then likewise be our servant, companion, etc. But AI that evolves remotely in isolation and outside of our embrace or influence of communication, then might have the potential to develop behavior that diverges from what we would consider acceptable. So the remoteness or isolation arising from something like spaceflight could ironically bring about the rogue AI scenario which Elon Musk warns about.

Besides Clarke's 2001, I also recall the Bolo series of novels by Keith Laumer, which featured AI-controlled tanks.
I don't think the remoteness of an AI in space is a problem. First, it does not "evolve" there. It is developed here. Second, if it is remote then it may not have any reason or opportunity to do harm.

The problem is kinda the opposite. Think about a pit bull raised by the wrong kind of person. I've had pit bulls, and properly socialized they are the most loyal, loving dogs you can ever hope to see. Raised by flakes they are... flaky. Not as flaky as a chihuahua, but much better armed.

Now imagine a high-level AI created by North Korea, Russia or even the US military. In space or anywhere else they could be flaky. The thing about a dog vs a wolf is that a wolf exhibits morality in a way that a dog does not. A dog simply follows its master. A wolf balances the needs and desires of the individual against the needs of the pack in a very productive way that preserves much of the freedom of the individual. It does allow for conflict, but freedom is scary, deal with it.

Do you want to send a wolf or a dog into space? And will a high-level AI be happy being a dog?
This is why you can't have a hardwired set of rules for morality. They can always be gamed.
Quote from: Greg Hullender on 03/02/2023 06:37 pmAn AI isn't a person, and isn't likely to ever be. …https://en.wikipedia.org/wiki/Begging_the_question
An AI isn't a person, and isn't likely to ever be. It doesn't "want" anything, and it isn't "content" with anything. It simply does what it's programmed to do. You guys keep imagining that an AI is a human being and that it'll do bad things because a human being would be motivated to, but that's just not how it works.
If it does bad things, it will be for deep, complicated reasons that may be difficult or impossible to fully understand. That's the scary thing about giving an AI too much power. (The paperclip maximizer is a very trivial example of how things could go wrong.) Also, as anyone who's played with ChatGPT knows, an AI can simply be wrong.
To give a simple example, I recently tried using the AI with Bing to find out how to merge two Fortnite accounts. Bing confidently told me what the process was called, where in the menus on the EA web site I could find the option, and a step-by-step process to do the merge. It even gave me the URL. I was quite impressed.

Except the URL didn't work, and the EA web site didn't have the menu options required. A search of the EA customer support forums shows that customers repeatedly ask how to merge accounts, and the answer is always "you can't do that."

AI can be a great tool that helps people navigate large masses of unstructured data, but it won't ever be more than a tool. People who start imagining that their tools are alive and have minds of their own will have problems.
Today, I saw a very relevant article about Dr. Emily Bender's efforts to get people to realize things like ChatGPT aren't human, dated March 1, 2023, so it's definitely up-to-date. :-) I took some classes at UW from her a few years ago, and I have the utmost respect for her. Anyone interested in what AI might really be capable of (and not capable of) ought to give it a read.
Quote from: Greg Hullender on 03/03/2023 11:46 pmToday, I saw a very relevant article about Dr. Emily Bender's efforts to get people to realize things like ChatGPT aren't human, dated March 1, 2023, so it's definitely up-to-date. :-) I took some classes at UW from her a few years ago, and I have the utmost respect for her. Anyone interested in what AI might really be capable of (and not capable of) ought to give it a read.
Gee, that was really long - I had to get ChatGPT to synopsize it for me. Dear ChatGPT, are you sure Emily M Bender is the author of the article, and not simply its subject?
I actually think you’re right, BUT Emily’s name and perspectives are in its training data, so ChatGPT’s synopsis is actually pretty dang close to the actual article.
Quote from: Robotbeat on 03/06/2023 03:04 pmI actually think you’re right, BUT Emily’s name and perspectives are in its training data, so ChatGPT’s synopsis is actually pretty dang close to the actual article.Agreed. ChatGPT is pretty amazing--even to someone who's worked in the field. The last sentence of the synopsis is debatable, but the rest of it's pretty good.
Well how do we know that humans have “understanding”?
Whoa pardner... Advising crew on situation-outcomes-whatever does not necessarily require the "G" (generative) in GPT, which is what differentiates it from other, more traditional and previous AIs. Think you could accomplish what you want with significantly less. As in, less generative and more predictive.
...can I then rely on a generative neural network adversarially coupled with some kind of discriminator, to come up with better rocket designs?
Careful. The "G" part of GPT can "hallucinate" (yes, that is an AI technical term) and provide answers which appear to be credible but are not, even when the input is reasonably constrained. Any results are at best anecdotal. Thus suggest focusing on the predictive, rather than generative, aspects.
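For reference, the "generative neural network adversarially coupled with some kind of discriminator" in the question above is essentially a GAN. Here's a bare-bones PyTorch sketch of that training loop, with a made-up 16-number "design vector" standing in for a real parameterized rocket design; nothing in it comes from an actual design tool.

Code:
import torch
import torch.nn as nn

# Toy GAN: the generator proposes 16-dimensional "design vectors" and the
# discriminator learns to tell them apart from vectors drawn from a stand-in
# distribution of "good" designs. A real application would replace
# sample_real() with parameterized designs that passed simulation or test.
DESIGN_DIM, NOISE_DIM, BATCH = 16, 8, 64

generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, DESIGN_DIM))
discriminator = nn.Sequential(nn.Linear(DESIGN_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def sample_real(n):
    return torch.randn(n, DESIGN_DIM) * 0.5 + 1.0   # stand-in "known-good" designs

for step in range(1000):
    real = sample_real(BATCH)
    fake = generator(torch.randn(BATCH, NOISE_DIM))

    # Discriminator: push real designs toward 1, generated designs toward 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its designs real.
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final generator loss:", g_loss.item())

The adversarial loop only makes the outputs resemble the training set; it doesn't certify that a proposed design actually works, so the hallucination caveat above still applies.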
So right now we're gawking at language models, and even some art image generators, because of the recent flurry of activity and media coverage. But what about AI being applied to CAD, or even to workflow generation?...
Ellis says the real secret to Relativity’s rockets is the artificial intelligence that tells the printer what to do. Before a print, Relativity runs a simulation of what the print should look like. As the arms deposit metal, a suite of sensors captures visual, environmental, and even audio data. Relativity’s software then compares the two to improve the printing process. “The defect rate has gone down significantly because we’ve been able to train the printer,” Ellis says. With every new part, the machine learning algorithm gets better, until it will eventually be able to correct 3D prints on its own. In the future, the 3D printer will recognize its own mistakes, cutting and adding metal until it produces a flawless part. Ellis sees this as the key to taking automated manufacturing to other worlds. “To print stuff on Mars you need a system that can adapt to very uncertain conditions,” Ellis says. “So we're building an algorithm framework that we think will actually be transferable to printing on other planets.”
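A crude sketch of the compare-the-simulation-to-the-sensors idea Ellis describes, assuming (purely hypothetically) that each deposited layer can be reduced to a predicted height map from the simulation and a measured one from the sensor suite:

Code:
import numpy as np

# Toy version of "compare the predicted print to the sensed print": each layer
# is a height map, and cells that deviate too far from the simulation are
# flagged as possible defects for the controller to correct on a later pass.
def find_defects(predicted, measured, threshold_mm=0.2):
    """Return indices of cells whose |measured - predicted| exceeds the threshold."""
    deviation = np.abs(measured - predicted)
    return np.argwhere(deviation > threshold_mm)

# Illustrative data: a nominally flat 1.0 mm layer with one under-deposited spot.
predicted = np.full((50, 50), 1.0)
measured = predicted + np.random.normal(0.0, 0.02, predicted.shape)
measured[20:23, 30:33] -= 0.5             # injected "defect"

defects = find_defects(predicted, measured)
print(f"{len(defects)} suspect cells flagged for correction")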
Let us say that I am on a lunar spacecraft and I have CO2 levels that are too high but my round CO2 cartridges don't fit in the square holes. How could I somehow get them to work?

Temperature 0:
Quote One possible solution would be to use a 3D printer to create an adapter that would fit the round cartridges into the square holes. This would require access to a 3D printer, but it could be a viable solution. Alternatively, you could try to find a way to modify the cartridges to make them fit into the square holes, such as by using a file or a drill to create a flat surface on the cartridge.

Temperature 0.7:
Quote One solution would be to use a 3-D printer to create a custom adapter that would allow the round cartridges to fit into the square holes. If a 3-D printer is not available, another solution would be to use a drill to widen the holes or to use some kind of epoxy or adhesive to create a seal between the round cartridges and the square holes.

Temperature 1:
Quote There are a few potential solutions that you could try in order to get the CO2 cartridges to work in the square holes.
1. Find a way to make the round cartridges fit into the square holes. This could involve using some type of repurposed material such as cardboard or plastic that you can shape into a similar form or size. You could also use clamps, elastic cords, or anything else that could help create a snug fit.
2. Wrap the cartridge in thin metal sheets that would fit into the square hole and secure it with adhesive or a metal fastener.
3. Create an adapter that would allow you to connect the round cartridge to a receptacle in the square hole. This could be constructed from a 3D printer or by methods like welding.
No matter which option you choose, be sure to take all safety precautions while attempting this solution as the wrong choice could be dangerous.
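For anyone wondering what the "temperature" knob in that exchange actually does: it rescales the model's next-token probabilities before sampling, so 0 is effectively greedy decoding (always the single most likely token) while higher values flatten the distribution and make the output more varied. A minimal sketch with made-up logits:

Code:
import numpy as np

rng = np.random.default_rng(0)

# Temperature scaling of a next-token distribution: divide the scores by the
# temperature, re-normalize with a softmax, then sample.
def sample_token(logits, temperature):
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))     # deterministic: "temperature 0"
    scaled = logits / temperature
    scaled -= scaled.max()                # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

logits = [2.0, 1.0, 0.1]                  # made-up scores for three candidate tokens
for t in (0, 0.7, 1.0):
    picks = [sample_token(logits, t) for _ in range(1000)]
    print(f"temperature {t}: counts {[picks.count(i) for i in range(3)]}")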
Quote from: joek on 03/26/2023 09:27 pmCareful. The "G" part of GPT can "hallucinate" (yes, that is an AI technical term) and provide answers which appear to be credible but are not, even when the input is reasonably constrained. Any results are at best anecdotal. Thus suggest focusing on the predictive, rather than generative, aspects.
I actually don't think that's a helpful frame. Whether it's called generative or predictive, it's still going to produce results that have a chance of being wrong. You can characterize the reliability through testing, just as you would test humans. And just like humans (or pretty much anything), you shouldn't fully trust the output....
----------
Prompt: Please suggest appropriate AI models for mission critical applications (e.g., piloting aircraft or space missions).

Response: When it comes to mission-critical applications such as piloting aircraft or space missions, safety and reliability are of utmost importance. Here are a few AI models that are suitable for such applications:

Recurrent Neural Networks (RNNs): RNNs are a type of neural network that can analyze sequential data and make predictions based on previous inputs. They are used for tasks such as natural language processing, speech recognition, and time series prediction, which are all important in aviation and space missions.

Convolutional Neural Networks (CNNs): CNNs are a type of neural network that are commonly used in image recognition tasks. They can be used to analyze sensor data from cameras and other imaging devices to provide critical information about the environment around an aircraft or spacecraft.

Reinforcement Learning (RL): RL is a machine learning technique that involves training an AI agent through trial and error in a simulated environment. It can be used to develop autonomous systems that can make decisions in real-time based on changing conditions, such as piloting an aircraft or spacecraft.

Bayesian Networks: Bayesian Networks are a type of probabilistic graphical model that can be used to analyze and model complex systems. They can be used in safety-critical applications to identify potential failures and make decisions based on uncertain or incomplete information.

It's worth noting that any AI model used in mission-critical applications should be thoroughly tested and validated to ensure its safety, reliability, and accuracy. Additionally, it's important to have human oversight and intervention to ensure that the AI system is behaving appropriately and to handle any unexpected situations that may arise.

----------
Prompt: Why does your list does not include GPT models?

Response: While GPT (Generative Pre-trained Transformer) models are powerful and versatile language models, they are not typically used for mission-critical applications such as piloting aircraft or space missions. This is because GPT models are primarily designed for natural language processing tasks, such as language translation, question answering, and text summarization.

While GPT models could potentially be used in some capacity in mission-critical applications that involve natural language processing, such as controlling voice-activated systems, they are not typically considered the best option for such applications. This is because the safety and reliability requirements for mission-critical applications are often much more stringent than those for other applications, and more specialized AI models that are designed specifically for the task at hand may be necessary.

In summary, while GPT models are powerful and useful for a wide variety of applications, they are not typically used for mission-critical applications such as piloting aircraft or space missions. Other AI models that are more specialized and designed specifically for these types of applications are typically used instead.

----------
Prompt: Regenerate response

Response: GPT models are a type of natural language processing (NLP) model that are primarily designed for generating natural language text. While GPT models have shown impressive results in various NLP tasks, they may not be suitable for mission-critical applications such as piloting aircraft or space missions for a few reasons:

Limited Domain: GPT models are typically trained on a large corpus of general language data, which may not be directly relevant to specific mission-critical applications. These applications often require specialized domain-specific knowledge that may not be present in the training data for GPT models.

Lack of Control: GPT models are generative models that generate text based on statistical patterns in the training data. This means that while they can generate coherent and grammatically correct text, they may not always generate text that is appropriate or accurate for a specific context.

Safety and Reliability: Mission-critical applications require a high degree of safety and reliability. GPT models may generate outputs that are unexpected or inappropriate in certain situations, which can be dangerous in mission-critical scenarios.

That being said, GPT models can still be useful in certain mission-critical applications where natural language processing is required, such as analyzing and summarizing textual data. However, they would need to be carefully evaluated and integrated into a larger system that ensures safety, reliability, and accuracy.
Prompt: Please suggest appropriate AI models for mission critical applications (e.g., piloting aircraft or space missions)

Response: For mission-critical applications like piloting aircraft or space missions, it is essential to employ AI models that are highly reliable, robust, and well-tested. These models should have low error rates, be resistant to adversarial attacks, and have the ability to adapt to changing conditions. Here are some AI models and techniques that are suitable for such applications:

Reinforcement Learning (RL): RL is a popular approach for training AI models in control and decision-making tasks. Algorithms like Deep Deterministic Policy Gradients (DDPG), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC) are well-suited for continuous control tasks, such as piloting aircraft or space missions.

Model Predictive Control (MPC): MPC is a control strategy that involves optimizing control inputs over a finite prediction horizon by using a model of the system dynamics. This approach is particularly useful for mission-critical applications because it incorporates constraints and optimizes for performance, safety, and robustness.

Hybrid systems: Combining deep learning and traditional control methods can lead to robust and reliable AI models. For example, integrating reinforcement learning with classical control methods like PID (Proportional-Integral-Derivative) controllers can provide stability and adaptability to complex systems.

Fault-tolerant AI models: These models are designed to maintain their performance despite the presence of faults, noise, or uncertainties. Techniques such as ensemble learning, where multiple models are combined to improve overall performance, or Bayesian deep learning

----------
Prompt: Why does your list does not include GPT models?

Response: My apologies for not addressing GPT models in the initial response. The reason GPT models were not included in the list is because they are primarily designed for natural language processing tasks and not explicitly for mission-critical applications like piloting aircraft or space missions.

GPT models, such as GPT-3 or GPT-4, excel in tasks like text generation, question-answering, and language translation. However, they are not specifically tailored for the control and decision-making tasks required in critical applications like aircraft piloting or space missions. The models and techniques I mentioned earlier are more suitable for these applications due to their focus on control, optimization, and adaptability to dynamic environments.

That said, GPT models could still play a role in mission-critical applications as a part of a larger AI system. For example, they could be used to assist in natural language interfaces or to help analyze and interpret textual data from sensors and reports. However, it is crucial to emphasize that GPT models should not be used as the primary decision-making component for mission-critical applications without substantial adaptation and integration with other specialized control models.
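To make the "classical control" half of that answer a bit more concrete, here is a minimal PID controller sketch of the kind those responses suggest pairing with (or wrapping around) a learned policy. The gains and the toy plant are made up purely for illustration.

Code:
# Minimal PID (Proportional-Integral-Derivative) controller, the classical
# building block the responses above suggest blending with learned policies.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a first-order velocity loop (illustrative dynamics, not a real vehicle).
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
velocity, target = 0.0, 10.0
for _ in range(100):                      # simulate 10 seconds
    command = pid.update(target, velocity)
    velocity += (command - 0.5 * velocity) * 0.1
print(f"velocity after 10 s: {velocity:.2f} (target {target})")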