Garbage in, garbage out. The low-resolution images are being made to look like high-resolution images; actual new information is not being created. I admit that the better-looking image might be easier to interpret and can be a help, but it contains no new information. Scientific uses would be a lot more limited than uses in other fields (like movie CGI). The people who want to sell these kinds of things will claim they can do things they cannot really do.
Also, keep in mind that none of this is artificial intelligence, any more than solving integrals, playing chess, natural language translation or image identification was. The tendency among AI investigators is to identify something they find difficult to do as a benchmark of intelligence, develop an "AI" system that accomplishes the task, and then realize that the product is performing with no intelligence at all. So they go back to their terminals, move the goalposts and try again. In the meantime, entrepreneurs productize these idiot-savant creations and the popular press announces the imminent Rise of the Machines, again. This has been going on since the 1950s, at least.
AI is, fundamentally, a marketing term. It has no consistent technical definition.
I once heard a tongue-in-cheek suggestion of "any program which contains at least one branching instruction," and based on real-world corporate usage that definition seems about right actually...
It occurred to me that this could enable all sorts of designer bugs for ISRU purposes, and maybe even terraforming... Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?
The current iteration of AI is a deep learning system that, instead of being explicitly programmed to do a specific task, learns how to do it by being fed enormous amounts of training data, guessing at what the right answer is, and having the guess evaluated. The quality of the current guess is used to adjust how future guesses are made until they are consistently good enough. Then, whenever new data is presented, the deep learning system will make a guess based on its experience with the training data sets. This is the principle behind the current generation of "AI" systems, minus the mathematics and implementation details.
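The guess-evaluate-adjust loop described above can be sketched with a deliberately tiny toy: a one-parameter model learning the hidden rule y = 3x. The data, learning rate, and numbers here are all illustrative, not from any real system.

```python
import random

# Toy "training data": the hidden rule the model must discover is y = 3 * x.
data = [(x, 3 * x) for x in range(1, 11)]

w = random.uniform(-1.0, 1.0)  # the model's single adjustable parameter
lr = 0.01                      # how strongly each evaluation adjusts future guesses

for epoch in range(200):
    for x, y_true in data:
        y_guess = w * x            # make a guess
        error = y_guess - y_true   # evaluate the guess against the known answer
        w -= lr * error * x        # adjust so future guesses are better

print(round(w, 3))  # prints 3.0
```

A real deep network does the same thing with millions of parameters and a more elaborate adjustment rule (backpropagation), but the guess/evaluate/adjust cycle is the same.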
Well, there also appear to be more recently explored approaches, like Generative Adversarial Networks, that use Darwinian-style iteration, rapidly cycling through candidates to create new content.
As for classifiers, for a lot of routine Earthly applications, that's all you need. If your AI is looking through satellite imagery to spot a tank on the ground, then as long as it has already been trained by looking at enough tanks, it should be able to meet the need. If it's looking for debris of a crashed space probe on the surface of the Moon, then it would have had to train by looking at enough debris fields.
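Stripped to its essence, that kind of classifier is just "compare the new observation to what training showed." A minimal nearest-centroid sketch, with invented 2-D feature vectors standing in for real image features:

```python
# Toy nearest-centroid classifier: label a new feature vector by whichever
# class centroid (averaged from training examples) it sits closest to.
# All feature vectors here are made-up stand-ins for real image features.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

training = {
    "tank":    [(8.0, 1.0), (7.5, 1.2), (8.2, 0.9)],
    "no_tank": [(1.0, 6.0), (0.8, 5.5), (1.3, 6.2)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((7.9, 1.1), centroids))  # prints tank
```

A deep network replaces the hand-picked features and centroids with learned ones, but the principle is the same: it can only recognize what its training data resembled.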
If you're trying to upscale a person's face from low-res to high-res, then having trained on a bunch of faces won't be enough to know if a mole should be visible in the high-res image that wasn't apparent in the low-res one. But for generalizations, it should be okay to interpolate/extrapolate. Rather than creating new information, you're "transferring" (inferring) it from the training data.
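That point is easy to see in the simplest possible upscaler: every "new" pixel is just a weighted average of existing pixels, so nothing appears in the output that wasn't derivable from the input. A 1-D linear-interpolation sketch:

```python
# Upscale a row of pixels by linear interpolation. Every inserted value is a
# weighted average of neighboring input pixels: no new information is created.
def upscale_1d(pixels, factor):
    n = len(pixels)
    out = []
    for i in range((n - 1) * factor + 1):
        pos = i / factor              # position in input coordinates
        lo = int(pos)                 # nearest input pixel to the left
        hi = min(lo + 1, n - 1)       # nearest input pixel to the right
        t = pos - lo                  # blend weight between the two
        out.append((1 - t) * pixels[lo] + t * pixels[hi])
    return out

print(upscale_1d([0.0, 4.0, 8.0], 2))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

An ML upscaler replaces the fixed averaging weights with patterns learned from training images, which is exactly the "transferring" described above: the extra detail comes from the training set, not from the low-res input.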
The joke-that's-not-a-joke within the field is that "AI is the set of all software problems we can't solve yet." Once we solve a given AI problem (e.g. defeating a chess grandmaster) it immediately gets its own specialized name ("two-player game playing") and thus it no longer falls under "AI." A nice catch-22!
Quote from: Twark_Main on 09/26/2022 11:52 pm
AI is, fundamentally, a marketing term. It has no consistent technical definition.

I agree.

Quote
I once heard a tongue-in-cheek suggestion of "any program which contains at least one branching instruction," and based on real-world corporate usage that definition seems about right actually...

I bet the max(0, x) in deep neural network ReLU units is typically computed using instructions for max rather than branches. If that's true then deep neural networks don't actually include any branches in the core code. So modern AI code may actually execute fewer branches per second than many non-AI tasks such as sorting.
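The branch-free formulation is easy to show in code (whether the compiled kernel really uses max-type instructions depends on the backend and hardware, so treat the comment as a plausible assumption rather than a guarantee):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

# Branch-free ReLU: np.maximum is an element-wise max, which on most hardware
# can map to max-type vector instructions rather than per-element branches.
relu = np.maximum(0.0, x)

# Equivalent branchy version, for contrast: one if/else per element.
relu_branchy = np.array([v if v > 0 else 0.0 for v in x])

print(relu.tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

Both compute the same function; only the second spells it with an explicit branch.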
Autonomous onboard navigation...
Quote from: sanman on 09/19/2022 12:03 am
It occurred to me that this could enable all sorts of designer bugs for ISRU purposes, and maybe even terraforming... Could we even use AI to design complex ecosystems of organisms that would cope with the existing Mars conditions while working to transform the environment into one that's more human-friendly?

As far as I know, a human team would be unable to create "designer bugs," so AI can't do it either. There is no magic here.
Quote from: jimvela on 11/15/2022 03:32 pm
Autonomous onboard navigation...

...Adding an "Autonomous" level is an additional level of complexity beyond that. Hence, first get the basics right, and then we can talk about AI additions.
Isn't this exactly the kind of stuff a genetic evolutionary approach could tackle? To try to synthesize viable biochemical paths that work in the Martian environment? Btw, there have been things like genetically evolved antennas done by NASA: https://en.wikipedia.org/wiki/Evolved_antenna

(As a side note, since you've been in this field for a long time: do genetic evolution algos usually fall under the AI moniker?)
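The select-mutate-iterate loop behind things like the evolved antenna can be sketched on a toy problem (the classic "OneMax": evolve bit strings toward all ones). The settings here are illustrative; a real evolved-antenna run uses a physics simulator as the fitness function and a far richer genome.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy genetic algorithm: evolve bit strings toward the all-ones optimum.
LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 100, 0.05

def fitness(bits):
    return sum(bits)  # stand-in for "how well does this design perform?"

def mutate(bits):
    return [b ^ 1 if random.random() < MUTATION else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]  # selection: keep the fitter half unchanged
    pop = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(fitness(best))  # climbs to (or very near) the optimum of 20
```

No gradient, no explicit programming of the solution: just variation plus selection, which is why these algorithms usually do get filed under the AI umbrella (as "evolutionary computation").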