It isn't just spaceflight content on YouTube. It's happening on every platform on just about every topic. It's making all these platforms untrustworthy even if you think you know who the content provider is.
Quote from: Eric Hedman on 09/24/2025 06:16 am
It isn't just spaceflight content on YouTube. It's happening on every platform on just about every topic. It's making all these platforms untrustworthy even if you think you know who the content provider is.

And that goes both ways. I've seen disgusting comments on channels accusing creators of using AI-generated voices, forcing them to come out and defend themselves. Reading a script you wrote yourself, with the pauses edited out to speed up the video, doesn't make your voice AI-generated or synthesized. I think adding a small box in one corner showing you speaking would help, but many creators don't do that, because of gender, disability, or other biases they would face.
Indeed, very worrisome.

On top of that, I read an article by a Danish psychologist who had interacted with AI engines while pretending to suffer from various psychological illnesses. He said that the conversations with AI would surely exacerbate the illnesses people might suffer from, because the AI constantly encouraged him and egged him on, even when he displayed severely sociopathic behaviour.

We're right now repeating all the errors we made fifteen years ago with social media, but repeating them with a much more potent technology. It will have huge societal impact.
The CEO of a company that's pumping out thousands of lazily AI-generated podcasts thinks everybody is complaining too much about having AI slop shoved down their throats.

Inception Point AI CEO Jeanine Wright told the Hollywood Reporter that "people who are still referring to all AI-generated content as AI slop are probably lazy luddites."
Besides pumping out thousands of AI podcasts, Inception Point also aims to turn AI-generated personalities into influencers on social media, a tactic vaguely reminiscent of Meta's ill-fated attempts to launch AI chatbots based on real-world celebrities and load its platforms with AI-powered characters.

For its podcasts, Inception Point's AI chooses topics based on Google data and social media trends, according to the Reporter. It then launches five different versions of each show to see if any of them stick. To double down on search engine optimization, some of the podcasts' titles are extremely basic, such as "Whales," a show about whales.

Wright defends those appalling practices, saying it's all a numbers game.

"We might make a pollen podcast that maybe only 50 people listen to, but I'm already at unit profitability on that, and so then maybe I can make 500 pollen report podcasts," she told the Reporter.
But what happens when AI allows us to easily whip up machines custom-designed for particular tasks? At that point it could be just as easy to create a machine tailored to a particular task as to rely on a general-purpose one.

That's when we could have the machine version of spam. Instead of just text spam and audio-visual spam, we could have a spam-of-machines, or spam-of-things. We could be up to our eyeballs in custom-made one-offs created to do some particular thing, simply because it was effortless enough to make them that way.
It is a bit harsh to blame AI for the proliferation of junk.
Quote from: John-H on 10/03/2025 12:13 am
It is a bit harsh to blame AI for the proliferation of junk.

Two recent examples.
Oct 30, 2025 Neil deGrasse Tyson Explains...

What is real on the internet? Neil deGrasse Tyson breaks down deepfakes, discusses the time a deepfake video tricked Terry Crews, and learns about how AI is used to deceive with the help of Bitdefender Chief Security Strategist, Alex Cosoi.

Timestamps:
00:00 - IT'S FLAT
00:32 - What is "Deep" About a Deepfake?
2:35 - Neil Getting Deepfaked
6:06 - The Stakes of Political Deepfakes
8:11 - Scams to Watch Out For
13:50 - Are We Losing Against Deepfakes?
16:00 - Knowing What's Real
Yeah, surfing various social media sites, I've seen this steady increase: the tech gets better, the disinformation gets easier. There are weird side effects too, namely that people are increasingly skeptical of real images. I don't think that's unexpected, and in fact it is often part of the point of disinformation and propaganda (erode trust in legitimate sources, and in reality). But I've seen it even in obvious situations where somebody posts a video or image that by any measure is legit, and some people still say "it's AI." If they had checked a few different sources, they would have confirmed that it was legit.

Addendum: just the other day I saw another related example. Somebody had posted space art, and some comments claimed it was AI-generated. The poster responded that he was a graphic artist and had created it using various CGI tools. He was polite about it, but I could sense that he was rather mad that something he had spent a long time creating was dismissed so easily.
Here's a good example. He just reposted this from 2023. He spends a lot of time producing incredible animations (not so much still images). And then somebody comes along and gives a prompt to an AI and produces junk.I don't know what he does for a living, but he has great CGI skills.
I generally despise laziness (except, I guess, when I'm being lazy), and I've seen how AI is contributing to laziness. People go to an AI and say "give me a spacecraft image" and plop the result into their post about Voyager, when they could just as easily go to a search engine, type "Voyager," and get the right photo.

Here's an example. I've recently seen this from a guy on Twitter who should absolutely know better. He regularly posts images of a specific topic (not space), and everything I've seen indicates he knows the subject well. And recently he has been re-posting AI images that an expert can tell are wrong in two seconds. Fortunately, his followers are calling him out on it and telling him that his credibility is suffering.
Quote from: Blackstar on 11/02/2025 12:10 am
I generally despise laziness (except, I guess, when I'm being lazy), and I've seen how AI is contributing to laziness. People go to an AI and say "give me a spacecraft image" and plop the result into their post about Voyager, when they could just as easily go to a search engine, type "Voyager," and get the right photo. Here's an example. I've recently seen this from a guy on Twitter who should absolutely know better. He regularly posts images of a specific topic (not space), and everything I've seen indicates he knows the subject well. And recently he has been re-posting AI images that an expert can tell are wrong in two seconds. Fortunately, his followers are calling him out on it and telling him that his credibility is suffering.

What is crazy, in my mind, is that Voyager is iconic; how would anyone not immediately recognize that it's the wrong spacecraft? Maybe it's a generational thing...
Researchers suggest there is now more AI-generated content appearing on the web than human-generated content. Mike Pound from the University of Nottingham talks about why this might be a problem.
when humans were introduced to AI
https://youtube.com/watch?v=OxLjvY5CrjQ
We used to worry that history was written by the victors. Today, we should be scared for a different reason. If you want to see the future of our historical record, look no further than the dozens of YouTube and TikTok accounts – many with hundreds of thousands of subscribers and millions of views – which pump out AI-generated "documentaries".

One popular account I came across does the seemingly impossible by churning out a two-hour "programme" every other day. For any human being or production team – actual historians who care about accuracy, nuance or truth – this pace is physically and intellectually impossible. For an AI, however, it's just another hour or two (if that) in the office, and that is a massive problem.

We are witnessing the industrial-scale pollution of our collective memory. The colourful and intricate tapestry of human history is being shredded and replaced by a cheap, often grossly inaccurate imitation. And the worst part is that many viewers believe what they see.
May I offer a different perspective? Instead of all the "the sky is falling and rubbish is everywhere" posts, maybe, just maybe, it is time to let go of YouTube et al.

There was a time when the quality of content was important: no quality, no readers. Today, Google and Facebook care only about the money. Any content that makes people click serves them an advertisement and so generates money for them, not for the reader/user. Let it go, people, and find a better platform to discuss and express yourself.

We saw similar issues, on a smaller scale, in the early 90s, with the transition from Usenet to WWW/HTML. HTML pages and graphics were so sexy and new, and Usenet was already besieged by spam and rubbish. I remember the neo-hippies posting daily in sci.physics with "I had a dream about the structure of space-time and all is clear to me". Really, that was worse than semi-accurate AI-generated content. Then we had examples like GeoCities: a good concept, and the whole thing went under with spam, a lack of good content, and sexier alternatives like Fakebook (excuse me, "Facebook").

Today, it's a similar story: YouTube et al. go under for lack of anything useful. As such, I'd love to see new content servers and sites with factual and accurate stories. NSF maybe? There is plenty of text and video to host on Chris' own site if he chose to do it. I have seen this transition with one or two other sites I use frequently, and it seems to work: there is a subset of readers and contributors who pay and make it work. On your own site, you can moderate.