Author Topic: Use of ChatGPT and how to (and not to) post AI content on the forum  (Read 48292 times)

Online Coastal Ron

  • Senior Member
  • *****
  • Posts: 9743
  • I live... along the coast
  • Liked: 11335
  • Likes Given: 13038
A friend of mine is a long time credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". And they will quite often reference a conversation they had with an online AI, but they give me the link for the conversation instead of pasting in the whole thing.

I view references to conversations as a good way to do things, and not just block post the entire reference.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't directly quote it. Just say what you believe, which will be no different than what everyone has done before the current crop of AI Large Language Models.
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Offline Docabilly

  • Member
  • Posts: 18
  • Liked: 10
  • Likes Given: 504
Actually, I find AI incredibly accurate. Well, Grok is the only one I use. I also ask questions with simple, clear, concise phrases, much like I've done with Google search for the past two decades. Grok's answers are amazingly correct every time.

Online InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3537
  • Seattle
  • Liked: 2599
  • Likes Given: 4337
A friend of mine is a long time credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". And they will quite often reference a conversation they had with an online AI, but they give me the link for the conversation instead of pasting in the whole thing.

I view references to conversations as a good way to do things, and not just block post the entire reference.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't directly quote it. Just say what you believe, which will be no different than what everyone has done before the current crop of AI Large Language Models.

One should review the equations of the AI (really LLM+tools) before posting them.

Wasting everyone's time with unchecked information is in nobody's interest.

References to *full* conversations are where I think the problem is; that's why I recommend summarizing them.  You could post both the summary and the link to the long-winded original.

I'm a coder, and I find the same problem with using LLM generated code.  If it's 20-30 lines, I can quickly review it for hallucinations and other mistakes.

If the LLM gives me 500 lines, my eyes glaze over trying to review it.

So I break problems down into smaller chunks and it's much more pleasant and productive.  I find my productivity to be 2x of normal with LLM assistance.

So the same idea applies here.  Don't make your poor audience digest 5 pages of LLM goo.  Summarize it.
« Last Edit: 05/31/2025 02:17 am by InterestedEngineer »

Offline Vultur

  • Senior Member
  • *****
  • Posts: 3220
  • Liked: 1426
  • Likes Given: 196
Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information,

Yeah. I would honestly prefer a forum rule 'no citing AI'. Let's not perpetuate, even accidentally, the idea that language models are reliable sources for factual information.

Sure, they're often right. Often enough to be convincing, which is why it's dangerous. But then they go wrong in totally bizarre and unexpected ways (Google Lens recently told me that a caterpillar was an aphid, or possibly rust), fail to distinguish between fiction and fact, make up citations that don't exist, etc.
« Last Edit: 05/31/2025 04:34 am by Vultur »

Offline leovinus

  • Full Member
  • ****
  • Posts: 1466
  • Porto, Portugal
  • Liked: 1141
  • Likes Given: 2226
TL;DR so same rules as for people? ;) Use a summary where appropriate and cite factual sources for important conclusions?

For the current generation of LLMs, I like to think of them as the Cliff Clavins from Cheers: long-winded, with hallucinated facts mixed in. In my experience, the more pruned the models, the more hallucinations. In fact, the billion-plus-parameter models like DeepSeek that I run locally produce wrong answers very often. Just parse the answers carefully. The root cause is simply that these LLMs have algorithmic limitations. Think of it as a compression approach from all world knowledge down to N parameters. If N is, e.g., 1, then all answers are hallucinated. And just like with compression research, it took a while to come up with DCT, JPEG, etc. Which is why I support algorithmic research into better language models where facts, memory, reasoning and math are properly integrated.

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3930
  • Australia
  • Liked: 2783
  • Likes Given: 2414
2.  It's far easier to show your work

Isn't the issue that it's not your work? We don't know if it's anyone's work.

Offline Roy_H

  • Full Member
  • ****
  • Posts: 1383
    • Rotating Space Station
  • Liked: 481
  • Likes Given: 3443
I have managed to inadvertently insult some people on this forum. AFAICT it is because I asked questions that I should know or be able to find with my own research, including reading all previous posts. Some threads have a LOT of pages to read through for someone who is new to a particular thread. So this may be another dumb question, but are there forum rules or guidelines I could be directed to in an effort to not cause more distress on this forum?
« Last Edit: 08/01/2025 07:46 pm by russianhalo117 »
"If we don't achieve re-usability, I will consider SpaceX to be a failure." - Elon Musk
Spacestation proposal: https://rotatingspacestation.com

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 833
  • Liked: 817
  • Likes Given: 1238
I have managed to inadvertently insult some people on this forum. AFAICT it is because I asked questions that I should know or be able to find with my own research, including reading all previous posts. Some threads have a LOT of pages to read through for someone who is new to a particular thread. So this may be another dumb question, but are there forum rules or guidelines I could be directed to in an effort to not cause more distress on this forum?
Make an effort at adding value to the community rather than being a drain on the patience and goodwill of others.   Go looking for forum rules rather than spending the time asking others to spoon-feed you answers.

You can be forgiven for missing something in a long thread if you put in the effort to at least skim through it.   But nobody is going to bother spending any effort on your posts if you brag about how you didn't bother to read the thread.

Online InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3537
  • Seattle
  • Liked: 2599
  • Likes Given: 4337
Here's a "How not to post AI content" on the forums.

Don't make it completely obvious and then not credit the AI bot.

If your post starts with a standard ChatGPT "affirming" response (which it is notorious for), then you should either edit that out or credit it (preferably both), or at least move the affirming response to the tail instead of the head.

A random sample of such:

Thank you very much for your detailed analyses — they are extremely valuable because they ...

Thanks for the insightful feedback—that's a crucial point about the return direction potentially

I somewhat excuse the author because of ESL (native French speaker), but he should at least credit the translation to ChatGPT.  I suggest feeding your answer in French to the LLM and then outputting the English translation.

from https://forum.nasaspaceflight.com/index.php?topic=63514.msg2717206#msg2717206
Quote
Thanks again for the thoughtful critiques—this kind of pushback is exactly what I was hoping for when I posted.

https://forum.nasaspaceflight.com/index.php?topic=63514.msg2717219#msg2717219
Quote
I appreciate the concern about clarity and rigor.


Offline Vultur

  • Senior Member
  • *****
  • Posts: 3220
  • Liked: 1426
  • Likes Given: 196
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.

Online Robotbeat

  • Senior Member
  • *****
  • Posts: 41091
  • Minnesota
  • Liked: 27094
  • Likes Given: 12769
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.
Maybe limit it? Like, the big problem is several pages of LLM slop. Using LLMs for translation or finding sources (double-checked) is fine, or even a sentence or two of summary, but dueling LLMs with pages of text is just pointless. No one is gonna read all that.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Offline Vultur

  • Senior Member
  • *****
  • Posts: 3220
  • Liked: 1426
  • Likes Given: 196
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.
Maybe limit it? Like, the big problem is several pages of LLM slop. Using LLMs for translation or finding sources (double-checked) is fine, or even a sentence or two of summary, but dueling LLMs with pages of text is just pointless. No one is gonna read all that.

Translation, maybe.

I dislike even using them to summarize things; my experience is that non-obvious technical things & numbers are very often wrong.

Online TheRadicalModerate

  • Senior Member
  • *****
  • Posts: 6329
  • Tampa, FL
  • Liked: 4444
  • Likes Given: 775
Space nerdery in general and NSF in particular are a hobby for me.  I like working through complicated problems with other people and reasoning out the answers.  It's fun, and I learn a lot.

It stops being fun if I have to interact with a human who's interacting--often ignorantly or badly--with an AI, then forcing the content on me with reasoning that isn't recognizably human.

I don't want to argue with machines.  I like arguing with people.  If people want to use machines to research, shape, and compute their arguments, I'd like to say that's OK, but it frankly feels soulless and empty to me.  The fun of NSF comes from learning the stuff you're posting, in sufficient depth that you can defend your arguments.  You can't do that if an AI is doing all the work.

Meanwhile, my default personal policy is going to be to ignore stuff that's easily recognized as AI.  I hope that NSF doesn't morph into something where that policy precludes me from interacting.

Online InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3537
  • Seattle
  • Liked: 2599
  • Likes Given: 4337
I use AI to format stuff into NSF format with LaTeX images for equations (example below).  AI is good at that, but:

1.  I verify nothing was altered from the original text.  AI sometimes ignores the original text and hallucinates.
2.  I remove all the "what a great response" filler, which is a tell-tale dopamine-hit feel-good thing that ChatGPT and others are notorious for.


I do hate the AI wall of text and the results of bad prompting.

All in all, if your stuff is noticeably AI-generated, and it's long, it's gonna get a sarcastic response of "TL;DR Mr AI"


I say use the AI all you want to generate your thesis.  Take the time to make it look like you, the author, thought about it and that you've distilled it down to the essence and gotten rid of the AI fluff.


(for example the LaTeX version is more readable)

Solve the quadratic:

We start with the equation:
\(x^2 - 5x + 6 = 0\)

solve for the discriminant
\(\Delta = b^2 - 4ac = 25 - 24 = 1\)

which results in
\(x = \frac{5 \pm \sqrt{\Delta}}{2} = \frac{5 \pm 1}{2}\), so \(x = 2\) and \(x = 3\),

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3
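For anyone who wants to double-check the arithmetic, here is a minimal Python sketch of the same quadratic-formula steps (not part of the original post; variable names are my own):

```python
import math

# Solve x^2 - 5x + 6 = 0 via the quadratic formula with a=1, b=-5, c=6.
a, b, c = 1, -5, 6
disc = b * b - 4 * a * c  # discriminant: 25 - 24 = 1
roots = sorted((-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1))
print(roots)  # [2.0, 3.0]
```

The sorted roots come out as 2 and 3, matching the worked example above.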




« Last Edit: 11/05/2025 09:05 pm by InterestedEngineer »

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3930
  • Australia
  • Liked: 2783
  • Likes Given: 2414
We start with the equation:
\(x^2 - 5x + 6 = 0\)

solve for the discriminant
\(\Delta = b^2 - 4ac = 25 - 24 = 1\)

which results in
\(x = \frac{5 \pm \sqrt{\Delta}}{2} = \frac{5 \pm 1}{2}\), so \(x = 2\) and \(x = 3\),

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3


Except those latex equations show as png images, and can't be selected as text / copied / edited / etc. The text equations obviously can.

"Looks pretty, useless in practice" seems like a good metaphor for AI.

Online InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3537
  • Seattle
  • Liked: 2599
  • Likes Given: 4337
We start with the equation:
\(x^2 - 5x + 6 = 0\)

solve for the discriminant
\(\Delta = b^2 - 4ac = 25 - 24 = 1\)

which results in
\(x = \frac{5 \pm \sqrt{\Delta}}{2} = \frac{5 \pm 1}{2}\), so \(x = 2\) and \(x = 3\),

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3


Except those latex equations show as png images, and can't be selected as text / copied / edited / etc. The text equations obviously can.

"Looks pretty, useless in practice" seems like a good metaphor for AI.

It's perfectly simple to extend the prompt to do both

Solve the quadratic:

LaTeX w/ images:

We start with the equation:
\(x^2 - 5x + 6 = 0\)

Compute the solution:
\(x = \frac{5 \pm \sqrt{25 - 24}}{2} = \frac{5 \pm 1}{2}\)

So the solutions are:
\(x = 2\) and \(x = 3\)

Plaintext (for copying):

We start with the equation:
x^2 - 5x + 6 = 0

x = (5 +/- sqrt(25 - 24)) / 2 = (5 +/- 1) / 2

So the solutions are x = 2 and x = 3

but this seems overkill.   How often do you need to copy the equation?

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3930
  • Australia
  • Liked: 2783
  • Likes Given: 2414
How often do you need to copy the equation?

Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.

But mostly, this is my annoyance at the increasing trend of un-highlightable/copyable text in many common websites (Reddit/Youtube). It breaks my muscle-memory for looking up something that I'm not familiar with (highlight, right-click, search. Especially on Vivaldi, which allows multiple search-sites within the context menu; and before that an add-on in Firefox which broke when they changed add-on system, which is why I switched to Vivaldi. That's how baked into my work-flow it is.) None of that is about AI, just the more general enshittification problem.

Online InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3537
  • Seattle
  • Liked: 2599
  • Likes Given: 4337
How often do you need to copy the equation?

Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.

But mostly, this is my annoyance at the increasing trend of un-highlightable/copyable text in many common websites (Reddit/Youtube). It breaks my muscle-memory for looking up something that I'm not familiar with (highlight, right-click, search. Especially on Vivaldi, which allows multiple search-sites within the context menu; and before that an add-on in Firefox which broke when they changed add-on system, which is why I switched to Vivaldi. That's how baked into my work-flow it is.) None of that is about AI, just the more general enshittification problem.

If you are keeping equations around, you should keep them in LaTeX format.  If you are into math, it should be as natural to you as Markdown is to programmers.

In that case, right-click on the image and the LaTeX equation is in the URL.  Copy the URL and paste the LaTeX equation out of it.

for example:  https://latex.codecogs.com/png.latex?\small x=\frac{5\pm\sqrt{25-24}}{2}

which renders as \(x = \frac{5 \pm \sqrt{25-24}}{2}\)

As far as uncopyable text goes, yes, a lot of that is lame attempts at copyright enforcement.  But it's also a lack of proper formatting capabilities.  At minimum all websites should support Markdown and LaTeX equations. But alas, most don't, so we are stuck with what we are stuck with.

Heck, what annoys me most about AI is pictures with text on my phone. I try to press the picture so I can copy it, and I end up selecting the text in the picture (because AI can now "read" the text in a picture).  Who thought *that* was a good idea?  Who needs to select the text in a picture as an everyday use case?

Everyone is always breaking copy & paste. I hate it as much as you do.   But sometimes for readability a formatted picture or other entity is necessary.  If there's a workaround, you can use it.  There's a workaround for LaTeX image generators.
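The URL workaround above can be sketched in a few lines of Python. This is my own illustration, not from the original post: `latex_from_codecogs` is a hypothetical helper name, and it assumes the codecogs convention that everything after the `?` is the (optionally size-prefixed, possibly percent-encoded) LaTeX source:

```python
from urllib.parse import unquote

def latex_from_codecogs(url: str) -> str:
    """Recover the LaTeX source embedded in a codecogs render URL."""
    # Everything after the first '?' is the LaTeX source.
    query = url.split("?", 1)[1]
    tex = unquote(query)  # undo any percent-encoding
    # Strip a leading size directive such as \small, if present.
    for size in (r"\small", r"\large", r"\tiny"):
        if tex.startswith(size):
            tex = tex[len(size):].lstrip()
    return tex

url = r"https://latex.codecogs.com/png.latex?\small x=\frac{5\pm\sqrt{25-24}}{2}"
print(latex_from_codecogs(url))  # x=\frac{5\pm\sqrt{25-24}}{2}
```

Pasting the recovered string into any LaTeX-aware editor reproduces the equation as editable source rather than a PNG.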
« Last Edit: 11/07/2025 09:14 pm by InterestedEngineer »

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3930
  • Australia
  • Liked: 2783
  • Likes Given: 2414
How often do you need to copy the equation?
Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.
If you are keeping equations around, you should keep them in LaTeX format.

Nope. Double nope. Triple nope. My formula & constants text files have followed me from machine to machine since the early '80s. ASCII, 7-bit, no extended characters, never change the order they were added, and absolutely no formatting.

But it's also a lack of proper formatting capabilities.  At minimum all websites should support Markdown and LaTeX equations. But alas, most don't,

Should. Don't. And I've had to learn this lesson many times. Across many operating systems, networks, software, browsers and websites. Should. Don't. Every time.
« Last Edit: 11/08/2025 07:55 pm by Paul451 »
