Author Topic: Suggestions on how to (and not to) post AI content on the forum  (Read 2166 times)

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3213
  • Seattle
  • Liked: 2388
  • Likes Given: 3975
I have received legitimate complaints about posting long-winded ChatGPT sessions onto the forum (as a link). See, for example, https://forum.nasaspaceflight.com/index.php?topic=60195.msg2690371#msg2690371


The chief complaint is the verbosity. Asking humans to wade through pages of AI-generated material (including all the mistakes and their corrections) puts the burden of work on the wrong side.

Those who hate AIs would simply tell us not to post such things at all, but:

1.  The AIs can now solve equations, and the equations come out in very readable LaTeX (see the example after this list).
2.  It's far easier to show your work, especially with a nice formatter (LaTeX and Markdown rendered to HTML).
3.  I don't have all day to manually run equations and format everything, and there aren't enough people doing the math already; we don't need to make it harder.
4.  It's the early-to-mid 21st century, and I try to keep up with the tools available.
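
As an illustration of point 1 (the equation here is just a standard textbook one, not output from any particular session), the LaTeX a chat model emits is readable even as raw source:

```latex
% The Tsiolkovsky rocket equation, as a chat model typically formats it:
\Delta v = v_e \ln\!\left(\frac{m_0}{m_f}\right)
% where v_e is effective exhaust velocity, m_0 the wet mass, m_f the dry mass
```

Rendered to HTML by MathJax or similar, that comes out as a properly typeset equation rather than ASCII soup.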

Here's an attempt to get the extremely verbose AI to shut up and post just a summary of the conversation, so that the false paths are elided and the part you actually want your fellow forum readers to see is short and concise.

Here's what I came up with for ChatGPT:

1. In the long-winded thread, ask your AI to write an executive summary, including equations and their solutions where applicable (or script this step; see the sketch after this list).
2. Copy that executive summary using the "copy" button.
3. Start a new thread.
4. Paste your final summary with equations after this text: "Please summarize the discussion as follows".
5. Click the share button to get a hyperlink.
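
For those who would rather script the summary step than click through the web UI, here's a minimal sketch. It assumes the `openai` Python package with an API key in the environment; the model name and prompt wording are my own illustrative choices, not part of ChatGPT's UI flow:

```python
# Minimal sketch: ask a model for a forum-ready executive summary of a
# long transcript. Assumes the `openai` package (>= 1.0) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_transcript(transcript: str) -> str:
    """Return a short executive summary, keeping equations in LaTeX."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[
            {"role": "system",
             "content": ("Write a concise executive summary of the "
                         "following conversation. Keep final equations "
                         "and their solutions in LaTeX; omit all false "
                         "starts and corrections.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

Either way, the end result is the same: the audience gets the short version, and anyone who wants the full session can follow the share link.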




Offline sdsds

  • Senior Member
  • *****
  • Posts: 8194
  • “With peace and hope for all mankind.”
  • Seattle
  • Liked: 2824
  • Likes Given: 2554
Posting the link as you did (with a proper indication of what it was) seems totally appropriate. It's a little more nuanced with diagrams that portray conceptual designs. If an AI helped create some iterations of a diagram, does that need to be disclosed when the diagram is attached to a forum post? (I personally don't think so, but having 'guide rails' in place does seem to make sense.)
— 𝐬𝐝𝐒𝐝𝐬 —

Online Coastal Ron

  • Senior Member
  • *****
  • Posts: 9498
  • I live... along the coast
  • Liked: 10999
  • Likes Given: 12653
A friend of mine is a long-time, credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". They will quite often reference a conversation they had with an online AI, but they give me the link to the conversation instead of pasting in the whole thing.

I view linking to conversations as the right way to do it, rather than block-posting the entire conversation.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't quote it directly. Just say what you believe, which is no different from what everyone did before the current crop of AI Large Language Models.
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Offline Docabilly

  • Member
  • Posts: 17
  • Liked: 8
  • Likes Given: 477
Actually, I find AI incredibly correct. Well, Grok is the only one I use. I also ask questions with simple, clear, concise phrases, much like I've done with Google search for the past two decades. Grok's answers are amazingly correct every time.

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3213
  • Seattle
  • Liked: 2388
  • Likes Given: 3975
Quote from: Coastal Ron
A friend of mine is a long-time, credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". They will quite often reference a conversation they had with an online AI, but they give me the link to the conversation instead of pasting in the whole thing.

I view linking to conversations as the right way to do it, rather than block-posting the entire conversation.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't quote it directly. Just say what you believe, which is no different from what everyone did before the current crop of AI Large Language Models.

One should review the AI's equations (really, an LLM plus tools) before posting them.

Wasting everyone's time with unchecked information is in nobody's interest.

References to *full* conversations are where I think the problem lies, which is why I recommend summarizing. You could post both the summary link and the long-winded link.

I'm a coder, and I find the same problem with LLM-generated code. If it's 20-30 lines, I can quickly review it for hallucinations and other mistakes.

If the LLM gives me 500 lines, my eyes glaze over trying to review it.

So I break problems down into smaller chunks, and the work is much more pleasant and productive. I find my productivity with LLM assistance to be about twice normal.
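
To give a concrete (and entirely hypothetical) sense of the scale I mean, a chunk like this is easy to check by eye, and a one-line sanity test catches a wrong formula immediately; the function and test are my own example, not from any particular LLM session:

```python
# Hypothetical example of a reviewable chunk: one small function plus a
# sanity check, so a hallucinated formula or bad unit is easy to spot.
import math

def tsiolkovsky_delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Ideal delta-v (m/s) from Isp (seconds), wet mass, and dry mass."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m0_kg / mf_kg)

# Sanity check: a mass ratio of e should give delta-v of exactly Isp * g0.
assert abs(tsiolkovsky_delta_v(350.0, math.e, 1.0) - 350.0 * 9.80665) < 1e-6
```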

The same idea applies here: don't make your poor audience digest five pages of LLM goo. Summarize it.
« Last Edit: 05/31/2025 02:17 am by InterestedEngineer »

Offline Vultur

  • Senior Member
  • *****
  • Posts: 2456
  • Liked: 1049
  • Likes Given: 184
Quote from: Coastal Ron
Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information...

Yeah. I would honestly prefer a forum rule of 'no citing AI'. Let's not perpetuate, even accidentally, the idea that language models are reliable sources of factual information.

Sure, they're often right. Often enough to be convincing, which is what makes them dangerous. But then they go wrong in totally bizarre and unexpected ways (Google Lens recently told me that a caterpillar was an aphid, or possibly rust), fail to distinguish between fiction and fact, make up citations that don't exist, etc.
« Last Edit: 05/31/2025 04:34 am by Vultur »

Offline leovinus

  • Full Member
  • ****
  • Posts: 1315
  • Porto, Portugal
  • Liked: 1047
  • Likes Given: 2008
TL;DR: so, the same rules as for people? ;) Use a summary where appropriate, and cite factual sources for important conclusions?

For the current generation of LLMs, I like to think of them as the Cliff Clavins from Cheers: long-winded, with hallucinated facts mixed in. In my experience, the more pruned the model, the more hallucinations; even the billion-plus-parameter models like DeepSeek that I run locally produce wrong answers quite often. So parse the answers carefully. The root cause is simply that these LLMs have algorithmic limitations. Think of it as compressing all world knowledge into N parameters: if N is, say, 1, then every answer is a hallucination. And just as with compression research it took a while to come up with DCT, JPEG, etc., it will take a while here, which is why I support algorithmic research into better language models where facts, memory, reasoning, and math are properly integrated.
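
To make the compression analogy concrete, here's a toy numerical sketch (the signal and the keep-N scheme are invented purely for illustration, assuming NumPy and SciPy): keep only the N largest DCT coefficients of a signal, and the reconstruction error grows as N shrinks, just as answers degrade as a model is pruned:

```python
# Toy illustration of the compression analogy: keep only the N largest
# DCT coefficients of a signal; reconstruction error grows as N shrinks.
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 17 * t)

coeffs = dct(signal, norm="ortho")
for n in (64, 16, 4, 1):
    kept = np.zeros_like(coeffs)
    top = np.argsort(np.abs(coeffs))[-n:]  # indices of the N largest
    kept[top] = coeffs[top]
    rms = np.sqrt(np.mean((signal - idct(kept, norm="ortho")) ** 2))
    print(f"N={n:3d}  RMS error={rms:.3f}")  # error rises as N falls
```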
