Author Topic: Use of ChatGPT and how to (and not to) post AI content on the forum  (Read 49337 times)

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 38804
  • Cape Canaveral Spaceport
  • Liked: 23719
  • Likes Given: 436
I have a suspicion that some members are using this to make posts

Offline Asteroza

  • Senior Member
  • *****
  • Posts: 3127
  • Liked: 1211
  • Likes Given: 35
I have a suspicion that some members are using this to make posts

In a generic manner, or for specific aims? I freely admit the recent surge in low post count posters and thread necromancy seems suspect in a standalone manner, but are you implying there is a greater goal someone is trying to achieve here? Organized disinfo campaigns, astroturfed mindshare influence ops, or trying to specifically reduce the quality/authenticity/authority of NSF?

Offline Metalskin

  • Full Member
  • ***
  • Posts: 314
  • Brisbane, Australia
  • Liked: 254
  • Likes Given: 2309
Never underestimate the human need to be seen as knowledgeable, wise and right. I could easily see some types of personalities being attracted to ChatGPT to feed such needs.

I wouldn't necessarily suggest that there is any nefarious intent, more just feeding one's own needs. That's how I see it from an Occam's razor pov...
How inappropriate to call this planet Earth when it is quite clearly Ocean. - Arthur C. Clarke

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 38804
  • Cape Canaveral Spaceport
  • Liked: 23719
  • Likes Given: 436
I have a suspicion that some members are using this to make posts

In a generic manner, or for specific aims? I freely admit the recent surge in low post count posters and thread necromancy seems suspect in a standalone manner, but are you implying there is a greater goal someone is trying to achieve here? Organized disinfo campaigns, astroturfed mindshare influence ops, or trying to specifically reduce the quality/authenticity/authority of NSF?

Generic.  There seem to be some posts that have technically correct information but don't really convey anything
« Last Edit: 07/18/2023 02:41 am by Jim »

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 38804
  • Cape Canaveral Spaceport
  • Liked: 23719
  • Likes Given: 436

In a generic manner, or for specific aims? I freely admit the recent surge in low post count posters and thread necromancy seems suspect in a standalone manner,

I wasn't even thinking of those but it fits.
« Last Edit: 07/18/2023 02:43 am by Jim »

Offline catdlr

  • She will always be part of me.
  • Global Moderator
  • Senior Member
  • *****
  • Posts: 27830
  • Enthusiast since the Redstone and Thunderbirds
  • Marina del Rey, California, USA
  • Liked: 22846
  • Likes Given: 13506
I have a suspicion that some members are using this to make posts

In a generic manner, or for specific aims? I freely admit the recent surge in low post count posters and thread necromancy seems suspect in a standalone manner, but are you implying there is a greater goal someone is trying to achieve here? Organized disinfo campaigns, astroturfed mindshare influence ops, or trying to specifically reduce the quality/authenticity/authority of NSF?

Generic.  There seem to be posts that have technically correct information but don't really convey anything

Exactly Jim.  I've noticed that on some posts.  It's like receiving a perfectly constructed email from a politician.  Lots of words, with little meat.
« Last Edit: 07/18/2023 02:42 am by catdlr »
PSA #3:  Paywall? View this video on how-to temporary Disable Java-Script: youtu.be/KvBv16tw-UM
A golden rule from Chris B:  "focus on what is being said, not disparage people who say it."

Offline tater

  • Full Member
  • *
  • Posts: 128
  • NM
  • Liked: 137
  • Likes Given: 269
I have a suspicion that some members are using this to make posts

In a generic manner, or for specific aims? I freely admit the recent surge in low post count posters and thread necromancy seems suspect in a standalone manner, but are you implying there is a greater goal someone is trying to achieve here? Organized disinfo campaigns, astroturfed mindshare influence ops, or trying to specifically reduce the quality/authenticity/authority of NSF?

Generic.  There seem to be posts that have technically correct information but don't really convey anything

Exactly Jim.  I've noticed that on some posts.  It's like receiving a perfectly constructed email from a politician.  Lots of words, with little meat.

To be fair, that describes a decent percentage of human content on forums, and the models are trained on human writing, so to the extent a model can replicate a lot of words saying nothing, it would be doing a good job of copying humans.

Offline SpacedX

  • Member
  • Posts: 39
  • Gatineau
  • Liked: 11
  • Likes Given: 1301
I have a suspicion that some members are using this to make posts


Duh.

Offline joncz

  • Veteran
  • Full Member
  • ****
  • Posts: 534
  • Atlanta, Georgia
  • Liked: 311
  • Likes Given: 435
I have a suspicion that some members are using this to make posts


Duh.

ChatGPT suggests, "No surprise there" as a more sophisticated alternative to "Duh"

Offline Newton_V

  • Full Member
  • ****
  • Posts: 898
  • United States
  • Liked: 923
  • Likes Given: 135
Would ChatGPT create posts that begin with prepositions, include paragraph-long run-on sentences, and use the word "earmarked"?

Offline ulm_atms

  • Rocket Junky
  • Full Member
  • ****
  • Posts: 994
  • To boldly go where no government has gone before.
  • Liked: 1701
  • Likes Given: 1113
I have a suspicion that some members are using this to make posts
Yes, I 100% agree.  I have been wanting to say something but just held my tongue.

Some of these posts I am reading, especially from a few specific members, just ooze ChatGPT or an AI of sorts.  That, combined with some extreme necroing of threads, looks like someone just copied the thread into an AI and pasted the output as the next comment. (An AI forum crawler I have seen on Reddit is an example of this.)

I don't mind AI-generated content, but please state if you're using it, as AI likes to go off on tangents, which I have seen here... a lot lately too.

Offline SimonFD

I'd be looking out for 'confidently wrong' information in posts that signal ChatGPT's involvement.
Mind you we do have Jim for that ;)
Time is an illusion. Lunchtime doubly so

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 38804
  • Cape Canaveral Spaceport
  • Liked: 23719
  • Likes Given: 436
Do we need to bring this to the attention of the moderators?

Offline chopsticks

  • Full Member
  • ****
  • Posts: 1202
  • Québec, Canada
  • Liked: 1202
  • Likes Given: 172
I don't doubt that ChatGPT is probably used in some posts, but I have to wonder: what's the point? I don't quite understand why someone would do this, or what the motive would be.

And I agree, I find the 10 year old threads being dug up to just post an update on what happened kind of annoying. Maybe that's the motive?

Offline SpacedX

  • Member
  • Posts: 39
  • Gatineau
  • Liked: 11
  • Likes Given: 1301
Do we need to bring this to the attention of the moderators?


No.

Offline Jim

  • Night Gator
  • Senior Member
  • *****
  • Posts: 38804
  • Cape Canaveral Spaceport
  • Liked: 23719
  • Likes Given: 436
Jeesh.

When I started this thread I was going by one or two posts by certain members.  Using the clues provided by the posters on this thread and then looking at the suspect member's profiles and all their posts at once, it really sticks out.

Online DanClemmensen

  • Senior Member
  • *****
  • Posts: 9338
  • Earth (currently)
  • Liked: 7502
  • Likes Given: 3226
Jeesh.

When I started this thread I was going by one or two posts by certain members.  Using the clues provided by the posters on this thread and then looking at the suspect member's profiles and all their posts at once, it really sticks out.
I think college professors use tools to detect stuff generated by ChatGPT. I don't know if those tools would work in our environment, and apparently Jim can do this manually quicker than we could set up such a tool.

Online Chris Bergin

There is absolutely no way to know. I think the biggest problem is long-term members being rude and making such claims to new members, who may come across as using AI when they are likely just not first-language English speakers using a translation tool to post in English.

Our priority here is civility and relevance of content.
Support NSF via L2 -- JOIN THE NSF TEAM -- Site Rules/Feedback/Updates
**Not a L2 member? Whitelist this forum in your adblocker to support the site and ensure full functionality.**

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
I have received legitimate complaints about posting long-winded ChatGPT sessions onto the forums (as a link).  See for example https://forum.nasaspaceflight.com/index.php?topic=60195.msg2690371#msg2690371


The chief complaint is the verbosity.  Asking humans to wade through pages of AI-generated output (including all the mistakes and their corrections) isn't a fair burden of work.

Those who hate AIs would just tell us to not post such things, but:

1.  The AIs can now solve equations, and the equations are in very readable LaTeX format.
2.  It's far easier to show your work, especially with a nice formatter (LaTeX and Markdown rendered to HTML).
3.  I don't have all day to manually run equations and format stuff, and there aren't enough people doing the math already; we don't need to make it harder.
4.  It's the early-mid 21st century, and I try to keep up with the tools available.

So here's an attempt to get the extremely verbose AI to shut up and post just the summary of a conversation, so that all the false paths are elided and just the part you want your fellow forum readers to look at is short and concise.

So here's what I came up with for ChatGPT:

1. Ask your AI to write an executive summary, including equations and their solutions (if applicable), of the long-winded thread.
2. Copy that executive summary using the "copy" button.
3. Start a new thread.
4. Paste your final summary with equations after this text: "Please summarize the discussion as follows".
5. Click the share button to get a hyperlink.




Offline sdsds

  • Senior Member
  • *****
  • Posts: 8582
  • “With peace and hope for all mankind.”
  • Seattle
  • Liked: 3022
  • Likes Given: 2759
Posting the link as you did (with proper indication of what it was) seems totally appropriate. It's a little more nuanced with diagrams that portray conceptual designs. If an AI helped create some iterations of the diagram, does that need to be disclosed if the diagram is attached to a forum post? (I personally don't think so, but having 'guide rails' in place does seem to make sense.)
— 𝐬𝐝𝐒𝐝𝐬 —

Offline Coastal Ron

  • Senior Member
  • *****
  • Posts: 9755
  • I live... along the coast
  • Liked: 11352
  • Likes Given: 13051
A friend of mine is a long-time credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". They will quite often reference a conversation they had with an online AI, but they give me the link for the conversation instead of pasting in the whole thing.

I view linking to conversations as a good way to do things, rather than block-posting the entire reference.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't directly quote it. Just say what you believe, which will be no different than what everyone has done before the current crop of AI Large Language Models.
If we don't continuously lower the cost to access space, how are we ever going to afford to expand humanity out into space?

Offline Docabilly

  • Member
  • Posts: 18
  • Liked: 10
  • Likes Given: 504
Actually I find AI incredibly correct.  Well, Grok is the only one I use.  I also ask questions with simple, clear, concise phrases, much like I've done with Google search for the past two decades.  Grok's answers are amazingly correct every time

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
A friend of mine is a long-time credentialed A.I. expert, and we have constant discussions about the state of so-called "artificial intelligence". They will quite often reference a conversation they had with an online AI, but they give me the link for the conversation instead of pasting in the whole thing.

I view linking to conversations as a good way to do things, rather than block-posting the entire reference.

Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information, don't directly quote it. Just say what you believe, which will be no different than what everyone has done before the current crop of AI Large Language Models.

One should review the AI's equations (really LLM+tools) before posting them.

Wasting everyone's time with unchecked information is in nobody's interest.

References to *full* conversations are where I think the problem is, which is why I recommend summarizing them.  You could post both the summary link and the long-winded link.

I'm a coder, and I find the same problem with using LLM generated code.  If it's 20-30 lines, I can quickly review it for hallucinations and other mistakes.

If the LLM gives me 500 lines, my eyes glaze over trying to review it.

So I break problems down into smaller chunks, and it's much more pleasant and productive.  I find my productivity to be about 2x normal with LLM assistance.

So the same idea applies here.  Don't make your poor audience digest 5 pages of LLM goo.  Summarize it.
« Last Edit: 05/31/2025 02:17 am by InterestedEngineer »

Online Vultur

  • Senior Member
  • *****
  • Posts: 3237
  • Liked: 1435
  • Likes Given: 197
Plus, in case you didn't already know this, AI is wrong quite a bit. Sometimes more than 50% of the time (plus it will lie, and even attempt to blackmail  ;)). So instead of normalizing false information,

Yeah. I would honestly prefer a forum rule 'no citing AI'. Let's not perpetuate, even accidentally, the idea that language models are reliable sources for factual information.

Sure, they're often right. Often enough to be convincing, which is why it's dangerous. But then they unexpectedly go wrong in totally bizarre ways (Google Lens recently told me that a caterpillar was an aphid, or possibly rust), fail to distinguish between fiction and fact, make up citations that don't exist, etc.
« Last Edit: 05/31/2025 04:34 am by Vultur »

Offline leovinus

  • Full Member
  • ****
  • Posts: 1478
  • Porto, Portugal
  • Liked: 1153
  • Likes Given: 2238
TL;DR: so, same rules as for people? ;) Use a summary where appropriate and cite factual sources for important conclusions?

For the current generation of LLMs, I like to think of them as the Cliff Clavins from Cheers: long-winded, with hallucinated facts in there. In my experience, the more pruned the model, the more hallucinations. In fact, the billion-plus-parameter models like DeepSeek that I run locally very often produce wrong answers. Just parse the answers carefully. The root cause is simply that these LLMs have algorithmic limitations. Think of it as compressing all world knowledge into N parameters: if N is, e.g., 1, then all answers are hallucinated. And just like with compression research, it took a while to come up with DCT, JPEG, etc. Which is why I support algorithmic research into better language models where facts, memory, reasoning and math are properly integrated.

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3945
  • Australia
  • Liked: 2792
  • Likes Given: 2418
2.  It's far easier to show your work

Isn't the issue that it's not *your* work? We don't know if it's anyone's work.

Offline Roy_H

  • Full Member
  • ****
  • Posts: 1386
    • Rotating Space Station
  • Liked: 483
  • Likes Given: 3460
I have managed to inadvertently insult some people on this forum. AFAICT it is because I asked questions that I should know the answers to, or be able to find with my own research, including reading all previous posts. Some threads have a LOT of pages to read through for someone who is new to a particular thread. So this may be another dumb question, but are there forum rules or guidelines I could be directed to, in an effort not to cause more distress on this forum?
« Last Edit: 08/01/2025 07:46 pm by russianhalo117 »
"If we don't achieve re-usability, I will consider SpaceX to be a failure." - Elon Musk
Spacestation proposal: https://rotatingspacestation.com

Offline launchwatcher

  • Full Member
  • ****
  • Posts: 833
  • Liked: 817
  • Likes Given: 1238
I have managed to inadvertently insult some people on this forum. AFAICT it is because I asked questions that I should know the answers to, or be able to find with my own research, including reading all previous posts. Some threads have a LOT of pages to read through for someone who is new to a particular thread. So this may be another dumb question, but are there forum rules or guidelines I could be directed to, in an effort not to cause more distress on this forum?
Make an effort at adding value to the community rather than being a drain on the patience and goodwill of others.   Go looking for forum rules rather than spending the time asking others to spoon-feed you answers.

You can be forgiven for missing something in a long thread if you put in the effort to at least skim through it.   But nobody is going to bother spending any effort on your posts if you brag about how you didn't bother to read the thread.

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
Here's a "How not to post AI content" example for the forums.

Don't make it completely obvious and then not credit the AI bot.

If your post starts with a standard ChatGPT "affirming" response (which it is notorious for), then you should either edit that out or credit it (preferably both), or at least move the affirming response to the tail instead of the head.

A random sample of such:

Thank you very much for your detailed analyses — they are extremely valuable because they ...

Thanks for the insightful feedback—that's a crucial point about the return direction potentially

I somewhat excuse the author because of ESL (native French speaker), but he should at least credit the translation to ChatGPT.  I suggest feeding your answer in French to the LLM and then outputting the English translation.

from https://forum.nasaspaceflight.com/index.php?topic=63514.msg2717206#msg2717206
Quote
Thanks again for the thoughtful critiques—this kind of pushback is exactly what I was hoping for when I posted.

https://forum.nasaspaceflight.com/index.php?topic=63514.msg2717219#msg2717219
Quote
I appreciate the concern about clarity and rigor.


Online Vultur

  • Senior Member
  • *****
  • Posts: 3237
  • Liked: 1435
  • Likes Given: 197
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.

Offline Robotbeat

  • Senior Member
  • *****
  • Posts: 41095
  • Minnesota
  • Liked: 27111
  • Likes Given: 12773
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.
Maybe limit it? Like, the big problem is several pages of LLM slop. Using LLMs for translation or finding sources (double-checked) is fine, or even a sentence or two of summary, but dueling LLMs with pages of text is just pointless. No one is gonna read all that.
Chris  Whoever loves correction loves knowledge, but he who hates reproof is stupid.

To the maximum extent practicable, the Federal Government shall plan missions to accommodate the space transportation services capabilities of United States commercial providers. US law http://goo.gl/YZYNt0

Online Vultur

  • Senior Member
  • *****
  • Posts: 3237
  • Liked: 1435
  • Likes Given: 197
Can we please have stricter forum rules about not using LLMs as a source?

It is one thing to use them to find sources (so long as you confirm that the source you find actually exists - fairly often it does not) but I don't feel like we should have any AI generated text.
Maybe limit it? Like, the big problem is several pages of LLM slop. Using LLMs for translation or finding sources (double-checked) is fine, or even a sentence or two of summary, but dueling LLMs with pages of text is just pointless. No one is gonna read all that.

Translation, maybe.

I dislike even using them to summarize things; my experience is that non-obvious technical details and numbers are very often wrong.

Online TheRadicalModerate

  • Senior Member
  • *****
  • Posts: 6358
  • Tampa, FL
  • Liked: 4461
  • Likes Given: 776
Space nerdery in general and NSF in particular are a hobby for me.  I like working through complicated problems with other people and reasoning out the answers.  It's fun, and I learn a lot.

It stops being fun if I have to interact with a human who's interacting--often ignorantly or badly--with an AI, then forcing the content on me with reasoning that isn't recognizably human.

I don't want to argue with machines.  I like arguing with people.  If people want to use machines to research, shape, and compute their arguments, I'd like to say that's OK, but it frankly feels soulless and empty to me.  The fun of NSF comes from learning the stuff you're posting, in sufficient depth that you can defend your arguments.  You can't do that if an AI is doing all the work.

Meanwhile, my default personal policy is going to be to ignore stuff that's easily recognized as AI.  I hope that NSF doesn't morph into something where that policy precludes me from interacting.

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
I use AI to format stuff into NSF format, with LaTeX images for equations (example below).  AI is good at that, but:

1.  I verify nothing was altered from the original text.  AI sometimes ignores the original text and hallucinates.
2.  I remove all the "what a great response" fluff, which is a tell-tale dopamine-hit feel-good thing that ChatGPT and others are notorious for.
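That verification step can be automated. As a rough sketch (the function name and the whitespace-normalization choice are my own, not anything NSF-specific), a diff that ignores pure re-wrapping will flag any wording an AI silently changed while reformatting:

```python
import difflib

def verify_unaltered(original: str, reformatted: str) -> list[str]:
    """Return substantive differences between the original prose and an
    AI-reformatted copy, ignoring blank lines and whitespace reflow."""
    def norm(text: str) -> list[str]:
        # Collapse runs of whitespace so pure re-wrapping doesn't count.
        return [" ".join(line.split()) for line in text.splitlines() if line.strip()]

    diff = difflib.unified_diff(norm(original), norm(reformatted), lineterm="")
    # Keep only added/removed content lines, not the +++/--- file headers.
    return [d for d in diff
            if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))]
```

An empty result means the tool only re-flowed the text; anything returned is wording it changed and worth a manual look.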


I do hate the AI wall of text and the results of bad prompting.

All in all, if your stuff is noticeably AI-generated, and it's long, it's gonna get a sarcastic response of "TL;DR, Mr. AI".


I say use the AI all you want to generate your thesis.  But take the time to make it look like you, the author, thought about it, and that you've distilled it down to the essence and gotten rid of the AI fluff.


(For example, the LaTeX version is more readable.)

Solve the quadratic:

We start with the equation:
[LaTeX image: x^2 - 5x + 6 = 0]

solve for the discriminant:
[LaTeX image: x = (5 ± √(25 - 24))/2 = (5 ± 1)/2]

which results in:
[LaTeX image: x = 2 and x = 3],

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3




« Last Edit: 11/05/2025 09:05 pm by InterestedEngineer »

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3945
  • Australia
  • Liked: 2792
  • Likes Given: 2418
We start with the equation:
[LaTeX image: x^2 - 5x + 6 = 0]

solve for the discriminant:
[LaTeX image: x = (5 ± √(25 - 24))/2 = (5 ± 1)/2]

which results in:
[LaTeX image: x = 2 and x = 3],

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3


Except those LaTeX equations show as PNG images, and can't be selected as text / copied / edited / etc. The text equations obviously can.

"Looks pretty, useless in practice" seems like a good metaphor for AI.

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
We start with the equation:
[LaTeX image: x^2 - 5x + 6 = 0]

solve for the discriminant:
[LaTeX image: x = (5 ± √(25 - 24))/2 = (5 ± 1)/2]

which results in:
[LaTeX image: x = 2 and x = 3],

which is far easier to read than:

x^2 - 5x + 6 = 0
x = (5 +/- sqrt(25 - 24))/2 = (5 +/- 1)/2
So the solutions are x = 2 and x = 3


Except those LaTeX equations show as PNG images, and can't be selected as text / copied / edited / etc. The text equations obviously can.

"Looks pretty, useless in practice" seems like a good metaphor for AI.

It's perfectly simple to extend the prompt to do both

Solve the quadratic:

LaTeX w/ images:

We start with the equation:
[LaTeX image: x^2 - 5x + 6 = 0]

Compute the solution:
[LaTeX image: x = (5 ± √(25 - 24))/2 = (5 ± 1)/2]

So the solutions are:
[LaTeX image: x = 2] and [LaTeX image: x = 3]

Plaintext (copying):

We start with the equation:
x^2 - 5x + 6 = 0

x = (5 +/- sqrt(25 - 24)) / 2 = (5 +/- 1) / 2

So the solutions are x = 2 and x = 3



but this seems overkill.   How often do you need to copy the equation?

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3945
  • Australia
  • Liked: 2792
  • Likes Given: 2418
How often do you need to copy the equation?

Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.

But mostly, this is my annoyance at the increasing trend of un-highlightable/copyable text in many common websites (Reddit/Youtube). It breaks my muscle-memory for looking up something that I'm not familiar with (highlight, right-click, search. Especially on Vivaldi, which allows multiple search-sites within the context menu; and before that an add-on in Firefox which broke when they changed add-on system, which is why I switched to Vivaldi. That's how baked into my work-flow it is.) None of that is about AI, just the more general enshitification problem.

Offline InterestedEngineer

  • Senior Member
  • *****
  • Posts: 3558
  • Seattle
  • Liked: 2603
  • Likes Given: 4354
How often do you need to copy the equation?

Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.

But mostly, this is my annoyance at the increasing trend of un-highlightable/copyable text in many common websites (Reddit/Youtube). It breaks my muscle-memory for looking up something that I'm not familiar with (highlight, right-click, search. Especially on Vivaldi, which allows multiple search-sites within the context menu; and before that an add-on in Firefox which broke when they changed add-on system, which is why I switched to Vivaldi. That's how baked into my work-flow it is.) None of that is about AI, just the more general enshitification problem.

If you are keeping equations around, you should keep them in LaTeX format.  If you are into math, it should be as natural to you as Markdown is to programmers.

In that case, right-click on the image and the LaTeX equation is in the URL.  Copy the URL and paste the LaTeX equation out of it.

for example:  https://latex.codecogs.com/png.latex?\small x=\frac{5\pm\sqrt{25-24}}{2}

for
[LaTeX image: x = (5 ± √(25 - 24))/2]

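Since codecogs embeds the raw LaTeX in the URL itself, recovering it doesn't even need the browser. A minimal sketch in Python (the function name is my own invention, and the list of size directives it strips is deliberately naive and non-exhaustive):

```python
from urllib.parse import unquote, urlparse

def latex_from_codecogs(url: str) -> str:
    """Pull the LaTeX source back out of a latex.codecogs.com image URL.

    The renderer carries the raw equation in the query string, so
    URL-decoding it recovers the original markup.
    """
    parsed = urlparse(url)
    # The equation normally sits in the query; fall back to the path tail.
    raw = parsed.query or parsed.path.rsplit("/", 1)[-1]
    text = unquote(raw).lstrip("?/ ")
    # Drop codecogs size directives (a fixed, non-exhaustive list).
    for directive in (r"\small", r"\large", r"\tiny"):
        text = text.replace(directive, "")
    return text.strip()
```

Running it on the example link above yields `x=\frac{5\pm\sqrt{25-24}}{2}`, ready to paste into a formula file.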
As far as uncopyable text goes, yes, a lot of that is lame attempts at copyright enforcement.  But it's also a lack of proper formatting capabilities.  At minimum, all websites should support Markdown and LaTeX equations. But alas, most don't, so we are stuck with what we are stuck with.

Heck, what annoys me most about AI is pictures with text on my phone. I try to press the picture so I can copy it, and I end up selecting the text in the picture (because AI can now "read" the text in a picture).  Who thought *that* was a good idea?  Who needs to select the text in a picture as an everyday use case?

Everyone is always breaking copy & paste. I hate it as much as you do.  But sometimes, for readability, a formatted picture or other entity is necessary.  If there's a workaround, you can use it.  There's a workaround for LaTeX image generators.
« Last Edit: 11/07/2025 09:14 pm by InterestedEngineer »

Offline Paul451

  • Senior Member
  • *****
  • Posts: 3945
  • Australia
  • Liked: 2792
  • Likes Given: 2418
How often do you need to copy the equation?
Whenever I see an interesting physics formula that I want to keep in my text-file of interesting physics formula that I want to keep.
If you are keeping equations around you should keep them in Latex format.

Nope. Double nope. Triple nope. My formula & constants text files have followed me from machine to machine since the early '80s. ASCII, 7-bit, no extended characters, never change the order they were added, and absolutely no formatting.

But it's also a lack of proper formatting capabilities.  At minimum, all websites should support Markdown and LaTeX equations. But alas, most don't,

Should. Don't. And I've had to learn this lesson many times. Across many operating systems, networks, software, browsers and websites. Should. Don't. Every time.
« Last Edit: 11/08/2025 07:55 pm by Paul451 »
