"AI" systems can produce incorrect assertions, long low-content rambles, and copies of other people's text. If people then share that output, it can mislead other people, waste their time, or steal credit from the actual authors. I suggest that all of these should be code-of-conduct violations, and I don't see existing elements of the CoC that apply directly.
-
For hallucinations, "Deliberate misinformation" is close, but LLMs aren't conscious and so can't deliberately do anything, and the human generally also wasn't trying to misinform. Perhaps "negligent" misinformation should also be an unacceptable behavior?
-
When LLM-generated text is longer than necessary, it unfairly shifts work from the author onto readers. The ban on Gish galloping is similar, but the problem here isn't always "many weak arguments", and the wasted time isn't an intentional "attempt to cause others to waste time"; it's just a side effect.
-
I think we're entirely missing a ban on plagiarism, and that's probably straightforward to add. However, plagiarism implies some intent to "pass[]-off of another's work as one's own", and that intent is generally absent when a person pastes machine-generated text into a post. There are also gradations in responsibility here: LLMs asked to generate text are known to reproduce portions of their training data, so a person who generates text ought to take explicit steps to keep such copied material out of what they post. Attributing the text to "ChatGPT" is probably not sufficient either. On the other hand, machine translation systems, even ones that incorporate LLMs, aren't expected to inject quotes from other people; if one does that instead of producing an accurate translation of the source, I wouldn't want to blame the human.