
To be more specific, I flagged a question recently as it was of the type "ChatGPT generated this but I need some more help fixing this". But I was told by a mod that asking a question about code is fine, which led to my confusion here.

Here is the post I am referring to. And here is the response -

asking a question about code produced by CGPT is fine. Not sure why you’d want to, but it is not banned. ChatGPT is not a programmer or a mathematician, it doesn’t use logic to construct code but is based on statistics. That rarely produces code that is actually fit for purpose.

I was under the impression that my flag was correct, based on reading what-should-i-do-if-i-suspect-that-a-question-or-answer-is-written-by-chatgpt and are-questions-about-chatgpt-code-okay-to-ask.


Don't the same reasons for which answers are banned apply to questions as well?

  • It can be used to generate a ton of 'fake' questions on the platform
  • It could attract upvotes, due to the well-formatted and "confident" looking content
  • Bad questions created this way will have to be handled the same way as any other question, but high volumes of such questions will waste precious moderator and community time.

Would love to hear from mods on the policy around this.

  • @ZoestandswithUkraine "..while we curb the tidal wave.." Any estimate of how high the tidal wave is? What order of magnitude is the number of posts allegedly by ChatGPT every day? Commented Jan 4, 2023 at 8:18
  • @Trilarion last I heard, between 500 and 1500 Commented Jan 4, 2023 at 10:54
  • I see that these questions often fall in this category. Commented Apr 16, 2023 at 18:43
  • Unfortunately it would seem that the critical question in the headline on this page is still not answered as a policy matter. Martijn's answer below addresses the obscure issue of "questions that are literally about ChatGPT." That obscure issue unfortunately has nothing to do with the question stated in the headline on this page. Commented May 24 at 12:57

4 Answers


With respect to our policy banning the posting of AI-generated content, the point here is that the specific question was not written by ChatGPT.

There is a difference between having ChatGPT write the question for you, and asking a question (in your own words) about something that ChatGPT generated. The former is prohibited by our policy; the latter is allowed under the policy, provided you attribute the source of the code.

Asking a question about ChatGPT output is no different from asking questions about other code you found somewhere, be that documentation, a tutorial, or another Stack Overflow answer.

Note that there may be grounds to close such a question as off-topic - not because ChatGPT was involved, but simply because of the way the question is worded. It could be too broad (asking us to explain the whole thing), caused by a typo (ChatGPT didn't put in the required punctuation), etc. That's not a reason to flag for moderator attention; that's a reason to flag or vote to close.

In other words, please don't flag questions about GPT-generated content as violating our AI-generated content policy, because they probably don't. However, that doesn't mean they shouldn't be moderated in other ways—handle them just as you normally would questions that may be low-quality or off-topic.

  • Thanks for the answer @martijnpieters - could you also elaborate on how the latter (asking a question, in your own words, about something that ChatGPT generated) fits in with the blanket ban as well? Commented Jan 1, 2023 at 14:23
  • @AkshaySehgal: the ban is clear: don't generate questions or answers or comments with AI. This isn't such a case. Commented Jan 1, 2023 at 14:26
  • So hypothetically (mentioned this as a comment before), if someone generated 20 pieces of code, ran them, copied the error messages and wrapped the code + trace in a problem description and posted 20 questions, that is acceptable and not banned? Just to clarify. Commented Jan 1, 2023 at 14:27
  • @AkshaySehgal: If the question is about why the code produces error messages, yes, that's fine. I think that trusting ChatGPT to the point that you are surprised to see errors is unwise, but I'm not here to judge why others seem to expect ChatGPT to know about logic, maths and other programming skills. Commented Jan 1, 2023 at 14:30
  • Stack Overflow licensing policy is that user contributions [are] licensed under CC BY-SA, but if someone posts ChatGPT-generated code, can they actually agree to such a license? Commented Jan 1, 2023 at 17:01
  • @dbc that's an interesting issue, to which I have no answer. If there is a license violation then presumably there'll be an injured party that can file a DMCA takedown. Commented Jan 1, 2023 at 23:23
  • I appreciate the logic here, but I think there are more subtleties in allowing questions about ChatGPT-generated code that are worth considering. I fear we will become the clean-up service for reams of ChatGPT-generated code that is close but still contains at least some complete nonsense. I remember one piece of Python code that looked 100% correct except for a single import statement at the beginning that was utter garbage but still looked convincing. Do we want to answer those questions? We don't have reams of these yet, so perhaps my concerns are premature. Commented Jan 2, 2023 at 4:45
  • It is not OK to spam 20 questions at once (and probably runs into a ban anyway). It is not OK to copy code from anywhere as a cheap substitute for one's own attempt. It is not OK to post a chunk of code without having a clue and ask us to debug/fix it. All of these are reasons to downvote for lack of effort, and vote to close. But it doesn't matter whether the asker uses ChatGPT for this - it also happens without, and stating the source of the copied code is still better than not stating it at all. For now, current moderation tools suffice to cope with this; we don't need a complete ban. Commented Jan 2, 2023 at 6:04
  • @PresidentJamesK.Polk there haven't been many questions, and even fewer that were on topic. The post that sparked this debate was closed as a duplicate, and the error made was not unlike something many newbies might have made. Had the term "ChatGPT" not been included in the post, no-one would have noticed. Commented Jan 2, 2023 at 9:29
  • @PresidentJamesK.Polk Plus, the post quality system will land you in post ban hell a lot quicker for bad questions than it does for bad answers. Commented Jan 2, 2023 at 9:31
  • The ban states, in large text, "All use of generative AI (e.g., ChatGPT and other LLMs) is banned when posting content on Stack Overflow", with the word "all" emphasised. I can't read that in a way which supports this answer. It says "all" and doesn't say "except ...", nor is there any other wording which would narrow the scope of the word "all". Commented May 24 at 12:30
  • I completely agree with @PresidentJamesK.Polk's comment as I understand it. The problem with this answer is that it relates to the (obscure) case of questions that are literally about ChatGPT. Setting that obscure and irrelevant issue aside, I just can't see any reason garbage output from ChatGPT should be allowed in questions. Commented May 24 at 12:54
  • "Asking a question about ChatGPT output is no different from asking questions about other code you found somewhere" <- Well, it's different in the sense that you can easily engineer the situation, repeatedly, a lot; i.e., there's a nigh-infinite amount of such code you can "find" and then possibly ask about. Commented Jun 3 at 21:12
  • If you want this answer to be correct, then rephrase the policy. (See in particular what @kaya3 mentioned above.) Commented Sep 20 at 11:04

It seems highly unlikely to me that ChatGPT code could support a worthwhile question. A question using such code would presumably be either:

  • "Does this ChatGPT generated code do what it's intended to do?" We aren't a debugging service, and we certainly aren't a testing service.

  • "What does this ChatGPT generated code do?" - almost certainly needs more focus; and if properly focused, there would be no reason to leave behind enough code for it to still have any signature of AI generation.

  • "I tried using this ChatGPT generated code in my project, and it doesn't do what I want it to do; how do I fix it?" - almost certainly needs debugging details, even if the user has included example input, a stack trace, a description of expected output etc. The problem is that the code will not be a minimal reproducible example. Since we aren't a debugging service, the question needs to be about the specific part of the code that causes the problem; this entails that OP is responsible for determining which part that is.

  • "I have a problem with my code; to comply with an NDA, I used ChatGPT to generate an MRE...." - really? And you verified by hand that the generated MRE is minimal, reproducible, and exemplifies the actual problem? Was that easier than just writing the code by hand, or copying and pasting the relevant lines and changing some variable names? Do you also use ChatGPT to create unit tests, and trust the result of those tests without human intervention?


That said, we care about the questions, not the code. The reason we care about banning ChatGPT content is that the ease of generating it means that relatively few users could easily overwhelm the capacity of moderators and curators (who are already overwhelmed by thousands of almost-all-worthless new questions per day). Having to write everything by hand except the actual code certainly mitigates the problem.

  • The answer to "why doesn't this ChatGPT code do what I want?" is always the same: ChatGPT is a language model. It has no understanding of programming, computer science, or your goals. The code doesn't do what you want because ChatGPT is not an appropriate tool for generating code you couldn't have written yourself. Commented Jan 2, 2023 at 15:27
  • @JamesWestman it has as much understanding of programming as it does English, which is a surprisingly good ability to generate a plausible stream of convincing BS - and the most convincing BS is BS that happens to be correct - but not always. Commented Jan 2, 2023 at 20:22
  • @user253751 Right. If you ask a magic 8-ball a question and then come to Stack Overflow asking if that answer is correct, well, you should just ask us your original question. If it's on topic. Commented Jan 2, 2023 at 22:45
  • As true as all of this may be, it fundamentally does not answer the question that was asked here, which is whether questions that contain code generated by ChatGPT should be flagged as using code generated by ChatGPT, under our current, blanket ban of all ChatGPT-generated content. Commented Jan 3, 2023 at 8:28
  • I'm not convinced by the third and fourth points. The M in MRE very often does not get taken verbatim; it's very common to have bloated examples. Using ChatGPT to generate debugging questions would very well be doable within the current curation standards. It's convincing that moderators don't see this happening, so it doesn't need special handling; but it's not convincing to me that regular curation could handle such questions if people were to press our on-topic debugging capabilities. Commented Jan 17, 2023 at 9:19
  • My point is that "current curation standards" are a) abysmal and b) due to their history of being applied the same way, the reason why I constantly am unable to find canonical duplicates that should be obvious. Commented Jan 17, 2023 at 9:20
  • "it's not convincing to me that regular curation could handle such questions if people were to press our on-topic debugging capabilities" - "our on-topic debugging capabilities" are approximately zero, and we know this; which is exactly why we should demand that questions not actually test those capabilities. As I've explained before: putting the appropriate work into a debugging question makes it effectively not a debugging question any more. Commented Jan 17, 2023 at 9:22
  • Why would I ask a question using ChatGPT if it can answer it for me? That's... dumb. Commented May 26 at 22:17
  • @JamesWestman There is an expectation (or even policy) of doing some research before asking, and mentioning what you tried. So, after asking your favorite AI "how do I do X?" and getting an incorrect answer, the question to ask here would be: "How do I do X? I tried Y, as suggested by my favorite AI chatbot, but that fails with error Z. So, what is the correct way to do X?" Commented Sep 13 at 19:21

A further problem arising after Martijn's clear answer is this:

There is a difference between having ChatGPT write the question for you, and asking a question (in your own words) about something that ChatGPT generated. The former is prohibited by our policy; the latter is allowed under the policy, provided you attribute the source of the code.

That makes perfect sense and covers two categories:

One: the question was written by ChatGPT - obviously banned.

Two: the question is literally about ChatGPT or some output of ChatGPT - obviously OK. (This is rather like the "swear word" issue on an English site: obviously you can't use swear words in your writing on the site, but if the discussion is literally about a swear word, the swear word will appear in the text.)

However, I've noticed a third case:

Three: the question is written by a human, but it features an example slab of code... generated by ChatGPT.

Are QUESTIONS that use ChatGPT - that is to say, include ChatGPT "code" - banned? I'd say they should be.

Now of course a catch-all solution here is "well, situation 'three' is kind of subtly different, but any such question where the OP himself can barely write code to illustrate the question at hand is very likely 'low quality', and that makes the issue go away."

Personally, I think "category three" should be banned. I feel questioners shouldn't be able to "bulk up" a (likely already poor) question with a breezy "oh, here's how ChatGPT tried to do it and [duh] that doesn't work either".

So, it's just something to consider for SO.

  • Scenario 3 seems to be one that is described in Martijn's answer; the post is human-written, but the code (that isn't working) is written by an LLM, and the asker is asking about that code. That's scenario 2, and that's allowed. Commented May 24 at 13:19
  • Hi @ThomA. Martijn said "and asking a question (in your own words) about something that ChatGPT generated". That's not what the OP is doing. The OP is asking a (confused) question, has correctly included his own (bad :O ) code, and has also thrown in some output from ChatGPT ("Used ChatGPT for Help"). Anyway - seems to be a debated issue. Commented May 24 at 15:46
  • So are you saying that the rest of the content is written by an LLM too? That's scenario 1, and that isn't allowed. Commented May 24 at 15:52
  • So, if it's okay to ask a question about ChatGPT-generated code (which Martijn clearly stated), then why should the question be closed if it also contains human-written code? That makes no sense, Fattie. Commented May 24 at 15:52
  • @Cerbrus - it's not a big deal, but my explanation seems extremely clear. Surely you can understand the difference between a question >about< some ChatGPT code, and asking a question about badminton game physics and throwing in some ChatGPT code. I fully understand if you don't agree ("My, Cerbrus, interpretation is that the critical issue is whether or not it includes OP's human code"), that's fine, but it's remarkable if you don't understand the difference I'm discussing. (You might note my "using the c--- word on English sites" analogy.) Commented May 24 at 16:23
  • Again, not that it matters, but I'll try again. I'm writing a question about how to use Unity3D to make a badminton game. As with any SO question I need to throw in some example code (to fulfill the SO requirements), so I say to ChatGPT: "Hey ChatGPT, give me some example code that I can use to fulfill the requirement for including some example code when I write a question on Stack Overflow about badminton physics in Unity3D". Commented May 24 at 16:26
  • Also, purely for the record: (1) I believe "my" point is correct; it's absurd to allow users asking questions about shuttlecock physics to randomly throw in random ChatGPT code so as to "demonstrate effort". However, just to be clear, (2) I'm afraid I disagree anyway with the notion from M.'s answer (presumably SO policy) that people can ask questions 'about' ChatGPT / ChatGPT code. Exactly as Polk says. Commented May 24 at 16:29
  • "I appreciate the logic here, but I think there are more subtleties in allowing questions about ChatGPT-generated code that are worth considering. I fear we will become the clean-up service for reams of ChatGPT-generated code that is close but still contains at least some complete nonsense." - Pres. Polk's comment above. Couldn't be clearer. Commented May 24 at 16:30
  • If a question includes "I asked ChatGPT and it created this..." and it provides nothing but some "proof" of research effort, it should be edited out. Such content need not be outright banned if the rest of the post has merit. Commented May 24 at 16:59
  • Fattie, you're assuming OP asked ChatGPT to generate that code just for the question... That assumption is baseless and, frankly, unconstructive. We don't act on assumptions here. We act on facts. The fact is that OP only wrote: "Method 2 Used ChatGPT for Help:" Commented May 24 at 17:32
  • @Fattie: Aside from ChatGPT cases, Stack Overflow already has a quality filter. If a question doesn't pass that filter, then it is downvoted and closed. But no sanctions are applied to the OP: everyone can make a mistake. "Ban" means applying real sanctions to the OP. It is for cases where the regular filter works badly, e.g. only a small group of experts is able to check the correctness of ChatGPT-generated answers (compared to how easy such answers are to write). But your case (3) doesn't require experts: it is easy to see that an OP has failed to focus on a problem. Just downvote/close-vote and move on. Commented May 24 at 20:19
  • @Tsyvarev I guess. A simpler solution is "don't allow ChatGPT crap in questions or answers". I genuinely wonder why everyone is making this so complicated. Commented May 24 at 22:11
  • @Cerbrus it's all just so incredibly obtuse and legalistic. Simple fact - any question with "ChatGPT" in it... is garbage. I'm not a Prosus shareholder, so I don't really care that much. The OP in question would have been helped by the concept "yeah, you can't put ChatGPT nonsense in questions on SO". The idea that ChatGPT is banned from answers, but not questions, is just astonishing. Cheers. Commented May 24 at 22:16
  • There are many kinds of crap which are not suitable for Stack Overflow. But were we to ban, e.g., non-researched questions on meta, we would lose many good questions. That is why a ban is only used as a last resort, for situations which cannot be handled otherwise. The situation where a question includes a lot of code which the OP doesn't understand is easily handled, whether that code is generated by ChatGPT or taken from some site. Commented May 24 at 22:22
  • (Disclaimer: I've not read any of the earlier comments here, only the answer.) For questions in category 3, I would suggest assessing the question with all of the ChatGPT-generated code deleted. If the question is fine with the worthless AI-generated slop deleted, then the question is probably fine. An overly legalistic/pedantic argument about the asker violating the AI ban is probably not productive in that case. Just edit the question, remove the noise, and answer it as you normally would. If you remove the AI-generated slop and the question doesn't meet our guidelines, then close it. Commented May 25 at 10:07

TL;DR: I would advocate allowing questions that follow the pattern "How do I do X? AI suggested I do Y, but that fails for reason Z." - assuming the question itself is yours (you typed it into that chatbot prompt, after all) and you are just mentioning the AI-generated answer as the approach you already tried.


I have asked such questions in the past. There is a social aspect to this:

Newbie questions are somewhat frowned upon on SO. These are the kind of questions which are already answered in some manual – if you manage to find the correct place in the correct manual, possibly even piecing together information from multiple places in multiple manuals. If you still dare ask that kind of question on SO, you’re going to attract a lot of downvotes.

If the question is less complex but still somewhat standard (“does language X have a way to do Y”), instead of getting the desired answer, you may find yourself getting a bunch of not-so-helpful comments telling you that your whole approach is wrong. On Home Improvement SE, the equivalent would be asking “how to find studs behind drywall” and getting comments telling you that studs and drywall are junk and you should move from the US to Europe because houses are solid brickwork there, allowing you to place your wall anchor almost anywhere, and while we’re at it, Europe is so much better in other ways, too. And along with such comments come downvotes for your question. None of that helps you to securely attach your kitchen cabinets to the wall (and you might have your reasons for living in a timber-framed house, which are likely beyond the scope of the question).

So, in order to avoid all this fruitless discussion, you give the AI chatbot of your choice a shot. It comes up with a plausible-looking answer – and AI is actually quite accurate at answering textbook questions. (Said textbooks were probably part of the training data.) One shouldn’t blindly trust AI – but with that kind of question, as soon as you try to run the code suggested by the chatbot, you have your verification.

Accuracy decreases drastically with increasing complexity. If you find yourself just in the gray zone, not being sure if your question is still a textbook question, you give AI a shot, try the suggested answer, find it doesn’t work, and then turn to SO.

SO expects you to show what you’ve tried so far, and that would include the chatbot answer (“does language X have a way to do Y? AI suggested I do Z, but that fails with an error.”)

  • I don't see how the main body of this answer is related to the TL;DR. Commented Sep 13 at 21:32
  • @Anerdw well, it explains why I would be in favor of allowing certain types of questions. Commented Sep 14 at 14:59
  • I disagree… it seems like a complaint about SO users' conduct, not a reason to let people post AI-generated code in their questions. Commented Sep 14 at 16:08
