Question score: 5356

Moderator Note: This post has been locked to prevent comments because people have been using them for protracted debate and discussion (we've deleted over 300 comments on this post alone, not even including its answers).

The comment lock is not meant to suppress discussion or prevent users from expressing their opinions. You are (as always) encouraged to vote on this post to express your agreement/disagreement. If you want to discuss this policy further, or suggest other related changes, please Ask a New Question and use the tag.

This question remains because that is still the best, most prominent, and only permanent way that we have to announce this policy site-wide.

All use of generative AI (e.g., ChatGPT[1] and other LLMs) is banned when posting content on Stack Overflow.

This includes "asking" the question to an AI generator then copy-pasting its output as well as using an AI generator to "reword" your answers.

Please see the Help Center article: What is this site’s policy on content generated by generative artificial intelligence tools?

Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of content created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.

The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce. There are also many people trying out ChatGPT and other generative AI technologies to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with significant subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.

So, the use of ChatGPT or other generative AI technologies to create posts or other content here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT or other generative AI technologies after the posting of this policy, sanctions will be imposed to prevent them from continuing to post such content, even if the posts would otherwise be acceptable.

NOTE: While the above text focuses on answers, because that's where we're experiencing the largest volume of such content, the ban applies to all content on Stack Overflow, except each user's profile content (e.g., your "About me" text).


Historical context of this ban originally being "temporary"

When this ban was originally posted on 2022-12-05, it was explicitly stated as a "Temporary policy". It was specifically "temporary" because it was, at that time, a policy being imposed by the subset of moderators who were present on the site over the weekend after the announcement of ChatGPT's public release, 2022-11-30, through the Monday, 2022-12-05, when this question was posted. The moderators involved strongly felt that we didn't have the right to impose a permanent policy on the site in this manner, but did have a responsibility to impose a temporary policy that was necessary for the site to remain functioning while discussion was had and consensus reached, and which also allowed Stack Overflow, the company, time to have internal discussions about what policies it would adopt network-wide. So, after consultation with the company, the moderators present at that time chose to implement this as a "temporary" policy.

Since then, quite a lot has happened. Based on the voting for this question, it's clear that there's an overwhelming consensus for this policy. The company has decided that the specific policy on AI-generated content will be up to individual sites (list of per-site policies), but that even on sites which permit AI-generated content, such content is considered "not your own work" and must follow the referencing requirements. That referencing requirement was later incorporated into the Code of Conduct: Inauthentic usage policy. There's a lot more that's gone on with respect to AI-generated content; so much has happened that it's not reasonable to try to summarize all of it here.


[1] ChatGPT is an artificial-intelligence-based chatbot by OpenAI, announced on 2022-11-30. Use of ChatGPT is currently available to the public free of charge.

1 comment:
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, or in Stack Overflow Chat. Comments continuing discussion may be removed. Commented Feb 26, 2023 at 7:28

67 Answers

Answer (score: -38)

TL;DR: assimilate, don't exterminate!

I would like to see a separate section for AI-generated answers. In other words, why not just embrace it by retaining AI-generated answers but keeping them separate from human answers?

That serves two purposes:

  1. An AI can distinguish AI-generated answers, so it doesn't feed them back into itself when it (no doubt) uses ranked site content like this to generate answers.
  2. AI answers can still be viewed and voted on, and maybe some will even become the accepted answer.
7 comments
  • 14
    Why create more work, which will in turn create more work? A feature like what you suggest requires time to be put into implementing it, and then once the feature is made, it will require even more time by curators to confirm the content is not invalid. What is the benefit? Commented Feb 17, 2023 at 3:11
  • 25
    Specifically, following up on @Daedalus's comment, what is the benefit of integrating an AI service like ChatGPT directly into SO? People who want AI-generated responses can just ask the AI. That service already exists. People who want answers written by human experts can come to SO. We already provide that service. Why mix them? Beyond that, the reason we don't have a section for AI generated answers is the same reason we don't have a section for answers written by monkeys with typewriters: those answers are terrible. They don't meet our minimum quality standards. Commented Feb 17, 2023 at 6:00
  • 3
    Who's gonna pay for that? At the rate SO is getting questions, this would get very expensive very quickly. Commented Feb 17, 2023 at 13:00
  • Who said the answers needed to be curated? And who said ChatGPT is the only AI? Come on people, this technology is only going to get better. Where's your imagination? Commented Feb 18, 2023 at 4:36
  • 22
    We don't create policies based on our imagination. We create them based on the reality that is in front of us, that we're dealing with right now. (As for who said the answers needed to be curated: that's the whole design principle/goal of this site.) Commented Feb 18, 2023 at 5:45
  • Okay, give it a year or two and let's see how SO is doing with your policy... Commented Mar 6, 2023 at 18:56
  • 10
    And why do you care about how SO will do in a year or two, with this or any other policy? Just use whatever tool works for you. That was true in the past, it's true now, and will be true in The Future (tm). If SO dies because something else replaces it... so be it. I don't get all these answers worried about "SO should adapt, or it will be replaced by something else!!!". Commented Mar 21, 2023 at 15:30
Answer (score: -39)

A Solution?

I agree with most other answers, except the "but there is no solution" part. Also, I believe not all posters here understand that we're just at the beginning.

Hence, my proposal would be to attack, instead of defend.

Why not enable a feature that sends every question to ChatGPT right after posting and displays the result alongside the other answers? It would be marked as the ChatGPT answer, and users could opt not to display it.

  • This would immediately stop people from abusing ChatGPT to farm reputation. The similarity would be too obvious, at least in the case where the question is just copy-pasted. If ChatGPT users enhance the question to improve the response, they have already added some value and would no longer be in rapid-fire mode.

  • It would give the benefit of the doubt that an AI answer might actually be valuable. By rating those answers the same way we rate human answers, we can see how they rank against the others.

  • Humans who write answers can refer to it and agree or disagree, if that makes any sense. They can point out whether there is only a minor mistake in the AI answer or whether the answer is based on a misunderstanding or predominant misconception on the internet (as the source of information).

I think this solution would scale for some time to come, but I am not sure how feasible it is. Will Stack Overflow be charged, or can Stack Overflow sell this to OpenAI as a marketing hack? I don't know.

9 comments
  • 23
    No. This has been suggested plenty of times already. Look at the other answers here as to why this can't work. Commented Mar 3, 2023 at 17:07
  • Would you care to point me at it? Because I didn't see it. Commented Mar 3, 2023 at 17:10
  • @Cerbrus, just found it on the next page, sorry for not being thorough in the first place. It does indeed seem that the solution could be difficult, but I am not entirely on your side. In the long run, ChatGPT will become less expensive, and in the short term, OpenAI may wish to run this as an advertisement, the same way I can go there and ask it questions. Commented Mar 3, 2023 at 17:20
  • Regarding the point that repeated questions yield different answers: I would assume that the different answers will suffer somewhat from the same problem. It might be difficult to compare word by word, but it might be good enough to discourage abuse. Commented Mar 3, 2023 at 17:22
  • 23
    ChatGPT can offer completely different answers to the same question, including absolute contradictions of what it said mere seconds ago. The similarities between pairs of ChatGPT answers are structural in nature, not content based; there is no use in having a "reference answer" to spot other generated answers for the same question. Commented Mar 3, 2023 at 17:39
  • 8
    "… in the short term, it may wish to run this as advertisment the same way I can go there and ask questions to it." That would be a rather poor advertisement. ChatGPT isn’t made nor meant for the kind of questions SO is made for. Expecting experts to waste their time trying to curate a stream of technical nonsense isn’t a winning story… (This is in essence something this very meta-question already said - there is just no capacity to manually vet all the content that ChatGPT has generated, let alone could generate, for SO.) Commented Mar 3, 2023 at 17:46
  • 2
    Gotcha, and to be honest, I didn't expect the answers would be 'contradicting', and that sounds like a general flaw to me. It would at least make sense if ChatGPT would enhance itself based on the content it receives, but I was not able to observe any valuable learning based on my feedback. Even in the most stupid way. Commented Mar 6, 2023 at 16:50
  • 12
    @Ingo ChatGPT doesn't "remember" the conversations it has. As I said here "ChatGPT generates plausible text, consistent with its training data and the prompt, but it doesn't know what it's talking about, and it has no way of representing or evaluating the truth of its utterances. Yes, it can say true things, but it can also say complete nonsense, and it can't tell the difference". It's designed to manipulate syntax, not semantics. Stephen Wolfram gives a good outline of how it works in the first of his articles linked in my answer. Commented Mar 8, 2023 at 14:20
  • 1
    See also now the train wreck at meta.stackoverflow.com/questions/425162/… Commented Jun 19, 2023 at 3:43
Answer (score: -40)

Reading through the answers and comments, I can't help but detect a lot of bias, seemingly out of fear of the unknown or of a potential competitor.

This line in the OP is telling:

in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

Why set out to determine that it 'is actually bad' instead of good? In my experience, it's usually correct (because I ask the right questions). In the cases where it's not, it's useful to discuss with ChatGPT where the mistake lies. With some frequency I ask it to reread its reply and say whether it is sure that it's correct.

The same goes for many comments, where people clearly show bias without supporting or even convincing arguments. Comments like "it is stupid" and "it's a good joke generator". The main argument seems to be "it's not always correct". Yes, neither are all (or even most?) human answers, but that aside, if that's your main argument, what will you do in 6 months or 2 years?

Personally, I think ChatGPT is hands down the most productive assistant / near-coworker I have ever had (in 30+ years of IT and coding), and anyone not adopting it ASAP to gain at least some experience with it is contributing to their own demise.

It's important to understand that it's an assistant, a tool, not a substitute. AI won't replace developers; developers who use AI will replace developers. Pick a side that suits you and your family. Sticking your head in the sand isn't a fruitful approach to AI; embrace it, control it, use it to increase productivity.

Posting answers or questions written by ChatGPT straight to SO is like copying and pasting from another site, but banning questions and answers ChatGPT assisted in writing just seems wrong. It's almost like banning a spell/grammar checker.

I learned never to complain without offering alternatives. People posting answers should be held accountable for bad answers. That way they'll put in the extra effort to make sure the AI-assisted answer is useful to the person asking the question. Whether it's text, questions and answers, or code, everything an AI produces should be considered a draft. Perhaps a test section limited to certain topics, or show the ChatGPT-assisted answers (allow answerers to mark them as such) at the bottom of the answer list, collapsed and hidden until the reader opens them. Anything that doesn't involve throwing the baby out with the bathwater.

31 comments
  • 18
    "seemingly out of fear for the unknown or potential competitor." Wrong. This bias is based on a understanding of how LLMs work, and what their limitations are. Commented Sep 5, 2023 at 14:38
  • 13
    "Actually bad" vs "Actually good" doesn't matter. The same level of effort is required to validate it. Commented Sep 5, 2023 at 14:39
  • 7
    "what will you do in 6 months or 2 years." We'll cross that bridge when we get to it. Commented Sep 5, 2023 at 14:39
  • 28
    You're missing the point that GPT was causing a flood of low-effort generated copy-pasted content. There was no way to accurately moderate all of it. Your last paragraph assumes users are honest. They're not. They're just dumping AI-generated text on the site and seeing what sticks. Commented Sep 5, 2023 at 14:41
  • 15
    Seems like you're missing the point here. We're not banning ChatGPT due to fear of being replaced... banning it in that case would have no effect on the outcome anyway. Instead, it's banned for the reasons outlined in the question: the success rate is too low. Yes, by using it yourself you can poke and prod ChatGPT enough to end up at a valid answer; however, that doesn't work for generating long-term useful content, particularly when answerers use it as a fire-and-forget tool for farming reputation rather than for producing high-quality content. Commented Sep 5, 2023 at 14:44
  • 11
    "Important to understand is that it's an assistant, a tool, not a substitute." That's why it is banned as a substitute for manually writing answers, not as an assistant, a tool. This answers seems to be missing what the ban is about: People are still free to use ChatGPT themselves. As many (all?) of the positives mentioned here require interacting with ChatGPT, it is not suitable for a Q&A format where answers are fixed and discussion is intentionally kept to a minimum. Commented Sep 5, 2023 at 14:58
  • 3
    @Xartec Clearly the latter, however the former is a mixed bag. It's difficult to ban the one without also banning the other. Commented Sep 5, 2023 at 15:05
  • 9
    Let me put it this way: users who properly use ChatGPT to assist in creating their answers are creating answers that are indistinguishable from answers that aren't assisted by GPT at all. If they're indistinguishable, we clearly can't do anything about them. Commented Sep 5, 2023 at 15:08
  • 5
    If users need to be told how to use chatgpt to write their answers they're clearly using it for the wrong reasons. I don't think a guide would help, given the existing guidance we have for writing questions and answers is largely ignored anyway. Commented Sep 5, 2023 at 15:16
  • 5
    I mean, you're missing the point, as I expected. ;) The correct use of ChatGPT is as a research assistant or a last resort for getting ideas, not a code-writing service or debugging tool. It shouldn't be used to explain what code does or why it was written the way it is without heavy work from the user in improving the output, given that most of the time the output is full of useless or irrelevant information such as "how an if statement works". Commented Sep 5, 2023 at 15:23
  • 6
    @Xartec If someone treats ChatGPT as a draft and then improves on that draft manually their answer is not the output of ChatGPT rewording it and not covered by the ban. The point of treating any verbatim output of ChatGPT as banned is that one cannot efficiently tell the difference between "small rewrite" and "complete rewrite" (or supervised/unsupervised, or whatever you want to call responsible versus irresponsible use) since ChatGPT by its nature always rewrites. Commented Sep 5, 2023 at 15:26
  • 5
    To put a counterpoint into context: You don't become an expert at programming by using autocomplete or an IDE. Those who contextualize this can do OK with AI-derived tools since they know that it's not a panacea. The problem is that around the world, there are a lot of terrible engineers that treat AI as the solution. This is why it has to be banned; a lot of people who copy-paste from this site don't get that they still have to validate what it is they're doing. Commented Sep 5, 2023 at 15:56
  • 7
    @Xartec - ". In which case it's my opinion ..." - It's not our responsibility as a community to teach users how to effectively use ChatGPT as a tool. In fact, Stack Overflow is NOT a learning resource, or more specifically not a replacement for adequate learning from other resources on the user's part. For every "good" output you have been able to be generated with ChatGPT I can show you 30 outputs that appear right but were actually factually incorrect. They appeared to an individual with zero domain knowledge to be correct but in reality, were factually and technically incorrect. Commented Sep 5, 2023 at 17:38
  • 6
    @Xartec - Users already don't follow those guidelines. Given the amount of inaccuracy with regards to ChatGPT I don't believe it's worth the squeeze. Commented Sep 5, 2023 at 19:52
  • 3
    There's no value in allowing answerers to be proxies for users using chatgpt, regardless of the accuracy levels chatgpt may be able to reach now or in the future. If the user wants to provide enough context to get an answer from chatgpt, they can do so themselves through chatgpt without the answerer's help. Commented Sep 22, 2023 at 14:46
Answer (score: -41)

It seems like a slippery slope here. Am I banned from using ChatGPT for doing my own research? Certainly not. So if I gain education by way of ChatGPT, am I then banned from conveying that knowledge by way of answering a SO question? I wouldn't think so, as how I came to know something should be irrelevant.

So then, I suppose the question is "If I use ChatGPT to research a topic solely so that I can answer a question on SO, is that wrong?" I can't think why it would be, so long as I'm properly curating the answer from my own knowledge.

And if that's ok, then the question becomes "How much does my answer have to differ from the ChatGPT answer that I used to inform myself so that I could answer the SO question?"

10 comments
  • 17
    "…if I gain education by way of chatGPT…" Ah, I see this is a pure hypothetical, so we don't have to worry about the answer to it, because that's not going to happen. Commented Feb 15, 2023 at 6:51
  • 16
    "so long as I'm properly curating the answer from my own knowledge" The fact that most people weren't doing this is why we're in this mess in the first place. Commented Feb 15, 2023 at 9:28
  • 2
    How you learn isn't going to turn into you writing answers in a form that will, 99% of the time, be wrong but very well written. Commented Feb 15, 2023 at 15:28
  • 4
    Wow, 14 dislikes. Guess I hit a nerve. Funny how a dissenting opinion amidst a sea of minds that are already made up leads to downvotes with no real mention of why. I'd love to know where the flaw in my logic lies. I've already proven to myself that ChatGPT makes a great research assistant, so this response came from an informed position. I could prove that statement if held to the fire. But there's no interest in evolving opinions here, it seems. Commented Feb 15, 2023 at 16:32
  • 20
    Yea, I mean, funny how people express agreement on meta with votes, and how unpopular opinions meet a lot of disagreement. Almost as if the system is working exactly as designed. Funny! ¯\_(ツ)_/¯ Commented Feb 15, 2023 at 16:52
  • 18
    So where exactly is the slope and why is it slippery? The announcement makes it pretty clear where the line is drawn and why it is exactly where it is. If you manually write about your own, verified knowledge then no one cares where that comes from. Commented Feb 15, 2023 at 17:27
  • @MisterMiyagi - If the metric is "copying/pasting from the chatGPT site", then you're right, it's not a slippery slope. And by reading the notice of the ban, that's the way I read it. The slippery slope would be if one wanted to take it any further than that...to say that one can't "paraphrase chatGPT output". Commented Feb 15, 2023 at 23:34
  • 9
    The problem is with people blindly copy-pasting content in bulk, without validating the contents... If a user were to take the effort to paraphrase the content (manually, not with some kind of AI), I'd presume they'd at least check if it's correct. Commented Feb 17, 2023 at 13:02
  • @CodyGray hello from the future where gaining education from ChatGPT is indeed possible. Not that it matters, it was never about logic :-) Commented Mar 20 at 1:10
  • I suppose in the same way that you can gain an education from a Magic 8 Ball. I don't define that as an "education". Commented Apr 6 at 10:20
Answer (score: -43)

I have asked ChatGPT questions, and some of the answers were 100% accurate. Stack Overflow should now allow accurate and acceptable answers from AI. It can save a lot of time.

In my experience, working out the logic and queries (MySQL and MongoDB) can take up to 12 hours. ChatGPT has answered and created queries like that in just seconds. (I have ChatGPT Pro.) I had created an API with multiple if-else branches and multiple queries, more than 500 lines of code (2000 ms response time), but with the help of ChatGPT, I did that API in just 20 lines of code, with an average response time of 500 ms.

Now is the time to use ChatGPT and such platforms to speed up the development process. ChatGPT is really helpful to newcomers and for developing small-scale logic and functions.

16 comments
  • 12
    You should really use a spelling checker. "sorry for any gramitical miskates." does not excuse a lack of effort. Commented Jul 17, 2023 at 10:52
  • 19
    That aside, just because ChatGPT sometimes generates correct output, doesn't mean it's a valuable addition to SE. Users can get that from the AI itself, no need to host it here. Commented Jul 17, 2023 at 10:53
  • I think it generates correct answers about 80% of the time. @Cerbrus Commented Jul 17, 2023 at 10:58
  • @Cerbrus some developers whose native language is not English find it difficult to use AI tools for programming. They just search for the error on SO, not in ChatGPT, so ChatGPT can't answer them. That's why it will be helpful to copy-paste an answer and write a human explanation of it. Commented Jul 17, 2023 at 11:02
  • 31
    So you lack the experience to be productive in creating software, and you outsource that work to GPT. You're happy with what it produces all the time, 80% of the time. Yet you fail to see why others don't value your assessment as much. It's like you skip going to the doctor with your headache, because ChatGPT said it was probably nothing. Your claim is then that ChatGPT is way cheaper than a doctor but equally useful, and your proof is that you're still alive while not paying as much as for a visit to a professional. The world has become a worse place thanks to GPT. Commented Jul 17, 2023 at 11:17
  • 7
    this seems to merely repeat points already made in several prior answers here Commented Jul 17, 2023 at 11:20
  • @CodeCaster I think if the result is 100% accurate then we have to accept that. It's not just about the copy-paste: ask ChatGPT, get the answer, test it, and if it is correct then write your answer on SO to help others. Because sometimes ChatGPT answers correctly, while other times its answer may be wrong. Commented Jul 17, 2023 at 11:23
  • 12
    Related: meta.stackoverflow.com/questions/422392/… See this answer. That the LLM is fast and seemingly accurate enough for your use case is not a sufficient condition for letting people mass-dump AI generated answers. Commented Jul 17, 2023 at 11:26
  • 14
    If you are qualified to test ChatGPT, just write an answer from scratch. If you can't write an answer that can be edited to sufficient quality without ChatGPT, you probably don't know enough to tell whether a ChatGPT answer is correct or subtly but significantly wrong. Commented Jul 17, 2023 at 11:27
  • 18
    "i think if the result is 100% accurate" And that's the problem. It's not. Not even close. Commented Jul 17, 2023 at 11:29
  • Please read this ; meta.stackoverflow.com/a/423112/9570734 Commented Jul 17, 2023 at 11:35
  • 5
    After you read all the answers here explaining why we don't need it on SO. Commented Jul 17, 2023 at 11:42
  • 9
    To be honest, your specific case is the exact reason I rail very hard against AI in code. Someone who doesn't really understand what they're doing and can't really independently verify what the actual output of the LLM is would fare no better in a practical situation than someone who can copy and paste from Stack Overflow (ironically). Commented Jul 17, 2023 at 15:28
  • 12
    @Engr.AftabUfaq my point is that I don't trust you to validate an answer given by ChatGPT. What does "100% accurate" even mean? That it compiles/lints and runs without errors? What bar is that? Commented Jul 17, 2023 at 17:20
  • @CodeCaster Over-treatment is a very serious issue. Don’t go to the doctor because of simple headaches. If ChatGPT tells you that your headache is probably nothing, and it makes you cancel your plans to see the doctor, ChatGPT has made the world a better place. Calm down, rest, stop unnecessary medication, and stop wasting your doctor’s time. Commented Jul 23, 2023 at 5:41
Answer (score: -63)

I was reading this post extensively and I'm really worried by the reactions.

The first thing I remembered was the cab drivers' reaction when Uber came to my town. They reacted extremely angrily. They got together and persecuted Uber drivers. They used lawfare and political connections against Uber drivers, and they got into physical fights with Uber drivers, to the point that Uber drivers initially couldn't reveal themselves when they picked up a client, because cab drivers were constantly looking for them and picking fights with them and their clients.

This situation lasted a couple of months until they realized the inevitability of their fate. Some cab companies even tried to train their drivers to hand out candy and treat clients the best way possible. Nothing could resist Uber, and today there are very few cab drivers left in my town.

Now when I read this thread I notice some very worrying trends:

First, the level of the ChatGPT answers on this matter (the most upvoted answers) just shows how advanced it is. The sarcastic answer was terrifying.

Second, I saw that most people see the ban as the correct option, without having a reasonable way of distinguishing AI-generated answers from human answers. I think there are only two possible ways: letting users decide if it's an AI generated answer or having direct help from OpenAI itself. But I really don't think that humans will have the ability to tell one from the other. That leaves us with the only option of asking OpenAI for help. Has anyone contacted them yet?

Then the level of harshness towards those who advocated ChatGPT integration (the most downvoted answers) only reinforced the memory of the cab drivers' reaction. This worries me the most, because disruptive technologies have to be embraced from the start or things will only get worse.

Adding to this is a very compelling pro-AI factor: the fact that some users are really fed up with aggressive answers from humans on SO and would much rather interact with an AI that treats them well. This is getting so critical that some people have left SO altogether. I live in Brazil and I don't use the Portuguese SO because of the extreme rudeness I encountered there several times. The English SO is less bad, but I can assure you that if there were any other options, people would embrace them in a heartbeat.

Just as I and many other people were fed up with cab drivers' unethical behavior: trying to figure out whether passengers knew the town so they could take a longer path to the destination, not giving correct change, rudeness, and many other things. That made Uber irresistible. As soon as it was available, I never used a cab again, from the first day it was in town.

Finally, remember that ChatGPT is learning and its answers will only get better and better. What are wrong or bad answers now will probably be the best answers in the future.

My advice (which will make my answer quickly get to most downvoted): If we can't get OpenAI to help, integration with ChatGPT is the only possible option. Create a clearly labeled automated answer for each post from ChatGPT and let users downvote it if it's bad as with any normal user.

This way users will have an immediate answer they know was AI generated and they will know there is a greater risk of being wrong, just like we know with automated translations of text.

I know this will be unpopular because it will make it more difficult to build reputation points, especially if ChatGPT improves its answers. It's still better than losing all of SO or making fruitless attempts to differentiate AI answers from human answers.

Any other option will not stand this test.

Maybe this means that some time in the near future SO will no longer be relevant, because you can just ask an AI what the problem is with no need for human interaction. Well, if that is the case, SO is already doomed and needs to rethink its business model from scratch. If that's the case, it's better to embrace it as soon as possible. Humans can always help with comments and corrections, at least while it still generates wrong or bad answers. But if it gets really good at it, there is no possible future for SO.

If you can't beat them, join them - a popular proverb.

Resistance is futile; you will be assimilated - the Borg.

Edit on 2025-03-15

Just in case anyone wants to know how things are going for SO, here is a very informative graph.

This is an all-time count of questions posted per day:

[Graph: questions posted per day over the site's history]

This was done with the Stack Exchange Data Explorer; here is the query I used to generate it:

SELECT CONVERT(date, CreationDate) AS PostDate, COUNT(*) AS NumQuestions
FROM Posts
-- questions only, excluding weekend days
WHERE PostTypeId = 1 AND DATEPART(dw, CreationDate) NOT IN (1, 7)
GROUP BY CONVERT(date, CreationDate)
ORDER BY PostDate DESC;
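
If a smoother trend is wanted, the same data can also be bucketed by month. Below is a minimal variant of the query above; it is only a sketch that assumes the same Data Explorer Posts schema, and it drops the weekend filter since whole months are being compared:

-- Variant: count questions per month instead of per day
SELECT DATEFROMPARTS(YEAR(CreationDate), MONTH(CreationDate), 1) AS PostMonth,
       COUNT(*) AS NumQuestions
FROM Posts
WHERE PostTypeId = 1 -- questions only
GROUP BY DATEFROMPARTS(YEAR(CreationDate), MONTH(CreationDate), 1)
ORDER BY PostMonth DESC;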
33 comments
  • 49
    "remember that ChatGPT is learning and it's answers will only get better and better. What are wrong or bad answers now will probably be the best answers in the future." then why don't we discuss this in the future, rather than the present. We act on what we have right now. And right now ChatGPT can generate content that is very wrong and potentially dangerous. Which is a big part of the reason why it was banned. Commented Mar 16, 2023 at 17:14
  • 17
    Instead of comparing people to physically violent folks, I recommend to actually acknowledge and address the points that have been brought up for the umpteenth time already. Commented Mar 16, 2023 at 17:15
  • 54
    Uber vs cab drivers is a flawed analogy. Using an Uber, you get to your destination, just like a cab. If you have an "answer" provided by ChatGPT or other AI generation (at the current level of capability), you don't have an actual answer. You have "eloquent bullshit" that sounds like an answer. It is, sometimes, an answer, but it's quite likely to be hilariously wrong, self-contradictory, and/or insidiously wrong such that it takes a subject matter expert to see that it's incorrect. So, it's not actually an answer and is likely to substantially mislead readers. That's not a replacement. Commented Mar 16, 2023 at 17:21
  • 23
    I'll put it more laconically. You can ask ChatGPT whatever you want. It might even work for you. But you shouldn't be posting it here and representing it as your work. Worse, you shouldn't really look to use it in your code and represent it as your work, since depending on what you're working on, you could get bit hard by licensing. Commented Mar 16, 2023 at 17:22
  • 21
    @Makyen I guess it would be like hiring a cab and having the cab driver confidently take you to some other destination and then drop you off. Eventually you might realise you're in the wrong part of town, or even the wrong town altogether. Commented Mar 16, 2023 at 17:57
  • 24
    After ~30 answers that all state that AI is the future (I agree with that) and that ChatGPT is a big step forward (also agree), there is not a single post here that states a reason why ChatGPT answers should be posted on SO. Or how we deal with the fallout of people copy-pasting AI generated answers faster than they can be reviewed without any checking. Commented Mar 16, 2023 at 18:45
  • 22
    There are so many misconceptions in here... And every one of them has already been discussed in the answers here. Commented Mar 16, 2023 at 19:17
  • 18
    "the fact that some users are really fed up with aggressive answers from humans in SO and would much rather prefer to interact with a AI that treated them good." Why does that interaction need to be on SO? Commented Mar 16, 2023 at 19:21
  • 31
    "remember that ChatGPT is learning and its answers will only get better and better." Blatantly incorrect. The "P" in "GPT" stands for Pre-trained. It's not learning, and it's not getting significantly smarter. Certainly not smart enough to provide answers with any measure of consistent technical accuracy. Commented Mar 16, 2023 at 19:23
  • 27
    "If we can't get OpenAI to help, integration with ChatGPT is the only possible option." Again, why does it need to be on SO? Who is going to pay for that? What benefit is there to having SO embed the mediocre output, over users just going to ChatGPT if they want to? Commented Mar 16, 2023 at 19:24
  • 20
    "I know this will be unpopular because it will make more difficult to build reputation points" That has absolutely nothing to do with this... Commented Mar 16, 2023 at 19:25
  • 11
    "If you can't beat them, join them - a popular proverb." We aren't setting out to do the same thing as a chatbot. we are not a chatbot. See also The future role of Stack Exchange vs. emerging AIs and Could ChatGPT be a viable way to answer people's questions?- both of which I have written answers to. Commented Mar 16, 2023 at 22:41
  • 29
    Why are all the ChatGPT supporting answers analogies with things that have nothing to do with the subject matter? Commented Mar 21, 2023 at 10:30
  • 8
    "First the level of ChatGPT answers on this matter (most upvoted answers) just shows how advanced it is." - no, it really doesn't. Instead, it illustrates how vacuous typical marketing-speak really is. The top answer was labelled as being "for comedic and ironic purposes". Try actually reading the comments - you can easily see that people don't actually think the output reflects any insight, let alone being "terrifying". All the sarcastic answer proves is that people who wish to signal sarcasm in text are heavily reliant on certain conventions, as they lack tone-of-voice indicators. Commented Jun 12, 2023 at 11:18
  • 10
    AI powered search is not the same as using AI to dump questions onto SO. You're comparing apples and oranges, and weakening your entire argument while doing so. Meanwhile, you haven't answered a single misconception that was pointed out to you in these comments. Commented May 19, 2024 at 8:19
Answer (score: -78)

I get the point, but if you'll allow a lurker's five cents: I believe that ChatGPT has more to contribute to the platform than to hinder it. How about implementing the bot natively on the platform? Let it answer the questions and, if you want, put an alert saying "this is an automatic response and may contain errors". ChatGPT is helping me a lot, it's fast and practical. It may not (yet) be the right one, but it's enough to help get to the answer.

15 comments
  • 14
    We've been over this twice already. Commented Dec 5, 2022 at 12:03
  • 37
    You clearly didn't read the other answers here: ChatGPT writes bad answers, contradicts itself in the answers, and is extremely costly to implement on a scale SE would require. Commented Dec 5, 2022 at 12:03
  • 1
    “How about implementing the bot natively on the platform?” - No; These CGPT answers are absolutely horrible and useless. Commented Dec 5, 2022 at 12:11
  • 5
    Aggressive responses are one of the things that discourage people from posting here; that's another advantage of ChatGPT. By the way, have you tried telling it that the answer is wrong or bad? It usually fixes it. I won't insist, it's just my opinion. :) Commented Dec 5, 2022 at 12:11
  • 31
    [1/2] The problem isn't ChatGPT itself. Feel absolutely free to use it to solve your own problems. You may even use it during your research for writing an answer here. The real problem is users who copy-paste ChatGPT answers to SO at a high rate without even checking whether they are correct. We had a user yesterday who posted 20 answers in a little over an hour, where at least a third of the answers didn't even match the programming language of the question or were outright wrong. Commented Dec 5, 2022 at 12:14
  • 7
    And who is going to tell the bot the answer is bad, if it's just automatically showing the author of the question (who doesn't know the answer) whatever it generated? Who's to stop the bot from giving an incorrect, or even dangerous, answer? And who on earth is gonna pay for the bot? Commented Dec 5, 2022 at 12:15
  • 12
    [2/2] Unless you find a way to ensure that the person who copies the answer to SO makes sure it's a good answer and tells the bot when it is wrong, this isn't going to scale. You can't rely on volunteers here to vote on these answers to get that signal. That's not going to scale at the size of SO. Commented Dec 5, 2022 at 12:18
  • 3
    “ChatGPT is helping me a lot, it's fast and practical.” - But the answers users post based on output generated by CGPT are absolute trash. Low-quality answers generated by CGPT are beyond unhelpful. Feel free to use it, just Don’t Post its Output, and experienced users in the community can tell when an answer is based on useless CGPT output. Commented Dec 5, 2022 at 19:56
  • 1
    If ChatGPT helps you, there's nothing preventing you from using it. But in doing so, you're fully aware that the answers you're getting are coming from ChatGPT, and you probably know enough to at least take them with a grain of salt. The issue for SO is people expecting relatively high quality, moderated SO answers could be getting low-quality ChatGPT answers, usually without knowing it, and that's not a benefit to anybody. Commented Dec 7, 2022 at 4:23
  • It's funny, but I think this is involuntarily giving the right answer. Sure, it's pricey and won't be added, but a ChatGPT answer as the first answer would not be the answer; it would be a reference for any other answer to say: hey, this is an AI answer; if you're answering with this, you're an AI, and your answer will go straight to moderation (or be deleted) - (WOW, I've used the word "answer" more than anyone else here :D) Commented Mar 9, 2023 at 21:14
  • @nnsense "a reference for any other answer to say: hey, this is an AI answer, if you're answering with this, you're an AI" why do you think there is the AI answer here? ChatGPT can generate different answers based on how you've asked and/or based on your existing chat history in the session. Each can claim either A or B if there are two options available. It's not like any and all ChatGPT answers always choose A, for example. Yet again, the only thing ChatGPT does is generate plausible text. It doesn't take decisions on questions. Commented Mar 10, 2023 at 2:04
  • The topic here is: ChatGPT answers are banned. Fine, so you need something to understand that an answer is indeed taken from ChatGPT, and the only way I see is to have a reference to compare against. ChatGPT answers are more or less similar when the question is exactly the same; if someone used it just as a reference to answer, that's fine; the point is to avoid those who are copy-pasting from it. I can't think of any other way; filtering out ChatGPT answers can't just be "guessed", as that would create a lot of false positives. Commented Mar 10, 2023 at 19:13
  • The methods in use to detect ChatGPT answers aren't producing a lot of false positives. Commented Mar 10, 2023 at 19:16
  • @BDL With respect to "You may even use it during your research for writing an answer here." - the replies & comments I got to meta.stackoverflow.com/questions/425211/… seemed to imply otherwise. Doing exactly that is what made moderators delete the original answer I had posted here stackoverflow.com/questions/48119360/…. Shows rules are really not clear... Commented Jun 17, 2023 at 12:30
  • I just had used "bing chat" last week: it seems the progress in AI is currently from a 3-5 year old child to a student that has learned some parts by heart and randomly cites such parts, but does not really understand what it is all about. Specifically it showed that bing chat did not understand the code it suggested, nor could it explain it. Commented Dec 4, 2024 at 22:43
