5356

Moderator Note: This post has been locked to prevent comments because people have been using them for protracted debate and discussion (we've deleted over 300 comments on this post alone, not even including its answers).

The comment lock is not meant to suppress discussion or prevent users from expressing their opinions. You are (as always) encouraged to vote on this post to express your agreement/disagreement. If you want to discuss this policy further, or suggest other related changes, please Ask a New Question and use the tag.

This question remains because that is still the best, most prominent, and only permanent way that we have to announce this policy site-wide.

All use of generative AI (e.g., ChatGPT1 and other LLMs) is banned when posting content on Stack Overflow.

This includes "asking" the question to an AI generator then copy-pasting its output as well as using an AI generator to "reword" your answers.

Please see the Help Center article: What is this site’s policy on content generated by generative artificial intelligence tools?

Overall, because the average rate of getting correct answers from ChatGPT and other generative AI technologies is too low, the posting of content created by ChatGPT and other generative AI technologies is substantially harmful to the site and to users who are asking questions and looking for correct answers.

The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like they might be good, and they are very easy to produce. There are also many people trying out ChatGPT and other generative AI technologies to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of them. The volume of these answers (thousands), together with the fact that determining an answer is actually bad often requires a detailed read by someone with significant subject matter expertise, has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.

So, the use of ChatGPT or other generative AI technologies to create posts or other content here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT or other generative AI technologies after the posting of this policy, sanctions will be imposed to prevent them from continuing to post such content, even if the posts would otherwise be acceptable.

NOTE: While the above text focuses on answers, because that's where we're experiencing the largest volume of such content, the ban applies to all content on Stack Overflow, except each user's profile content (e.g., your "About me" text).


Historical context of this ban originally being "temporary"

When this ban was originally posted on 2022-12-05, it was explicitly stated as a "Temporary policy". It was specifically "temporary" because it was, at that time, a policy imposed by the subset of moderators who were present on the site over the weekend between the announcement of ChatGPT's public release, 2022-11-30, and the Monday, 2022-12-05, when this question was posted. The moderators involved strongly felt that we didn't have the right to impose a permanent policy on the site in this manner, but did have a responsibility to impose a temporary policy that was necessary for the site to remain functioning while discussion was had and consensus reached, and which also allowed Stack Overflow, the company, time for internal discussions about what policies it would adopt network-wide. So, after consultation with the company, the moderators present at that time chose to implement this as a "temporary" policy.

Since then, quite a lot has happened. Based on the voting for this question, it's clear that there's an overwhelming consensus for this policy. The company has decided that the specific policy on AI-generated content will be up to individual sites (list of per-site policies), but that even on sites which permit AI-generated content, such content is considered "not your own work" and must follow the referencing requirements. That requirement was later put into the Code of Conduct: Inauthentic usage policy. A lot more has gone on with respect to AI-generated content; so much, in fact, that it's not reasonable to try to summarize all of it here.


1. ChatGPT is an Artificial Intelligence based chat bot by OpenAI, which was announced on 2022-11-30. Use of ChatGPT is currently available to the public without monetary payment.

1
  • 1
    Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, or in Stack Overflow Chat. Comments continuing discussion may be removed. Commented Feb 26, 2023 at 7:28

67 Answers

14

In one line: I want a pure "human" platform, and for this reason I am on Stack Overflow.

In truth, I have learnt coding from SO, and I believe that every line of code on SO is tested by humans, on a real project, with their own hands. (This is mostly true for accepted answers.)

On the other hand, I used ChatGPT for only three days and came back. Please don't ask me about the experience; I do not want to remember that I could have lost my coding knowledge. Ufff!

3
  • 17
    "I believe that every line of coding on SO is tested by humans, on a real project with their own hands" - hahahaha. Commented Aug 24, 2023 at 15:30
  • @CodeCaster, so did AI test your question and code on its own project? Ha ha! All of it is fake. I know my opinion crosses words and thoughts, but note that the word I used is "believe". Consider the meaning of that word: "will be, has to be, or should be" hides silently behind it. The result is "could be". Commented Aug 25, 2023 at 12:05
  • 3
    At least your motivation is sound "I want a pure human platform". That's a valid preference that doesn't even need to be substantiated with reasonable arguments. However, I don't have that desire, I want a platform that will still be relevant in 10 years and for that to happen the attitude towards AI needs to change drastically. Commented Sep 5, 2023 at 13:42
13

The more I use ChatGPT, the more I agree with this policy. I asked it "Can you explain the different classes of amateur radio licenses?", and it said this:

There are three classes of amateur radio licenses in the United States: Technician, General, and Extra. Each class of license has its own set of privileges and requirements.

The Technician class license is the entry-level license for amateur radio operators. It allows the licensee to operate on all amateur radio frequencies above 30 megahertz, including the very popular 2-meter and 70-centimeter bands. Technician licensees are not allowed to use the HF (high-frequency) bands, which are the bands used for long-distance communication.

The General class license is the intermediate level license. It allows the licensee to operate on all amateur radio frequencies, including the HF bands. General class licensees are also allowed to use higher-power transmitters and to operate certain types of equipment that are not available to Technician licensees.

The Extra class license is the highest level of amateur radio license. It allows the licensee to operate on all amateur radio frequencies, including the HF bands, and to use the highest power levels allowed by law. Extra class licensees are also allowed to use a wider range of equipment and to operate in certain types of specialized modes, such as digital modes and satellite communications.

To obtain an amateur radio license, an individual must pass a written examination administered by a team of volunteer examiners. The examination covers the rules and regulations governing the operation of amateur radio stations, as well as basic concepts in electronics and radio theory. The level of difficulty of the examination increases with each class of license, with the Extra class license being the most difficult to obtain.

This seemingly very well-written answer is wrong in several ways.

  1. Technician licensees are allowed to use certain small parts of some HF bands, with restrictions.
  2. General licensees can use large portions of all bands, but there are Extra-only frequencies within many of the bands.
  3. On VHF and higher bands, all licensees have exactly the same privileges. General licensees being able to use higher power than Technicians is true on HF bands, but it is not universally true.
  4. Extra licensees have the same power limits General licensees do.
  5. Virtually any transceiver that an Extra can use can also be used by a General licensee.
  6. Even Technicians can use digital modes and satellites. All licensees can use all modes; they're just limited to certain frequencies.

If/when a better AI for answering programming questions is developed, I think it would be helpful to have an authorized way to post AI answers such as my suggestion from a previous answer, but having used ChatGPT more, I've realized that it isn't that AI (yet).

15
  • 2
    @Rubén I'd previously posted an answer arguing against the ban and suggesting a way to allow AI answers, but having used ChatGPT more I decided it's not really a good idea at this time. I don't want to delete my old answer because I still think it will be a good way to handle AI if/when it becomes good enough to be useful for answering SO questions. Commented Dec 11, 2022 at 17:38
  • 16
    The accuracy of describing radio licenses doesn't seem hugely relevant for Stack Overflow's topics. Commented Dec 11, 2022 at 17:38
  • 45
    @MisterMiyagi but it's still a good example of how ChatGPT can sound very knowledgeable and confident, but be totally wrong. Commented Dec 11, 2022 at 17:39
  • @Someone That's true. It wouldn't have triggered any of my usual red flags. Interestingly enough, it also doesn't trigger any of the things I look out for in ChatGPT answers. Commented Dec 11, 2022 at 17:40
  • 10
    Please bear in mind that this "question" is not a "call for whatever". It might be OK to provide feedback about how the situation was handled prior to the launch of the help article that bans the use of ChatGPT, but I don't see any reason to pollute this question with "ideas" or "opinions" Commented Dec 11, 2022 at 17:43
  • 1
    @Rubén so all of the other answers explaining why people support or oppose the policy are off topic too? I saw that there are quite a few of them and thought it was okay to post another. Commented Dec 11, 2022 at 17:44
  • Each "answer" should be directly related to the question; if you want to criticize other answers, that should usually be done by posting comments directly on the corresponding answer. If a comment doesn't provide enough room, it might be fine to post another answer, but it should include a link to the criticized post, and only if that is directly related to the question... answers should not divert from the question topic. Commented Dec 11, 2022 at 17:48
  • 2
    @Rubén I'm not really criticizing any answer. I'm just giving another piece of information supporting the policy. Commented Dec 11, 2022 at 17:49
  • This is not the place for that, and regarding the temporary ban set by the mods, that is late. Commented Dec 11, 2022 at 17:50
  • @Rubén since it's temporary, that means it could end. I would suggest that it not be ended unless/until ChatGPT improves significantly. Commented Dec 11, 2022 at 17:53
  • It looks that you missed New help center article and banner on the site about GPT-Generated content Commented Dec 11, 2022 at 17:54
  • The answers by ChatGPT are most of the time very general. Maybe good if you don't know anything about a topic, but not good enough to learn much more about a specific problem, not to speak of solving one. It's definitely not a problem solver, although it's kind of a search engine giving you some general insight into topics. That's what the current status of ChatGPT seems to be to me. Commented Dec 11, 2022 at 22:33
  • This is an illustration of the fallacy of how the answers are farmed. The Extra class manual contains a section on satellites not found in the other manuals, and it shows up as a topic covered deeply only on that license exam, but it has nothing to do with what's allowed by that license at the exclusion of others. Similarly, a question about Oracle might wind up drifting off into ISO SQL or vice versa. -- K7COI Commented Dec 13, 2022 at 18:42
  • @MisterMiyagi, I disagree. This is a great illustration of the fallacy of farming answers by the false appearance of textual relevance. Commented Dec 13, 2022 at 18:44
  • as long as it is an answer that can be used in certain circumstances and no one is around to dispute it, it is a useful answer (conditionally) Commented Jun 17, 2023 at 10:12
13

My take on AI-generated answers is this:

Using ChatGPT and similar services is very easy. It's comparable to Google Translate. If you want an AI-generated answer, then you simply use one of those services.

Compare to the language sites. If you ask for a translation, then what you're looking for is something other than what Google Translate produces. If you wanted something from there, you would use that service instead of asking on a forum.

I think it's good to assume that a user who is asking a question on a forum has tried those easy-to-use services and found them unsatisfying. Note that I'm not saying that it is likely that they have done it. Just that it is good to assume it, in the sense that you should answer a question as if ChatGPT or Google Translate was not enough for the asker.

8

What if we fend off AI-generated content with AI-assisted moderation?

The video sharing service I use most often in China, BiliBili, has an AI-based moderation bot called Avalon. It monitors for harmful content, makes automatic decisions when the harm score is high, and defers to human moderators when it's lacking confidence. It constantly improves itself based on the evolution of content and input from human moderators. (Of course, being in China, we also use it for censorship in addition to day-to-day moderation.)

This is just my personal opinion, but I think investing in an AI-assisted moderation system is worth it in the long term.
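The routing logic described above can be sketched roughly like this (a minimal illustration only; the threshold values, names, and `ModerationDecision` type are invented for the example and have nothing to do with BiliBili's actual system):

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real system would tune these against labeled data.
AUTO_REMOVE_THRESHOLD = 0.9   # act automatically above this harm score
AUTO_APPROVE_THRESHOLD = 0.1  # approve automatically below this harm score


@dataclass
class ModerationDecision:
    action: str        # "remove", "approve", or "defer_to_human"
    harm_score: float


def route(harm_score: float) -> ModerationDecision:
    """Act automatically only when the model is confident either way;
    everything in the uncertain middle band goes to a human moderator."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", harm_score)
    if harm_score <= AUTO_APPROVE_THRESHOLD:
        return ModerationDecision("approve", harm_score)
    return ModerationDecision("defer_to_human", harm_score)
```

The key design point is the middle band: the wider it is, the more work lands on humans, but the fewer confident-but-wrong automatic decisions the bot makes.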

8
  • 10
    The current situation with moderation bots on Quora (with unspecified IQ) is a complete disaster. Though detection of text as images could be a useful addition here. Commented Dec 8, 2022 at 22:57
  • 3
    There was the unfriendly comments detector robot where they used AI-assisted moderation already with moderators being the final decision makers but an AI model was used for automated flagging. The same could probably be done for a "really bad answer" category. However, it's kind of sad to see that this basically results in a technology battle between spammers and cleaners, instead of humans learning how to improve their skills. Commented Dec 9, 2022 at 9:35
  • 4
    This isn't up to SO moderation but rather up to the company behind SO. Good luck convincing them. Commented Dec 10, 2022 at 21:15
  • I like the idea of AI-assisted moderation. Humans still making the final decisions, but automation providing tools to make their job easier/quicker. Still, banning people from posting ChatGPT answers as their own answers seems like an obvious no-brainer to me... Commented Dec 13, 2022 at 4:01
  • 2
    I don't know. I've been flagged on MSN.com numerous times for stating various things such as: political views; religious views; and, I believe sometimes just simply stating facts. The problem is the algorithm on MSN.com has a notable liberal bias and Microsoft doesn't seem to have any intention of correcting that. This actually goes against Microsoft's core principles of: inclusiveness, fairness, transparency and in some cases even safety... Commented Feb 28, 2023 at 20:40
  • This is a contradiction. You want to ban AI with the very same technology that you are banning? This sounds like the old computer science problem of a program that debugs itself - not possible. Commented Jun 5, 2023 at 16:02
  • What if INDEED - stackoverflow.co/teams/ai Commented Jan 9, 2024 at 14:26
  • 2
    @bmike OverflowAI is not "AI-assisted moderation" Commented Jan 9, 2024 at 14:29
7

The spirit of Stack Overflow is that it's from developers, to developers, which is quite a bit more than a question-and-answer site.

While generative AI can often give right answers, especially for simple-to-medium questions, there are a billion questions on Stack Overflow which are already more helpful than their answers. This spirit needs to be preserved.

And why would someone use an LLM to automate answering? I don't see any other reason, except abuse of the rating system. Abuse should be banned.

3
  • 1
    There are 23,975,364 questions on Stack Overflow. Commented Nov 1, 2023 at 15:38
  • 1
    In what way are questions more helpful than answers? Commented Nov 1, 2023 at 15:38
  • 4
    @PeterMortensen there are plenty of questions which show an almost-working solution to a problem but require a tweak the author doesn't know. It's a well-known phenomenon on SO that sometimes questions are more helpful than answers. Another way questions can be helpful is in getting to know things you never imagined exist or are a problem, just by scrolling through them. And of course, figuratively speaking, there are a billion helpful questions; sorry for the confusion, won't edit. Commented Nov 1, 2023 at 21:31
6

I agree with the ban. Stack Overflow is for questions that the author researched and tried to find an answer for and is still stumped. Questions that require a decent level of expertise to answer appropriately.

If an answerer can paste the question into ChatGPT and get an answer, so can the asker. If a question could have been answered correctly by ChatGPT, then probably the question was poorly researched anyway.

But high-quality questions deserve high-quality answers. A good asker would have already put their question through ChatGPT, not gotten a satisfactory answer, and is asking on Stack Overflow for an answer written by a human expert.

-1

I'm not advocating for its immediate removal, but given that this has been a permanent policy for a while now (and a temporary policy for way longer), and there are now additional ways of letting people know about the policy - mainly the notice in the answer box for new contributions, and to a lesser extent the help center article - is it still necessary to have this post?

Even if we have to wait a bit for the heat around the OpenAI partnership announcement to die down before it's removed.

2
  • 6
    We'll probably unfeature it in the not-too-distant future, but we (the moderators) wanted to leave it up a little bit longer after the removal of the temporary status. Overall, though, I agree it should be unfeatured. Commented May 21, 2024 at 0:34
  • 8
    People have been ignoring the featured post since it's been featured. Also the help center article. And (as of more recently) the notification on the answer box. Then they tried to claim "But how would I have known?!" It's quite satisfying to be able to point them to the multiple things they ignored while claiming they weren't informed. Commented May 21, 2024 at 5:53
-3

I've waited a long time before writing this, because I think at this point we have all been able to digest and rationalize what is happening around this phenomenon.

First of all, I fully agree with the temporary policy, and I am in favor that it becomes a "permanent temporary" one.

The main reason I hold that opinion is not the fact that the tool is available, but the way so many people were using it: as a copy-pastable BS generator for social networking engagement (in our case, SO reputation). This alone spawns several reasons why it should be banned, but I don't have to point them out anymore, as it should be common knowledge by now.

I am not in favour of computer-aided code writing as a tool to show examples of how a particular question can be solved. That is a job for GitHub Copilot or other places, not for this website. People can go straight to GPT-3's playground and ask the program to write code for them as an additional reference, but Stack Overflow should remain as human-aided different points of view in a given problem in the form of a question.


On the other hand:

I would like to add to the debate "legitimate" use of the technology. Consider some people who are savvy on the topic of the community (in our case, programming), but they are not proficient in the use of English or in a general sense, for whatever reason, haven't developed very good communication abilities.

Would you consider the potential contributions of such a person less valuable than another one who can express themselves "better" (in the sense of getting their point across close to the most optimal possible way)?

Now take the conclusion you've reached by reading the last two paragraphs and let's see what professionals of the marketing area are doing. They're using ChatGPT as a tool for computer-aided writing. Sure, the lazy ones are just copying and pasting whatever the program spits out, but in the hands of a capable professional, ChatGPT is much more powerful and is faster than hours of googling, which is what they were doing until now.

Also, that is what we programmers were doing until now. Of course, our main sources of inspiration are probably Stack Overflow, but there's a lot of forums around. Just like Server Fault users rely mostly on Server Fault, but the ArchWiki is still a reliable curated source of information, among other wikis and forums. But we all became dependent on search engines like Google Search.

Come to think about it, it came as no particular surprise to me that Microsoft decided to integrate ChatGPT into the Bing search engine.

There is a specific ELI5 prompt on the ChatGPT API examples page that tries to show how to use the power of synthetic dialectics to further clarify a subject. The job of a communicator is to find the best words in the best order to convey an idea to a specific target group. And in that field, ChatGPT is not a terrific tool, but it can aid many people as it is right now.


My point is that computer-assisted writing is a beneficial thing. People can use it primarily to write better questions, but also to improve their answers' wording. This is especially useful for the handful of people I described earlier: those proficient in programming, but not as much in writing English, or in communicating in general.

The distinction between computer-assisted writing and copy pasting from a BS generator should be obvious.

I could send this whole answer for proofreading to humans or to ChatGPT, and I'd bet on excellent feedback from the humans, but only reasonably good feedback from the program. It would look less like I'm the author of it, but in both cases you, the reader, would struggle less to understand what I want to tell you. DISCLAIMER: I did neither. This is 100% my first take on writing the answer, without revision.

Finally, my opinion is that people should somehow feel that it's OK to use computer programs to aid their writing. I'm not sure how the anti-GPT policy could be further improved to include this, or if it is even necessary.

I will repeat this paragraph from the beginning, now that the reader has a new perspective:

I am not in favour of computer-aided code writing as a tool to show examples of how a particular question can be solved. That is a job for GitHub Copilot or other places, not for this website. People can go straight to GPT-3's playground and ask the program to write code for them as an additional reference, but Stack Overflow should remain as human-aided different points of view in a given problem in the form of a question.

12
  • 15
    Why use ChatGPT for computer-aided writing? Why not use translators and/or Grammarly and/or other tools that fill this niche? Honestly, the most likely outcomes I can see from using ChatGPT to "aid" writing are: a) copy/paste, so pretty much exactly the same as now, but the user can claim "me not good speaker, used ChatGPT to aid"; or b) ChatGPT fumbles the writing and does not correctly represent the idea the user wanted to put across. There is also a chance that it's OK; however, what are the chances that a user who needs tool assistance will recognise bad output? Commented Feb 20, 2023 at 18:44
  • @VLAZ looks like ChatGPT agrees with you: platform.openai.com/playground/p/… Commented Feb 20, 2023 at 19:12
  • 14
    Code completion isn't a substitute for professional engineering. Using code completion tools doesn't make you an engineer, let alone a good engineer. Code completion tools optimize an expert's workflow, but it's not the workflow unto itself. Commented Feb 20, 2023 at 19:21
  • 4
    @Makoto I stated two times the exact same paragraph in my original unedited text arguing against code completion on SO Commented Feb 20, 2023 at 19:44
  • 4
    Then you've used a whole lot of words to simply say that you agree with this. There's been plenty of prose on this matter and there's really not a lot of value to keep adding to it unless you're offering a completely unique take on it. Commented Feb 20, 2023 at 19:49
  • 7
    I'm sympathetic to the concept of using LLM-based tools (not necessarily ChatGPT) to clean up one's writing. But there's a danger that the tool will say something different to what the writer intends, especially if their English skills aren't strong. That could lead to greater misunderstanding. If I see clear confident English I tend to assume the author knows what they're talking about. If the writing has a few flaws I can tell the author is struggling to express themself, so I might not have the correct interpretation, and I may need to post a comment for clarification. Commented Feb 20, 2023 at 19:50
  • 5
    @PM2Ring fun fact - you can interact with ChatGPT in languages other than English. For example, you can instruct it to translate something in English. However, the training it has had other than English is a bit hit and miss. I have no confidence whatsoever it will be able to correctly pick up the meaning in another language. I've seen it very badly misinterpreting rather simple prompts when not in English. Commented Feb 20, 2023 at 19:58
  • 8
    I’m really not a fan for using ChatGPT to translate or significantly brush up technical writing. Since the tool basically makes up stuff, it is absolutely vital that the posting user can verify all of its output; if that output is much more sophisticated or even entirely unintelligible to the posting user, they can’t do that. Commented Feb 20, 2023 at 20:29
  • @Makoto agree with what? Your wording makes me think that you think I'm either agreeing or disagreeing with the anti ChatGPT policy, which is not the point of my answer whatsoever. Commented Feb 20, 2023 at 23:51
  • @PM2Ring that is a fair point, but in this circumstance people are already struggling with tools like Google Translate and getting worse results. If they include the customary disclaimer that they're not proficient in English and they're being aided by online tools, they'll be fine as always Commented Feb 20, 2023 at 23:53
  • @VLAZ I happen to use ChatGPT exclusively in Brazilian Portuguese, and I thought that was the same output I'd get in English. Commented Feb 20, 2023 at 23:55
  • 11
    Even during translation or editing, ChatGPT is prone to introducing falsehoods or misstating facts to the level that they become incorrect. Often this creates the appearance of lying from the author, when the intent was simply to address any grammar or spelling issues. The tool really causes more harm than good, and that is why it is banned. Commented Feb 20, 2023 at 23:55
-6

LLMs are trained on data from sites like SO. What do you think is going to happen when LLMs are trained on content created by LLMs? That can only create a self-reinforcing loop of generating incorrect nonsense, completely damaging the usefulness of both SO and the LLMs themselves.

LLMs do not really understand the topic. They only understand relations between words. They don't really have insight in the subjects described by the text. If we're going to post LLM-generated content here on SO, over years it will lead to the latest more advanced LLMs being trained on data produced by early simple LLMs (potentially without being verified by someone with actual insight).
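The feedback loop sketched above can be illustrated with a toy simulation (invented purely for illustration; it is not how any real LLM is trained): repeatedly fit a word-frequency "model" to a finite sample drawn from the previous generation's distribution. Any word that happens not to be sampled in one generation drops out of the model for good, so the vocabulary can only shrink over time:

```python
import random
from collections import Counter


def next_generation(dist: dict[str, float], sample_size: int,
                    rng: random.Random) -> dict[str, float]:
    """Train generation N+1 on a finite sample of generation N's output."""
    words = list(dist)
    weights = [dist[w] for w in words]
    sample = rng.choices(words, weights=weights, k=sample_size)
    counts = Counter(sample)
    # The new model's support is exactly the set of words that got sampled.
    return {w: c / sample_size for w, c in counts.items()}


rng = random.Random(42)
# Generation 0: a vocabulary of 200 equally likely "words".
dist = {f"w{i}": 1.0 / 200 for i in range(200)}

support_sizes = [len(dist)]
for _ in range(20):
    dist = next_generation(dist, sample_size=100, rng=rng)
    support_sizes.append(len(dist))
# support_sizes is non-increasing: each generation's vocabulary is a
# subset of the sample drawn from the previous one, and lost words
# have probability zero and can never come back.
```

The shrinking vocabulary here stands in (very loosely) for the diversity and accuracy an LLM loses when its training data is increasingly its own unverified output.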

Some might argue it's possible to prevent this by voting for/against the posts. But how can you be sure the LLMs are not voting as well? Also, casting a vote does not require voters to explain why they vote the way they do, nor to prove their insight.

For this reason, I think the policy to ban AI-generated content is correct.

5
  • 7
    This doesn't in any way answer the policy, or comment on it... Commented Apr 3, 2024 at 12:59
  • 5
    "But how can you be sure the LLMs are not voting as well?" That's not a feature LLMs have... Language models generate text. They don't interact with websites. Commented Apr 3, 2024 at 13:02
  • "Language models generate text. They don't interact with websites." Oh really? How does the LLM-generated content get posted, then? Obviously someone needs to copy-paste it, or write a script to do that, based on LLM output. I think this is beyond the scope of my argument. You can present an LLM with a post from SO, let it decide, and then vote on its behalf, obviously. Commented Apr 3, 2024 at 13:02
  • 8
    That's... Wat... Look, if a user decides to copy a post into a LLM and vote on that post according to what the LLM says the user should do, then that user is free to do so. That's not the LLM voting, that's just the user using a convoluted process. If someone were to automate that, they'd get banned very quickly, as you're not allowed to automate voting. None of that has anything to do with an LLM ban. Commented Apr 3, 2024 at 13:12
  • 5
    Today's ChatGPT fail ("principal" is the exact same as, well, "principal") Commented Apr 4, 2024 at 0:38
-8

I see things differently.

1. We all gain from using AI tools

We all know that AI tools are created by developers like us, not only for the end user, but also for the developers. Even an expert prefers an editor with colored code rather than a black-on-white editor.

The world of AI is developing rapidly, and we all benefit from finding cases where tools like ChatGPT will be useful for our community, and from learning how to use them, instead of acting like it's some weird tool whose workings we don't understand.

Having tested ChatGPT, I find it to be a very good tool for general knowledge. Like other testers, I found a lot of mistakes in the answers when I ask technical questions, but it is still a great tool that we should use in the community.

2. The ban should be permanent for answers generated from any resource.

I personally use Stack Overflow because I'm a novice, and it has helped me a lot, especially when I started: it was my first search engine. I think it's not about ChatGPT or other tools, but about developers who copy and paste answers from resources without being able to validate or explain them in context.

ChatGPT just increased the priority of problems we've already had.

While AI is getting more and more accurate, it's not very efficient for us to allow an AI-generated response in a response thread, because our human response (even if incorrect) is information about how we think (in a computing context), and that's useful data in analysis.

Allowing AI-generated responses would corrupt this data set. A simple example: without AI responses, we can currently say "According to Stack Overflow data, developers learn 2 more languages and 5 more frameworks every 5 years". This kind of information is useful, and it's just one of thousands of cases. So I think we definitely need to ban AI-generated responses in our answer threads.

It takes time to write a good response, and people who know what they are talking about shouldn't want to copy and paste responses from a resource. So I suggest adding some new features to the answer editor:

  • Disable pasting as plain text: the developer should write everything in their own words, and even if there is a typo, that is not a problem. We can have tools that clone and correct the written response before publishing the correction.

  • Highlight all pasted data and make it non-editable. A more powerful change would be to manage it as a resource that must have a link and a description of how to find (or generate) it.

3. AI should be used in Stack Overflow

As I said, I really think ChatGPT is a good tool, and that's why I suggest the community train its own model: a text-based bot that is more accurate in the computing context and integrated into our search bar. That way, when searching for something, developers can choose between the bot-generated answers and the solutions provided by other developers.

3
  • It's not clear to me what your proposal is for using AI in the site (is it only to improve the search functionality?), and it seems way off on a tangent anyway. If your proposal is that the site should run questions through ChatGPT first before offering a submit button, that has already been roundly rejected: it risks a flood of people leaving with broken code that we don't know anything about. Commented Jan 6, 2023 at 23:12
  • "We need to highlight all the pasted data and it should not be editable. A more powerful change would be to manage it as a resource that should have a link and a description of how to find (or generate) that resource." Aside from being absurd, this would be trivially circumvented by anyone who knew about it, by just doing the copying and pasting in a separate program. I am generally against putting arbitrary speed bumps like that into the UI, especially if they are not very explicitly explained. Commented Jan 6, 2023 at 23:14
  • seems kinda odd to expect a language model to be able to link user search queries with accurate search results. Wouldn't any other "AI" built for search indexing serve that purpose better? Commented Jan 6, 2023 at 23:15
-9

It's 2024. I have visited 3 pages of answers. None of them mentions the word "copyright".

I'm just trying to contribute to the first post of this thread by adding a mention that this is not "just" a community policy about accuracy; the reasons for the ban should also stress that it is a necessary proactive moral and legal measure, to avoid additional plagiarism and copyright infringement.


LLMs Cause Extra Copyright and Credit Nightmares

Popular large language models are like Pandora's boxes, trained on millions and millions of obscure copyrighted materials. This can surely cause additional potential copyright violations and plagiarism that are tricky to identify proactively, and avoiding them matters for the long life of the Stack Exchange network, keeping it free of tedious extra lawsuits and mass "content takedown" requests.

Because LLMs Do Not Give Authorship Credits

Even setting copyright aside, popular LLMs do not mention the author, so they do not respect moral rights, and they do not meet our sane referencing standards:

https://meta.stackexchange.com/help/referencing

...for Technical Reasons

No known popular large language model is currently designed to give you its original sources. A "generic human" can tell you who taught them that 1 plus 1 equals 2: they can find their early math schoolbook and point out which page of that book says so. LLMs, instead, are trained differently and simply cannot give references in the expected way.

Some advanced LLMs act as if they can give you references, but if you pay attention, even these are "just" capable of sharing "further information", and only after the text has already been generated (e.g., Microsoft Copilot, ...).

...for Political Reasons

Moreover, popular LLMs usually do not even publish their training dataset (and this is a political issue, not a technical one, since the dataset could be shared at any moment, especially by organizations that have "Open" and "AI" in their official name).

Indeed, this practice of keeping the dataset closed does not simplify the backward work of attributing generated text to the right sources.

Because we are supposed to release under CC BY-SA 4.0

As I hope everyone has already noticed, every single page of the Stack Exchange network carries this phrase in the footer of the website:

Site design / logo © 2024 Stack Exchange Inc; user contributions licensed under CC BY-SA.

Please take 60 seconds to read this page, if it's your first time:

https://stackoverflow.com/help/licensing

It simply means that new content must be covered by these terms:

https://creativecommons.org/licenses/by-sa/4.0/

(Again please take additional 60 seconds to read this ↑)

Also note that there is no "unless otherwise noted" in the above SE copyright terms.

TL;DR

Both Stack Exchange Inc. and the SE community probably do not have sufficient time to fight this additional copyright risk, and the moral risk of giving insufficient credit to original content authors.

The potentially high benefits do not outweigh the potentially very high risks, especially without enforcing extra care and awareness.

Next Steps?

  1. SE policy lacks an "unless otherwise noted"
    • At the moment, the copyright terms of SE do not include the phrase "unless otherwise noted". That phrase would be quite useful, since our planet has billions of pieces of content under thousands of licenses, and answers are very often of the form "«bla bla bla» very long snippet citation1 citation2 citation3". Such answers are not content under CC BY-SA 4.0; they are content released under the terms of the upstream copyright holder. Usually, official code snippets are pasted into answers here on Stack Overflow merely as quotations, to make the upstream documentation easy to find. So indeed, with or without AI-generated content, a global "unless otherwise noted" would probably help when quoting external content (ChatGPT included, I guess...).
  2. Evaluate "fair use" policies
    • If you know what you are doing, small portions of proprietary sources can be used even if they are "all rights reserved". But you must clarify that the content is not yours, and you should explain why you believe the content can be shared as "fair use". Note that the Wikipedia community has interesting "fair use" policies, while the Wikimedia Commons community generally does not accept fair-use content. Clarifying such a policy on our website may become necessary sooner or later, with and without AI, and with and without AI content that is assumed "all rights reserved" by default.
  3. Evaluate big disclaimers about AI-generated contents
    • Basically, something like https://commons.wikimedia.org/wiki/Template:PD-algorithm, which is currently embedded in some multimedia files to say that «This file is in the public domain because it is the work of a computer algorithm or artificial intelligence and does not contain sufficient human authorship to support a copyright claim. The United Kingdom and Hong Kong provide a limited term of copyright protection for computer-generated works of 50 years from creation. 1 2».

So, I think the current ban is OK. Before even discussing a reactivation, we should at least address the above points, to improve the legal safe space for editors as well as readers, and to use this kind of tool in a legal way.

21
  • 11
    This point is kinda moot, since posting AI-generated content as if you wrote it yourself is more or less plagiarism, any way, regardless of if the content in and of itself is plagiarism. Commented Jun 20, 2024 at 23:16
  • 1
    2nd definition in this context: en.wiktionary.org/wiki/moot_point Commented Jun 21, 2024 at 0:00
  • Ah, thanks. I wonder if at least a small additional note about potential copyright issues and plagiarism could be added to meta.stackoverflow.com/q/421831/3451846 in a short phrase (avoiding my late wall of text, indeed). Also maybe stackoverflow.com/help/referencing, since they seem somewhat related to community standards (variable) and not so much to laws (not easily variable) Commented Jun 21, 2024 at 0:39
  • 1
    My point is that the copyright argument is irrelevant, since even if the generated content didn't have any copyright issues, it's still a form of plagiarism, as the user does not write AI-generated text themselves. Commented Jun 21, 2024 at 7:21
  • (Premising that, generally, plagiarism is not a copyright infringement, so plagiarism does not necessarily cause legal penalties.) I'm just trying to contribute to the first post of this thread, to add a mention that this is not "just" a community policy about accuracy; the ban should be presented mainly as a proactive moral and legal measure. Unfortunately, comments on the first post are locked and I cannot say that there. But that's my small scope: giving these two strong reasons for the current ban. Commented Jun 21, 2024 at 7:41
  • 3
    The reason for the ban is the low quality of the content, though. SE was being flooded by good-looking nonsense. Whether or not AI-generated content even qualifies as copyright infringement is still up for debate. I mean, I get what you're suggesting here, but I don't think SE needs to "dilute" the reasoning for the ban. Commented Jun 21, 2024 at 7:45
  • Dilute? I'm not talking about removing any reason. I'm just trying to add 1 or 2 (still strong) additional reasons. Commented Jun 21, 2024 at 7:51
  • 1
    The "moral" issue is a red herring. A huge amount of answers copy information directly from official sources like learn.microsoft.com. That particular site specifically states that user's may not copy content. So given the moral issue isn't enforced for humans, why would it be any more burdensome to not enforce it for LLMs? Commented Jun 24, 2024 at 11:50
  • @IanNewson Premising that quoting somebody is generally allowed by the right of citation (fair use, outside copyright), and that mentioning "learn.microsoft.com" already respects moral rights, since there is attribution. LLMs, instead, generally do not respect these. "ChatGPT said" is indeed a misleading citation, and just potential plagiarism. Commented Jun 25, 2024 at 8:23
  • @ValerioBozz I don't understand your point. The copyright holder determines what is allowed, not StackOverflow or contributors here. If the copyright holder determines that attribution is acceptable to meet their terms then that's great. However that's never been enforced on SO so trying to say it's morally wrong if ChatGPT does it but not if a human does is obviously inconsistent. Commented Jun 25, 2024 at 23:18
  • @IanNewson I agree when you say the copyright holder determines what is acceptable; about ChatGPT, by the way, it's all rights reserved. The thing is, we cannot even use it under some fair uses, since the right of citation makes no sense for ChatGPT: it is not a source of information (contrary to any other author, documentation, or source of truth, including learn.microsoft.com, to follow your example). That is why it would still be useful to mention in our policies that LLMs also cause additional trouble with copyright and moral rights, since they were not designed to respect them. Commented Jun 26, 2024 at 8:02
  • 3
    @IanNewson "However that's never been enforced on SO" - if you mean that we don't require attribution of quoted content, that's not true, and we do enforce that. See How to reference material written by others. Commented Jun 26, 2024 at 8:32
  • @ValerioBozz What do you mean by "about ChatGPT, it's all rights reserved"? Commented Jun 26, 2024 at 8:33
  • 1
    @ValerioBozz "premising that all footers of their websites have 'OpenAI ©'" They don't; in fact, I cannot find any page on chatgpt.com that does (though I don't think this is relevant under US law, at least). "without further indications mentioning the word 'copyright'" The terms specifically say "We hereby assign to you all our right, title, and interest, if any, in and to Output.", and "all our right" is broader than just "copyright". Commented Jun 26, 2024 at 9:43
  • 2
    I think that not everything in this answer is correct but it does put emphasis on an important aspect that legality of GenAI created content is currently contested in courts. One could wait until decisions are made there although that can take a long time often. Commented Feb 25 at 8:23
-14

Having used this site since pretty much the beginning, I believe generative AI allows gamification of the whole system, destroying the basic concept of developers helping developers. Maybe the focus of the site has changed since the beginning, but the whole idea of encouraging people to contribute was to award reputation and badges. People have come along with their favorite Gen AI tool and answered questions they have no knowledge of, so they can gain these in the hope of putting them on their resume or otherwise impressing people. That is why Gen AI should not be allowed: this site is about people helping people.

1
  • 2
    I think this alone is a weak line of reasoning. Commented Apr 1 at 0:21
-14

I'm not advocating for or against, but I like one of the highest-voted answers, which asked ChatGPT itself.

Though, to be honest, it was from 2022, and ChatGPT has improved quite a bit since then.

I believe this should be revisited and deserves an update: let's say... AI representing itself, but 3 years later (and smarter).

So let's try again:

  1. Why should Stack Overflow ban AI-generated answers?

GPT-5 content below:

There are several reasons why Stack Overflow has chosen to ban AI-generated answers (at least by default) and why many in the community support this decision:

  1. Accuracy and reliability risks

AI outputs often look convincing but may contain subtle or serious errors.

A wrong but well-phrased answer can mislead readers more than a clearly bad one.

Debugging incorrect AI content consumes community time and undermines trust.

  2. Verification challenges

Moderators and other users cannot easily determine whether a response was AI-generated or human-written.

The burden shifts to the community to fact-check large volumes of answers, which is impractical.

  3. Dilution of quality standards

Stack Overflow’s value lies in curated, peer-reviewed, high-quality answers.

Allowing AI content in bulk risks drowning high-effort, expert-level contributions under a flood of mediocre posts.

  4. Reputation system integrity

Users gain reputation points when their answers are upvoted. If AI is used to mass-generate content, individuals could unfairly accumulate reputation.

This undermines the incentive structure for genuine expertise and effort.

  5. Legal and ethical concerns

AI models sometimes reproduce copyrighted or proprietary material.

There are also risks of plagiarism, since AI may restate existing answers without attribution.

  6. Community sustainability

If the site is filled with low-signal, AI-produced answers, knowledgeable experts may stop contributing.

That would weaken the long-term health of the platform.

That said, there is an ongoing debate. Some argue that responsible AI use (e.g., drafting, summarizing, or assisting with research) could be beneficial if transparently labeled and verified. But the default ban exists to protect the knowledge base from being overwhelmed before those safeguards are workable.

  2. And why should it allow AI content?

GPT-5 content below:

Reasons not to ban AI content entirely:

  1. AI can accelerate problem-solving

Developers often already use AI tools privately (ChatGPT, Copilot, etc.) to get a starting point.

Allowing curated AI answers on Stack Overflow could save time for routine or repetitive questions.

  2. AI can lower entry barriers

Novices who struggle to articulate answers might use AI as a scaffold to contribute.

This could democratize participation, instead of limiting the site to only advanced users with strong communication skills.

  3. AI is already part of the workflow

Pretending that developers don’t use AI is unrealistic. Stack Overflow could position itself as a place where AI output is validated by experts.

That would add value: “AI draft + community correction = verified knowledge.”

  4. Ban is difficult to enforce

It is nearly impossible to prove whether text was written by AI, rewritten by a human, or inspired by a human using AI.

Enforcement risks being inconsistent and arbitrary, frustrating honest contributors.

  5. Risk of irrelevance

If Stack Overflow rejects AI completely, developers might migrate to platforms that embrace AI collaboration.

This could accelerate the perception that SO is “hostile,” “outdated,” or only for gatekeepers.

  6. AI can improve existing knowledge

Properly guided, AI can help create better explanations, refactor unclear posts, or summarize duplicates.

If labeled clearly (“AI-assisted answer, verified by human”), it could enhance—not undermine—the knowledge base.

So the core anti-ban argument is: AI is a reality of modern programming, and Stack Overflow should evolve to integrate it responsibly rather than prohibit it absolutely.

👉 The real debate isn’t AI vs no AI—it’s how to balance speed and accessibility against accuracy and trustworthiness.

References: https://chatgpt.com/share/68d3de79-2154-8012-b3fc-d18e028e40e3

So there you are... much better than in 2022, when LLMs were just a joke to laugh at. Now they are integrated into our IDEs and part of our daily lives (except on Stack Exchange, obviously).

20
  • 14
    The reasons why Stack Overflow does not allow AI-generated answers have not changed since this post was written. AI still gives wrong and misleading answers; people who come here want answers from actual experts and are not interested in AI wild guesses. And those who cannot participate in answering without using AI should not be posting answers in the first place, as they don't have the knowledge to write the answers themselves and are in no position to judge the correctness of AI answers. Commented Sep 24 at 12:23
  • 3
    Also, SE ran an experiment with an AI answering bot, and it didn't go well: AI-generated Answers experiment on Stack Exchange sites that volunteered to participate Commented Sep 24 at 12:25
  • @Dalija Prasnikar That's very good additional information - especially the AI experiment! Thank you for the link! Commented Sep 24 at 12:26
  • 3
    "Now they are integrated into our IDEs and part of our daily lives (except on StackExchange obviously)." And even that IDE integration is janky, at best. The points against integrating AI into SE really haven't changed over the past 3 years, and AI hasn't solved any of the issues raised in those points yet. Just because you can get an AI to write some (questionable) pro-AI points, doesn't mean that that suddenly invalidates all issues SE has with AI. Commented Sep 24 at 13:01
  • @Cerbrus Obviously you need an experienced man in the middle (a developer in the middle). I'd never trust an AI to have free run of a whole project, and I am very anti-vibe-coding. But small code suggestions and small refactors (at the method level) sometimes work quite well (for simple things and boilerplate). So, a human always needs to be in the loop, and you cannot trust it blindly... because it will create bugs and/or get things completely wrong. Sometimes it gets them right... in my experience, 50% of the time. Commented Sep 24 at 13:54
  • 6
    "Sometimes it works quite well". And that's exactly the problem. Sometimes is not good enough. Commented Sep 24 at 13:55
  • @Cerbrus ... the same is true of humans. What is the average success rate of answers, and how often are they actually correct? I could even argue that an AI is more adept than a junior developer. But I guess the main issue is that AI is much better at fooling us into trusting that it's an expert :) . Not many humans can be completely wrong yet look right :D . Commented Sep 24 at 13:58
  • 6
    I'm not interested in repeating the same arguments that have been discussed, in depth, in here... Commented Sep 24 at 14:01
  • @Cerbrus Not much further to discuss... I made your point before you did. AI is problematic because we hold it to a higher standard than humans, but, unlike humans, it is much better at fooling us with hallucinated, expert-sounding (incorrect yet complicated and correct-looking) answers. Not much else to add. Also, people can just ask LLMs directly, so let's keep the site for human-level interactions only (i.e., you come here, you ask an expert with real knowledge). Really not much to add... so your non-interest comes exactly at the closing time of the discussion :D Commented Sep 24 at 14:04
  • 3
    There is plenty of research, not just anecdotal evidence, showing that LLMs do not reason; they perform shortcut learning and are not intelligent. It's not a higher standard than the one for humans. LLMs cannot tell you they don't know when they do not know, unlike most humans. These arguments have been mentioned before and are still very valid. Commented Sep 24 at 14:34
  • @Dr.Snoopy Actually, nobody fully understands how LLMs work inside; even researchers call them black boxes or emergent systems. The notion that LLMs are "just autocomplete" is a strawman; modern LLMs incorporate many architectural and training innovations that go beyond naive n-gram completion, including multi-layered neural networks. Though OK, many researchers do say that LLM reasoning is basically "shortcut learning", statistical association, or emergent heuristics. The only two points I fully agree with are that LLMs cannot reliably say what they don't know, and that their reasoning is fragile. Commented Sep 24 at 15:05
  • 6
    @Μenelaοs the fact that AI is better at producing incorrect yet complicated and correct-looking answers is one of the biggest problems (though hardly the only one). That combined with how easy it is to churn out massive numbers of AI-generated answers makes it much harder to moderate than low-quality human-written answers. Commented Sep 24 at 22:19
  • 2
    If people want AI answer then they can just ask AI themself, there is no need for someone else to copy paste AI answer into this site. Commented Sep 25 at 6:22
  • 1
    @Rainb you're either proposing a paradox, or you don't actually know what you're proposing. If you're saying "Given X is correct, is X incorrect?" - that's a paradox. If you're asking "If some code fulfils some arbitrary criteria, is the code wrong" - the answer can very well be "yes". But there is no one answer for the general case. In which case, your question is meaningless and seems you didn't think it through. Commented Nov 16 at 10:23
  • 1
    @Rainb Simple question, then - how do you prove an AI generated answer is correct or not. Constraints: you only know it's AI generated. Constraint 2: lots of people will post it and claim it's correct. Constraint 3: lots of people will post these answers in general. So, what is your solution for a scalable system that can handle 100+ answers a week. Commented Nov 16 at 20:07
-15

I feel the ban is good and should be permanent, because we have to remember that ChatGPT is really just a more interactive search engine: its results, whilst clever, are still the result of scraping existing web pages, just like any other search engine.

By allowing ChatGPT answers on Stack Overflow (and other sites), we'd be creating an echo chamber whereby the answers it generates are simply based on its previous answers, which may not have been right in the first place.

4
  • 19
    ChatGPT is an AI chatbot that was trained a while ago with large amounts of data from websites, books, etc. It isn't a search engine, and it doesn't scrape existing websites. It would probably be more accurate if it just copied code from a website. Commented Mar 3, 2023 at 12:31
  • 6
    @PhobosFerro Your comment contradicts itself. If it was trained with data from websites, then it is still dependent on the content of websites and will have to be updated to stay relevant. How it actually gets the new data may not be scraping, but it will still be reading its own answers. Commented Mar 3, 2023 at 12:52
  • 6
    Do you have first-hand experience using it as an interactive search engine (not a rhetorical question)? Commented Mar 3, 2023 at 22:12
  • 1
    @PeterMortensen The branded "new Bing" does that. What it does is: the user makes a search query, Microsoft runs a regular Bing search, sends the parsed results as part of a prompt to OpenAI's GPT-4, and finally it answers as an interactive chat search. Commented Mar 13, 2023 at 14:38
-17

Possible Improvement Idea - Improve how we humans upvote answers.

From reading the previous answers in this thread, it seems that much of the motivation for the bad behavior of quickly posting non-validated answers, regardless of whether they were generated by ChatGPT, is "point farming". One way that users like us can help "fix" the problem is to not upvote nice, shiny-looking answers and comments that we have not validated.

... We should only upvote answers that we have actually validated.

We should not upvote a response we haven't tested and proven to work just because it looks like the author knows what they are talking about, or just because it sounds good. That means I should first test whether the proposed answer is actually valid, and only then come back to the answer and upvote it.

5
  • 1
    Specify "validated". How would you enforce this? How would you change how people vote? Commented Dec 8, 2023 at 11:56
  • 12
    "One way we that users like us can help "fix" the problem is to not upvote answers and comments that look nice and shiny but which we have not validated." The past over a decade of this site existing has shown this is not how users vote. A more interesting looking solution often gets more upvotes than a more thorough or even more correct one. I don't see how you expect the voting for all users to change. Saying "let's do it this way instead" doesn't actually accomplish that. Commented Dec 8, 2023 at 12:12
  • What do you mean by "One way we that users" (seems incomprehensible)? Commented Dec 9, 2023 at 0:02
  • 6
    Re "first test whether the proposed answer is actually a valid answer and then come back to the answer and upvote it": That would be great. No, really. It didn't happen before ChatGPT (there are likely a lot of undetected completely bogus answers on Stack Overflow posted before December 2022. For instance, the many code dumps without any explanation whatsoever. Say, copied from other completely unrelated answers on Stack Overflow or copied from elsewhere on the Internet (say, from the official documentation)) Commented Dec 9, 2023 at 0:05
  • 5
    I fully agree that is how it ideally should be. Yet I don’t see how it practically could be. Commented Dec 9, 2023 at 6:12
-17

I'm curious how this policy can coexist with the now-public stance that Stack Overflow is going to be training AI models on content here.

I understand a lot of work and thought have gone into this policy as there were significant negative repercussions, but "AI for me, and not for thee" seems a bit too much on the nose to be sustainable.

6
  • 31
    Posting of generated content is still banned. OverflowAI is not in conflict with this, since it's not about posting anything. With that said, it's widely agreed that the company has made poor choices in what projects they pursue. Commented Jan 9, 2024 at 14:28
  • 1
    Isn’t it pretty normal that the provider has more privileges than the users? Commented Jan 9, 2024 at 14:57
  • @VLAZ I hear you; however, no one should ignore the implied judgement the company has announced: we will make good use of AI tools, you will not make good use of AI tools. Over the next several quarters we will all see what really happens, but as an observer from another site, I felt the need to speak up when, within a 2-week period, I got the news "put this banner up saying NO AI" followed by a featured post saying "COME GET AI". Commented Jan 9, 2024 at 16:28
  • @bmike again, I'm telling you the company has not been ignored. Commented Jan 9, 2024 at 16:37
  • @bmike because user input has never been the problem, right? Commented Jan 9, 2024 at 19:10
  • 7
    @bmike Except that a lot of the people posting AI content weren't making good use of it. Commented Jan 9, 2024 at 19:10
-17

Preface

In the past, when searching for the answer to some problem with your favorite search engine, you could easily find the same answer many times, even though it was obviously wrong (not working).

That was (I suspect) because many users just copied answers from elsewhere without citing where they got them from, or because of "re-framing", that is, some site just placed a new frame and some advertising around another site. Fortunately, the latter have mostly vanished in the meantime.

ChatGPT

I think ChatGPT and the like are impressive, but there are still severe problems. I had a (seemingly simple) programming problem that I couldn't solve myself by reading the documentation, so I used Google to search for solutions. I tried a few, but none worked, so I tried ChatGPT.

ChatGPT provided a "solution" that looked similar to the ones I had seen before, but it didn't actually work either. So I told it that the solution did not work, and ChatGPT provided me with another "solution" (which also did not work).

I think the above clearly shows the problem with ChatGPT: it cannot tell where it got its wisdom from, and its answers may be incorrect without ChatGPT knowing.

Any use?

Still, I could imagine (monetary and legal issues left aside) that SO integrates ChatGPT for newly asked (or about-to-be-asked) questions:

  • It could be used to find similar (equivalent) questions before a question is posted. I have seen many questions about the same problem (like "find string X in input Y") where the posters were unable to abstract the problem to its root, not even partially.
  • It could suggest an "answer" before the question is posted: if ChatGPT can provide a correct answer, there is no need to post the question on SO (perhaps because the question is rather trivial anyway).
10
  • 12
    You can already sign up for AI search. What's the point in having an AI-suggested answer if your experience suggests that it simply doesn't work? Isn't that just a waste of their time? Commented Feb 7, 2024 at 11:10
  • There's no guarantee that its answers work, but there also is no guarantee that they don't. So the OP could give it a try. Commented Feb 7, 2024 at 11:41
  • 9
    Erm, but you did give it a try and it wasted your time. Commented Feb 7, 2024 at 13:02
  • 3
    "you could easily find the same answer many times, even though it was obviously wrong" I mean... if every answer you tried that worked for someone else doesn't work for you, isn't it more likely that something is different in your environment or your implementation, rather than that what worked for n other people was also a wrong answer? Commented Feb 7, 2024 at 15:44
  • 4
    Overflow AI isn't supposed to generate new answers, it's supposed to present existing ones. So if your problem is all existing answers are solving a problem differently than you need them to... having Overflow AI find the answers for you isn't going to be any better than standard search. Commented Feb 7, 2024 at 15:49
  • 2
    The experience you describe is the exact reason ChatGPT answers are not allowed. They are almost always wrong, and you essentially can't tell if you don't have the knowledge necessary to validate them, which is almost always true for anyone using ChatGPT to submit an answer to a question they cannot author themselves Commented Feb 8, 2024 at 14:13
  • @Kevin B No, there actually exist nonsense answers that cannot work, but still people copy them. Once (in the times of MS-DOS) I knew someone who swore the command to start Turbo Pascal was exit, because when he had started a shell from Turbo Pascal, exit seemingly launched Turbo Pascal from there. OK, this is an extreme example... Commented Feb 9, 2024 at 6:50
  • I'm not quite sure where you got the impression that i don't think nonsense answers exist. What I'm saying is if every answer you find is nonsense... there's another common denominator there that could be the actual problem. Commented Feb 9, 2024 at 6:52
  • I completely disagree with the last point of this answer: since this generated answer might very well contain absolute nonsense, nothing like this should be shown. The rest of the answer is not related to the ban: we already have suggested similar questions (however crappy they are), and changing the algorithm that suggests them is out of scope here. Commented Feb 10, 2024 at 8:47
  • ChatGPT works as a highly unreliable, but sometimes useful, advisor. For example, it is good at guessing typos and SIAs (if you don't have the imagination for the guessing game) that would otherwise make something on the Internet incomprehensible. Commented Mar 3, 2024 at 0:18
-19

The more tricky question is: should paraphrases of GPT-whatsit-generated verbiage be banned?

2
  • 19
    Yes, just like with plagiarism, direct paraphrasing of AI-generated content isn't permitted. We've detected and appropriately handled a substantial number of posts and users where there was considerable effort made to hide/obfuscate that the post content was AI generated. Is detection of such posts perfect? Of course not. However, moderators are likely going to be less lenient with a user who has clearly demonstrated, by trying to hide/obfuscate, that they know they are doing something that's against the rules, and who at the same time caused moderators and users doing curation to spend more time. Commented Mar 29, 2023 at 17:43
  • 4
    whatsit: "(chiefly UK) A thing (used in a vague way to refer to something whose name one cannot recall, or that one is embarrassed to say)" Commented Mar 29, 2023 at 18:14
-19

I can totally understand that AI-generated answers bring up potential risks in terms of correctness and useful content, since, especially in the universe of developers, AI answers are very often not only wrong but also fail to show best practices.

The view changes for me when it comes to the notion of banning AI from "enhancing" posts. Using AIs on content that is the intellectual work of individuals could potentially improve the vocabulary and grammar of many questions and answers on Stack Overflow. So it would in my opinion make more sense to encourage people to prepare questions/answers and refine them with AIs, rather than risk negative feedback due to misunderstandings or poor phrasing.

6
  • 9
    The ban is a blanket ban because it is not feasible to distinguish generated from "enhanced" content. From that standpoint it simply didn’t matter how good the good case is, as long as the bad case is bad enough. Commented Jan 2, 2024 at 15:27
  • 2
    If you have a proposal for how we can tell "enhanced" posts from generated ones, with minimal human labor, then add that to your answer; I'm sure the community would be interested to hear it. Until we have such a mechanism, we have to treat both the same. Commented Jan 3, 2024 at 0:29
  • Well, I think it is less "a thing" that needs control; it is more important to raise awareness of how to use such tools. Using AIs can also have a positive impact and is not only a risk factor, as long as people are aware of how to use them. I think Stack Overflow, as the worldwide leading platform for the development industry, has an obligation to act as a role model: banning in general terms and "marking" AIs as evil is easy (with questionable outcome), but the more courageous approach would be to explain how such tools could be integrated into workflows and the community. Commented Jan 3, 2024 at 0:54
  • 2
    "could potentially improve the vocabulary and grammar" there are already tools out there for this which don't use generative AI... Commented Jan 3, 2024 at 6:47
  • 3
    @EmadEasa Feel free to explain or explore that in an appropriate format, perhaps a separate meta-Q&A. This Q&A is about the ban itself and not the appropriate place to raise awareness for what you have in mind. Commented Jan 3, 2024 at 9:16
  • Stack Overflow's purpose is to serve as a repository of high quality Q&A. I'm not sure why you think we're obligated to serve as an AI advocate on top of that, especially when doing so would compromise our actual purpose. Commented Jan 4, 2024 at 23:10
-23

I do think that having a Markdown editor is a coder's way of contributing documentation-like content, but I do feel that people, in general, have started to act and behave negatively when they see an answer that is well-structured, assuming it was generated by AI. I seriously understand the business logic of a site that bases all its activity on how many responses it receives (even if that fluctuates sometimes), but allowing people to discredit knowledge sharing should also harm the business of Stack Overflow.

7
  • 2
    If you're referring to your most recent question being closed... that had little to nothing to do with AI... it's not a question... It's an article. There's no question for others to answer. Commented Mar 17 at 17:00
  • Would it have been different if it was part of the answer? Commented Mar 17 at 17:12
  • 1
    If you presented your own answer, rather than a guide from chatgpt, yes Commented Mar 17 at 17:12
  • 1
    I don't agree, also test it before you say it's irrelevant. It's a closed circle where people who vote don't even test things anymore. I had written that it was just the formatting created by it, but the tendency is to use irrelevant arguments for things not related to the content Commented Mar 17 at 17:17
  • 7
    "also test it before you say it's irrelevant" Please re-read both the question you are answering here (which states why testing AI-generated answers is infeasible), and the comment to which you are responding (which does not say anything about relevance). "I had written that it was just the formatting created by it, but the tendency is to use irrelevant arguments for things not related to the content" The problem is that LLMs, when asked to format something, often change its meaning. If you wrote it, then simply post what you wrote, and you'll be fine. Commented Mar 17 at 21:08
  • Thanks for the response Ryan. Isn't it infeasible to point out that I should stick to the discussion point made and the question of origin, considering all the responses here? I wanted to reiterate that I understand the business logic behind encouraging activity and engagement on the platform, but I worry that allowing a culture of negativity toward well-structured or AI-assisted answers could harm the community’s ability to share knowledge effectively. One possibility is to force revising, but as I've seen also from my last post, people visit a post, and that's it. Commented Mar 17 at 23:06
  • 5
    @MarinNikolli how well-structured AI answers are, is completely irrelevant. The problem is that AI output can not be trusted to be factually correct. Commented Mar 18 at 14:22
-24

I, personally, think that there's not much we, as a community, can do to stop AI-generated content from being used on the site in the long term. Furthermore, I think that the long-term is what we should be focusing on.

The future of AI in a different timeline

The current banhammer stopgap may work for now, but it's a simple solution requiring manpower, and it won't be viable forever. Who here remembers when Dragon Naturally Speaking came out on CD in the 90s? I was 8 or 9 when my grandfather got us a copy. We had an old beige microphone, and my friends and family were absolutely amazed at the speech recognition capability. They could dictate documents directly into Microsoft Works, and I could.... Well... Do what 9-year-olds do best: See how many curse words it could recognize... Even if you had to enunciate them.

Fast forward a couple of years and the same software could recognize multiple individuals in a single conversation and create a transcript including respective speakers. Fast forward a few more years and Microsoft has added "Train your computer to better understand you"1, so enunciation is no longer an obstacle.

That 10-year timespan I just covered would be easy to react to as a community. The problem is AI doesn't take 10 years, it takes 10 hours2. It's easy to spot right now (question rephrased to statement followed by bullet points for your vanilla ChatGPT), but what happens when we get the option to "Train your computer to better type like you"? The Insider Build of Windows 11 (Dev Channel) currently has a Copilot preview that can access your active tab3. I'm certain Microsoft Office access will be next, so it's only a few steps away at most, in my opinion.

My opinion moving forward

My thought is this... The banhammer on AI-generated content that isn't cited should continue. However, we should move towards a system that accepts it, so long as it's clearly marked/cited as machine generated4 and there should rarely, if ever, be a top answer marked that is wholly AI-generated. As we move towards that system, the 30-day bans need to turn into 90-day bans. If we incorporate automated detection at some point, it should be human-reviewed, and 90 days should turn into 180 days. I do not see this as too harsh of a punishment so long as users have been explicitly warned.

I believe that if history (as a whole) has shown us anything, it's that we cannot simply ignore a problem, sweep it under the rug, and expect it to go away. Band-aids are temporary, but AI is not. Humans have suppressed so many different things over the years... Catholics persecuted Presbyterians as heretics in the Middle Ages, Americans in the land of the free bought and sold black slaves to tend their households and farmlands, women were expected to quietly obey their husbands without a voice to vote until the 1900s, and Jews... Well, Jews have gotten the shaft since the dawn of time.

I'm not saying that AI deserves citizenship or human rights[5]. But AI will inevitably become widespread and integrated into our lives, and we need to treat it that way.

TLDR:

We should standardize a boilerplate for AI content and require anyone incorporating it into their content to use it. We should continue the 30-day ban stopgap until a point where it's no longer needed. We should work towards a way of automating the recognition of AI-generated content that isn't cited, always require human review of this automation's output to reduce false positives, and implement very steep consequences for those who violate the rules.

Again, all of this is just my opinion. I've been a lurker for 7 years, but as you can see from my reputation, I haven't been active until this past week (I've started to enjoy teaching and tutoring in IT). So, in the grand scheme of things, my 2 cents is literally just that, 2 cents. But I hope that at least a few of you will share somewhat similar opinions. Thanks for your time :)

Clarification on long-term viability and manpower

I do not necessarily think there will be a need to patrol AI-generated content forever. At least, I hope there isn't... However, dumping everything into implementing 30-day bans as stopgaps can't be the answer. Because if we're relying on humans to recognize AI-generated content, we will fail no matter what. I'm not necessarily saying we should absolutely do anything specific. I'm only saying that we need to look much further ahead than much of the discussion going on here.

I do not know what technology drives Stack Exchange on the backend. But the technology has to move forward to account for AI content. Whether you call this a forum, a wiki, an image board, a social media network, or anything else... the technology underneath needs to move forward. To stress this point again: if we're relying on humans to recognize AI-generated content, we're going to fail no matter what.

What's stopping users from simply not adding the boilerplate?

As Stack Overflow is built on a foundation of user trust, there's nothing that can be done to prevent this except moderating content. Though, in my opinion, adding the boilerplate at least lets the userbase know it's expected. I found out via a comment warning when skimming through questions.

But AI content can't reliably be detected?

I'm not asking for someone to look into this or share information related to site analytics or any other privileged information. However, I would postulate that greater than 65% of users who have been banned for posting AI-generated content are newer users trying to boost their reputation. Furthermore, I would estimate that greater than 80% of those are using a vanilla ChatGPT-based AI, of which greater than 95% are "non-precise" style (this high percentage is based on the difficulty of getting reliable output to open-ended questions).

I want to point out an assumption in my argument. I'm looking at this from the point of view that if you know enough about AI to use anything outside of what's available on the mainstream channels, you likely have the experience to answer the questions without using AI content or to examine the content for accuracy before posting it. As mentioned in the previous section, I am basing this assumption off the foundation of trust.

If those numbers are remotely close, then the additional review queue should not require much additional overhead to patrol a large portion of violations. I can quite easily imagine a natural language string analysis algorithm combined with a user event timing algorithm that could pick out a relatively high percentage of violations. I'm sure there are many users way more talented than myself who could imagine the same in a far more optimized and efficient way.
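As a purely illustrative sketch of the combination described above (the phrase list, weights, and thresholds here are all invented assumptions, not a real detector), a "natural language string analysis plus event timing" heuristic might look like:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the phrase list, weights, and thresholds
# below are made up for this sketch and do not reflect any real mechanism.

# Stock phrases that vanilla LLM output often contains (assumed list).
SUSPECT_PHRASES = [
    "as an ai language model",
    "certainly! here is",
    "i hope this helps!",
    "it's important to note that",
]

@dataclass
class PostEvent:
    text: str
    seconds_since_question: float  # timing signal: how fast the answer appeared
    answer_length: int             # characters in the posted answer

def phrase_score(text: str) -> float:
    """String analysis: fraction of known stock phrases present in the post."""
    lowered = text.lower()
    hits = sum(1 for phrase in SUSPECT_PHRASES if phrase in lowered)
    return hits / len(SUSPECT_PHRASES)

def timing_score(event: PostEvent, chars_per_second: float = 5.0) -> float:
    """Event timing: 1.0 if the post appeared faster than a human could type it."""
    plausible_typing_time = event.answer_length / chars_per_second
    return 1.0 if event.seconds_since_question < plausible_typing_time else 0.0

def flag_for_review(event: PostEvent, threshold: float = 0.5) -> bool:
    """Combine both weak signals; a human reviewer still makes the final call."""
    combined = 0.6 * phrase_score(event.text) + 0.4 * timing_score(event)
    return combined >= threshold
```

A long answer posted seconds after the question, full of stock phrasing, would score high on both signals; a short, plainly worded answer posted ten minutes later would not be flagged.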

An alternative

If patrolling content and review queues are too far-fetched, then maybe instead of targeting users, we should target posts (I thought this was mentioned in one of the comment threads, but I couldn't find it on a second look). Automated boilerplate addition to user posts if they are flagged as AI generated by an algorithm. A certain reputation level allows an individual to remove the boilerplate. A higher reputation level will automatically bypass the check on their posts.


If policing content isn't the answer, then decriminalization and regulation is.


[1] There was a step in between, where Microsoft let you correct its understanding with input rather than via prompt; See This MiPad Research Article

[2] Keep in mind that AI can't just pick up any task and do it quicker and better. Take for instance the following study in which it took an AI 924 hours to learn a game that would take humans less than 30 minutes:

  • Lake, Brenden & Ullman, Tomer & Tenenbaum, Joshua & Gershman, Samuel. (2016). Building Machines That Learn and Think Like People. CBMM Memo No. 046.

[3] Verified the knowledge is public before posting, See This Public Blog Post on Windows.com

[4] We should probably require citing the exact AI that generated it. I foresee that as an upcoming IP battle in the near future. Plus, as always, there are going to be biases on AI sources that could affect the community as a whole if we don't.

[5] Yet... But I can't convince myself to rule that out at some point in the future, the human brain is only so complex, and technology is surely, albeit slowly, getting there.

8
  • 6
    "We should standardize a boilerplate for AI content and require anyone incorporating it into their content to use it." and...if somebody doesn't? We'd still need the manpower that the start of your post claims is not viable forever. So...you want AI generated output to be embraced but we should still pour the same amount of effort into detecting "illegitimate" usage of AI generated content? Commented Aug 27, 2023 at 12:11
  • 21
    As with many other proposals to integrate/accommodate GenAI answers, I'm still missing the "why". What's the point of building a repository of things we can just generate? If GenAI reaches a point at which it can reliably generate answers, then SO is obsolete and we - the users - can do better things with our time than nanny some algorithm that far outscales us anyway. Commented Aug 27, 2023 at 15:14
  • @VLAZ Edited for clarification of my stance. Commented Aug 27, 2023 at 15:25
  • 8
    Nothing you suggest is feasible. AI content can't reliably be detected, and yet another review queue isn't a solution either. Also, what's to prevent users from just not adding the "This is AI-generated" boilerplate? Commented Aug 27, 2023 at 18:19
  • @Cerbrus Updated answer to address your concerns, including an alternative to policing content. Commented Aug 28, 2023 at 0:13
  • Also, thank you to both of you who edited my answer. I am terrible with grammar and consistency in voice. I also didn't know that blockquotes were considered abusive when used for formatting. I always read through edits and try to learn from them, I really appreciate it. Commented Aug 28, 2023 at 0:15
  • 1
    I don't think your edits answer the questions above :( Commented Aug 28, 2023 at 0:43
  • 2
    Your edits don't answer my concerns... At all... Commented Aug 28, 2023 at 7:50
-25

In your reasoning for the general ban on "All use of generative AI", you have not explained how it could be "harmful" to the website if a user verifies an AI-generated answer for correctness before posting it (for example by testing the code), or partially uses it in reaching their own correct answer.

In other words, if the problem is the curation and moderation of bad answers, why limit the ways good answers can be generated? With the AI-generated content becoming more accurate every day, (a) even human experts are more likely to provide better and quicker answers if they use AI for assistance, (b) it will be increasingly hard to detect AI use in part or all of an answer, and (c) it will be easier for this platform to actually use AI to find and flag potentially bad/incorrect answers, whether generated by humans or AI, for human experts to review.

Therefore it seems more logical to help the "swamped volunteer-based quality curation infrastructure" by actually using AI to prioritize the reviewed content, and to utilize other hard solutions such as limiting the posting frequency, rather than announcing general bans that could end up being hypocritical and impossible to enforce.

16
  • 15
    There is no logic here to counter. The ban makes it clear it is a blanket ban because AI generated answers look correct no matter if they are. There is no harm assigned to verified answers, and the magical tooling by which you want verified and unverified answers to be separated simply does not exist. Commented Feb 10, 2024 at 5:03
  • 11
    Do you understand that LLMs do not and will not understand meaning of the text? And if you do, how you imagine "actually use AI to find and flag potentially bad/incorrect answers"? Commented Feb 10, 2024 at 8:38
  • @MisterMiyagi, AI generated answers "look correct" and are a lot of times correct. Of course not always, but that's true for human answers too. That's why you verify them! "There is no harm assigned to verified answers," There is a ban placed on them presumably because of a harm assigned to them. And there is no "magical tooling" that claim to exist. Commented Feb 14, 2024 at 23:41
  • 10
    The problem with "verify them" is that the amount of time, effort, and expertise it takes to verify an answer is often multiple orders of magnitude greater than the 10 seconds the poster spent blindly copy-pasting the question into ChatGPT. It's not sustainable to spend that much more effort removing bad content than it takes to create it, particularly when there are more people creating it than trying to remove it. Commented Feb 14, 2024 at 23:55
  • @markalex Do you understand what it means to "understand", before making any claim about LLMs' understanding? And does it matter if a calculator "understands" numbers, as long it produces correct answers? Depending on the model quality and task at hand, LLMs can be very likely to produce correct and useful outputs (so a blanket ban on them is nonsensical) and they can probabilistically classify other answers to be true or false. This can be used to increase the speed and accuracy of human validators, not to replace them. Commented Feb 14, 2024 at 23:57
  • 7
    @FNia Please cite sources for your claims (especially the claim that LLMs can probabilistically classify the truth of answers). Our overwhelming experience is that current LLM technology cannot reliably do any of those things, at least within the domain to which it's being applied here. If we enter a world in which the way people use LLMs does not generate large quantities of plausible-looking incorrect information, then we may reconsider the ban. That is not the world we live in, however; thus, the ban. Commented Feb 15, 2024 at 0:03
  • @RyanM The verification responsibility is on the poster. Yes, definitely ban the "10-second blind copy-pasted answers from ChatGPT", but that's not what this ban says. It simply doesn't care if you spent 10 seconds to paste it, or spent minutes or hours to verify and edit the answer before posting it. It bans all cases, which doesn't make sense. Commented Feb 15, 2024 at 0:08
  • 6
    Could you please explain how to tell whether a poster did verify it? (without verifying it ourselves; see my previous comment for why that doesn't work) Plenty of people claim to have verified their AI-generated answers, and yet their answers are still wrong. Commented Feb 15, 2024 at 0:12
  • There is no way to make sure the poster did verify it, and there is no way to make sure LLMs have not been used in part or all of the answer. You can only verify the answer itself for correctness, and LLMs can help with that. There are many studies on the use of LLMs for fact checking. It's definitely a new and evolving field, but the general understanding seems to be that "the best defense against [LLM] is [LLM] itself". I'll put some citations below. Commented Feb 15, 2024 at 1:01
  • 2
    - Defending Against Neural Fake News: arxiv.org/abs/1905.12616 - FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs: arxiv.org/html/2402.05904v1 - Are Large Language Models Good Fact Checkers: A Preliminary Study: arxiv.org/abs/2311.17355 - The perils and promises of fact-checking with large language models: frontiersin.org/articles/10.3389/frai.2024.1341697/full - (Probabilistic classification:) Generative AI Text Classification using Ensemble LLM Approaches: arxiv.org/pdf/2309.07755.pdf Commented Feb 15, 2024 at 1:16
  • 5
    @FNia "These are of course not ready solutions to deploy on this platform" Exactly. The tooling required for the proposed policy change does not practically exist - for all intents and purposes, they are functionally magic in any policy. Mind, that does not mean it cannot exist in the future, but it means any policy that relies on it cannot work today. SO does not exist to provide the focus for future AI research, it exists to be SO today. Mind, AI research is well able to set their own focus and not affected by the ban here. Commented Feb 15, 2024 at 5:13
  • 1
    @FNia, if or when tooling to factually check answers exists, that will very much be a reason to reconsider this policy (and I'd personally say reconsider the existence of SO in its current form). Until then, in light of extreme abuse by generated answers threatening the usability of the site, this ban is deemed to be the only viable solution. Commented Feb 15, 2024 at 8:04
  • 5
    If you have a proposal for how to work around the mentioned abuse with technologies that exist right now, please share it: many people would like to hear it. But be warned, simply saying "community should go and verify every answer" wouldn't cut it. Commented Feb 15, 2024 at 8:08
  • 1
    Bogus answers (for example, likely most code dumps (answers without any explanation whatsoever)) and plagiarised answers were already a problem before ChatGPT. ChatGPT made it orders of magnitude worse. Commented Mar 3, 2024 at 0:28
  • 1
    We don’t have to show a harm to the community. Users themselves have proven time and time again that they will either plagiarize LLM content or not validate the content generated by LLM, resulting in additional work by the community to remove content that otherwise would never be acceptable. In the hours a user could spend to validate LLM garbage code they could just generate their own code to answer a question Commented Apr 16 at 0:51
-26

On the flipside?

I would like to argue that there is one class of AI writing that should be explicitly allowed.

Specifically, when an AI like ChatGPT gets a series of questions for which it cannot produce adequate answers, the AI should produce a composite question about it. This should also include statements of why it doesn't consider its own answers adequate. This should then be asked here, as an official post from the AI (as in the user might be "ChatGPT" or the like).

The point here is to increase the corpus of knowledge.


Having said all that, I do believe that answers or questions pasted from ChatGPT or the like should be banned.

20
  • 8
    And how exactly would an AI chatbot know it doesn't know something? Or why it doesn't know it? Often enough it just hallucinates an answer. The way to figure out it doesn't work is for an expert to have a look at it. If the expert finds the answer is nonsense, then they can write a question instead. Although, with that said - just because an AI tool is unable to properly answer a question, it does not mean the question is unique and never before seen. Commented Jan 29, 2024 at 18:34
  • None of this seems suitable for SO with the current quality of "understanding" provided by any of the models. What you describe sounds like a human-powered correction mechanism for models, and I don't see why this should be part of SO or the wider SE network. Commented Jan 29, 2024 at 18:47
  • 2
    Ironically that's what OverflowAI search is designed to do, isn't it? Find/summarize existing answers, and if there are none help you generate a question. Commented Jan 29, 2024 at 19:51
  • @VLAZ it would obviously require the AI to have a fair bit of self-monitoring included in it. It may or may not be able to do that at this point. In any case, it's not for StackExchange / SO to do, but for the AI writers. The only part for StackExchange / SO is allowing the relevant user to exist and ask questions. Commented Jan 29, 2024 at 22:32
  • Self-monitoring? you mean using an LLM to determine if an LLM is accurate? Commented Jan 29, 2024 at 22:33
  • 5
    @DavidG. There's no "may or may not". LLMs do not have any idea of the concept of "truth". They're utterly clueless. What you suggest is beyond impossible, as LLMs are simply not capable of interpreting "truth". Commented Jan 30, 2024 at 8:17
  • 3
    So, what you're suggesting doesn't match up with the current reality. You want AI generated questions to be allowed conditioned on the AI tool working not at all how it works as of today. You're free to suggest to all AI tool makers they should change their products to be able to do the introspection you expect of them. Once that is done, we can discuss changing what we do and do not allow for AI generated questions. Commented Jan 30, 2024 at 8:21
  • @Cerbrus Completely wrong. AIs do have an idea of "truth". With SO as a source, an accepted answer is more likely "truth" than a non-accepted one. An answer from a high status poster is more likely "truth" than from a low status poster. I suspect this is already used. Having said that, it probably doesn't know truth in the real world, so it shouldn't be trained on an RPG corpus to answer medical questions. Commented Jan 30, 2024 at 16:51
  • 2
    i mean, no, lol, there's a reason accepted answers are no longer pinned to the top. Commented Jan 30, 2024 at 17:00
  • 2
    I mean, no, lol, there's plenty of highly scored answers that are wrong. Commented Jan 30, 2024 at 17:03
  • 1
    The point is there is no idea of truth that can be trusted to just be correct; even for human-provided content, the viewer is responsible for determining that. Having the AI rely on metrics humans provided in the past, but no longer provide because the AI replaced the process that produced that feedback, will go stale immediately. Commented Jan 30, 2024 at 17:06
  • 3
    An AI summary of an answer is an AI answer. Commented Jan 30, 2024 at 17:16
  • 5
    @DavidG. No. LLMs don't have a concept of truth. Don't strawman me with theories about AI in general. LLMs like ChatGPT don't comprehend. They don't understand, they don't interpret. There is literally no concept of technical accuracy in their process. That's the entire problem with LLMs! They can't know if what they generate is true or not. That's simply not part of how they work. [1/2] Commented Jan 30, 2024 at 22:13
  • 4
    @DavidG. LLMs like ChatGPT don't "do" "source". They can't weigh different training data differently unless explicitly trained to do so. It would make no sense for a generic LLM like ChatGPT to put excessive weight on SO sources, as that would result in lower quality output when generating stories. It doesn't have live access to data. It doesn't know who posted what. It doesn't know anything about scores. You're making so many incorrect assumptions about how LLMs work... Please look into how they generate output. [2/2] Commented Jan 30, 2024 at 22:16
  • 2
    "AI" solutions that are capable of citing sources aren't working like your standard LLM prompt. They start with a different kind of AI, not an LLM, to perform a more typical search before sending the results into an LLM as a prompt to generate the response. It'd be incorrect to claim the LLM is sourcing it's data, since the data that is allowing it to do the work it's doing isn't just the content in the prompt that the initial search found. Commented Jan 30, 2024 at 22:20
-28

It's just a suggestion. People will keep posting ChatGPT answers anyway. The problem now is that a user can't really tell them apart from human answers.

A solution I would suggest, which is already mentioned here but with a difference, is to proactively post an answer from ChatGPT (or other models), but intentionally rank it lower and clearly banner it as not yet checked by a human. Then a person can confirm, edit, or reject the answer, which will change its rating ("this ChatGPT answer has been reviewed by X and rated as correct").

If an open-source model is used, then Stack Overflow has even more data to train their own model, which will beat all other AI models...
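A toy sketch of the proposed flow (the penalty value, field names, and verdict labels are assumptions for illustration only, not an existing mechanism):

```python
# Hypothetical sketch: AI-drafted answers carry a "not yet human-reviewed"
# state and a ranking penalty until a person confirms, edits, or rejects them.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    ai_generated: bool = False
    reviewed_by: Optional[str] = None
    verdict: Optional[str] = None  # "correct", "edited", or "rejected"
    votes: int = 0

def display_rank(answer: Answer) -> float:
    """Unreviewed AI answers rank below everything else with a fixed penalty."""
    penalty = -1000 if answer.ai_generated and answer.reviewed_by is None else 0
    return answer.votes + penalty

def review(answer: Answer, reviewer: str, verdict: str) -> Answer:
    """A human confirms, edits, or rejects the AI draft, lifting the penalty."""
    answer.reviewed_by = reviewer
    answer.verdict = verdict
    return answer
```

Once reviewed, the answer competes on votes alone, and the reviewer's name can be shown in the banner as the proposal suggests.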

19
  • 19
    I think you're massively underestimating the effort that goes into training this kind of model... Also, why does the generated answer need to be on SO? Why can't a user that wants an AI-generated answer just go to said AI? Commented Sep 19, 2023 at 13:52
  • 4
    If the AI reliably produces correct answers - why build a library of them at all? If the AI does not reliably produce correct answers - how many and which answers should be posted? Critically, how should volunteers deal with the required massive content volume when we already have too few people for the fewer human generated answers? Commented Sep 19, 2023 at 19:52
  • Re "not yet checked by a human": This doesn't (and didn't) even happen with human-generated answers to any significant extend. Why would anyone spend time checking a code dump answer (no explanation whatsoever)? Yes, that is a rhetorical question. A code dump answer may be completely bogus or brilliant. It is difficult to judge unless significant time is spend to actual run and test the code. Commented Sep 20, 2023 at 21:03
  • cont' - Sample code dump answer, 1. Sample code dump answer, 2. Essentially a sample code dump answer, 3. Commented Sep 20, 2023 at 21:12
  • I must admit I do not actually dislike this idea as much as most other suggestions at face-value. I mean it's not what I would like, but if a compromise had to be struck, in the event where we cannot absolutely reliably ban ai generated answers, I would choose a somewhat "site-approved" or even better, "site-trained" ai answer generator. Commented Sep 21, 2023 at 0:34
  • Like let's see how good exactly can we train a model exclusively on SO content to answer duplicates. Commented Sep 21, 2023 at 0:34
  • @Cerbrus, the second question in your comment isn’t as relevant as it may seem. You ask why an asker can’t just go to the AI, while of course an asker can, and is not prevented from doing so regardless. Just as the asker could RTFM or get an answer elsewhere. It’s however about answerers using AI, who do that regardless of what the asker could do. Commented Sep 27, 2023 at 12:35
  • 3
    @Xartec Rowanto is arguing that SO should implement GPT to automate answers. That's what my "Why" is asking about. Why do we need to invest time in implementing generated crappy answers, when users can get that nonsense at the source, instead? Commented Sep 27, 2023 at 13:13
  • @Xartec "It’s however about answerers using AI" To the contrary. This answer is about SO itself using AI, which then - somehow - removes the issue/occurrence of answerers using AI and - somehow - offers a benefit to askers. Commented Sep 27, 2023 at 13:24
  • @MisterMiyagi that’s what his proposed solution is about. His answer is a solution addressing the problem as laid out in the second sentence. Commented Sep 27, 2023 at 20:57
  • @Cerberus “Why do we need to invest time in implementing generated crappy answers, when users can get that nonsense at the source, instead?” This, again, relies solely on the false idea that AI-generated answers are crappy and nonsense. Which is like building a house on quicksand, as ChatGPT would say. The quality of the answers can be heavily improved by prompt engineering, which would discourage answerers from using the same AI to produce subpar results. It would be a sensible way to introduce AI, an inevitable situation. Commented Sep 27, 2023 at 21:03
  • 3
    @Xartec they've already tried that, it failed spectacularly. Many other sites are similarly trying this, such as quora, and are having similarly questionable results. Why must SO include AI generated answers? What value would that provide? Surely the best person to have a conversation with a chatbot and weed out poor results is the person with the problem being solved, not some rep hunter on SO or SO itself. Commented Sep 27, 2023 at 21:06
  • 2
    @Xartec Yes, and we've been routinely deleting said answers for nearly a year now. Yes, we haven't gotten all of them, but we've gotten enough, and enough accounts are actioned against to keep the problem at bay. Your assertion that we cannot discern whether or not content was written by AI is quite provably false. Commented Sep 27, 2023 at 21:19
  • 2
    There is no magic prompt engineering masterpiece that will suddenly make any current or near release version of gpt capable of producing answers to new questions. At best it can summarize an existing answer to an already answered question, however even that becomes more likely to hallucinate than to provide an accurate answer the more you allow it to modify the content it is summarizing to fit the person looking for an answer. The user would be better off just getting the existing answer. Commented Sep 27, 2023 at 21:34
  • 2
    "The problem is now that a user can't really differentiate them." - It's trivial to identify ChatGPT generated content. The only way to handle ChatGPT content is to ban it. Commented Nov 1, 2023 at 13:44
-28

I fully respect the rules of Stack Overflow, but I want to highlight how technology, including language models like LLMs, plays a crucial role in the empowerment of people with disabilities. The goal of rights and regulations should be to promote a positive evolution of society. As a proof of transparency, wouldn't it be better to specify that a response has been assisted by an LLM?

Could we not offer an honest disclosure prompt for everyone with a "dys-" disorder (dyslexia, dysphasia, etc.), for example?


Thank you all for your comments. For those who don't grasp how generative AI offers a significant advantage in compensating for language disorders: if, in the future, the evolution of artificial intelligence no longer allows a site like Stack Overflow to tell the difference, then in my opinion it is necessary to accompany that transition and make it as transparent as possible.

It is obvious that many developers use generative algorithms without saying so, and with their improvement, it is quite possible that it will be difficult to detect them. Note that the fact that you have used an artificial intelligence is important data for future algorithms that will use the data generated by Stack Overflow.

Moreover, in language disorders for example, the contribution of artificial intelligence, with tools like generative AI, allows for a great improvement in language quality. It should be noted that many people in the world of science have this type of disorder, from Moses to Thomas Edison to Leonardo da Vinci, Einstein.

For this reason, I think it is not good to prohibit the use of generative artificial intelligence, particularly for disability and for the future evolution of this type of technology, and that it is more judicious to mention it. I understand that some people will never do it, but ethics is an individual question. Should everyone be punished for the fault of some and not others?

12
  • 8
    How can we trust users to add that "This is AI-generated" disclaimer to their answers? We can't. This has been suggested a lot, already, and frankly, just isn't reliable. Commented Mar 24, 2024 at 9:12
  • 9
    Yes, it would be better. But people don't do it. We could have a lot of nice things if people wouldn't sabotage things... Commented Mar 24, 2024 at 9:17
  • I understand your concerns regarding the reliability of AI assistance disclosures. However, with technological advancements, it will become increasingly difficult to differentiate between contributions generated by AI and those created by humans. This is also an issue of accessibility for some people. Commented Mar 24, 2024 at 9:38
  • 6
    AI has a distinguishable writing style, one that's not very common in how you'd normally answer on SO. That said, if an answer is of such (writing and factual) quality that it is indistinguishable from a completely human-written answer, there's not much of a problem. Commented Mar 24, 2024 at 12:45
  • 13
    I don't doubt that ChatGPT is helpful to some people with disabilities, but that doesn't change the fact that the answers it generates aren't reliable enough to be allowed on the site. Commented Mar 24, 2024 at 12:56
  • 9
    We understand that you think allowing generative AI is useful. However, there are many issues with generative AI that aren't solved yet, such as its factual inaccuracy. A lot of these concerns have been discussed in-depth on answers on this question, so I'd strongly suggest you read through that first. Your suggestion is far from new, but nobody came up with a viable way to make it workable, yet. Commented Mar 24, 2024 at 13:37
  • 5
    "It should be noted that many people in the world of science have this type of disorder, from Moses to Thomas Edison to Leonardo da Vinci, Einstein." Flat what. Commented Mar 24, 2024 at 13:40
  • 1
    Sorry, @MisterMiyagi, I don't understand where you're going with this? Commented Mar 24, 2024 at 13:44
  • 3
    @gregoryl That's fitting, because I didn't understand where you were going with this. My comment was supposed to express that with quoting the most (but not only) outlandish and tangential part of the content you recently added. I'm literally unable to put my confusion into words, hence a flat what. Commented Mar 24, 2024 at 14:12
  • 1
    OK :-), I understand, but please realize that for me, it's complicated to explain without going into too much detail, a topic that you seem not to be familiar with, regarding health and the impact that AI can have on this issue. Commented Mar 24, 2024 at 14:21
  • 14
    @gregoryl Point is, your new content isn't going into detail but into meaningless (and questionable) tangents. You aren't actually explaining anything this way. If this is what AI writes for you, this is a good sign you should not use it. True, it may be difficult for some of us to express ourselves, but do not be fooled into thinking AI expresses yourself better than you do. It just seems fancier. Commented Mar 24, 2024 at 14:37
  • 8
    @gregoryl MisterMiyagi is trying to explain to you why this answer is poorly received. From "our" perspective, it looks like you're the one who doesn't want to hear that, do you understand that? Your suggestion is clear, but how that would work, with the issues LLMs have in mind, is something you need to explain. It's irrelevant what historical figures might have had some disability. Again, please read through the other suggestions on here. It's a lot, but that also means there's a lot of research and debate that has already gone into this. Commented Mar 24, 2024 at 15:11
-30

Instead of banning ChatGPT, Stack Overflow could consider adding a new button "Show AI Generated Answer(s)" to each question page. Such answers should be hidden by default, but if anybody is curious about what ChatGPT (or any other future AI tool of the day) has to say about it, they can check it out.

Also, such AI-generated answers could be compared to the "human"-generated answers with another AI tool to compute a similarity score, and if a "human"-generated answer is too similar to an AI-generated one, then such an answer should be banned (since the probability that such a "human" answer has actually been produced using an AI tool would be high) and the user who posted it could be given some penalty.

So, this approach would kill two birds with one stone - people would be much less likely to post AI-generated answers since such answers would already be there automatically, and Stack Overflow would get an automated tool for detecting AI-generated answers and for punishing users who abuse the "no AI-generated answers" policy.
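The similarity check proposed here doesn't strictly need "another AI tool" to be prototyped; even a toy lexical measure illustrates the mechanism. Below is a minimal sketch in JavaScript using Jaccard similarity over word sets. The `looksLikeAICopy` helper and its 0.8 threshold are assumptions for illustration only, not anything Stack Overflow implements; a production system would more likely compare embeddings.

```javascript
// Jaccard similarity between two answers: |A ∩ B| / |A ∪ B| over
// their lowercase word sets. A toy stand-in for a real comparison model.
function wordSet(text) {
  return new Set(text.toLowerCase().match(/[a-z0-9']+/g) ?? []);
}

function jaccardSimilarity(a, b) {
  const setA = wordSet(a);
  const setB = wordSet(b);
  if (setA.size === 0 && setB.size === 0) return 1;
  let shared = 0;
  for (const word of setA) {
    if (setB.has(word)) shared++;
  }
  return shared / (setA.size + setB.size - shared);
}

// Hypothetical policy check: flag a "human" answer that overlaps
// heavily with the stored AI-generated answer. The 0.8 threshold
// is an arbitrary assumption.
const AI_SIMILARITY_THRESHOLD = 0.8;

function looksLikeAICopy(humanAnswer, aiAnswer) {
  return jaccardSimilarity(humanAnswer, aiAnswer) >= AI_SIMILARITY_THRESHOLD;
}
```

Note that a lexical overlap score like this is trivially defeated by light paraphrasing, which is one reason automated detection is unlikely to carry the weight of the policy on its own.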

6
  • 13
    No? People aren't going to go through the rigor of ensuring that the answer they see or read is comprehensible or useful for their use case. We already have cases of people copying code from Stack Overflow and putting it into production. This doesn't make that better; it actually makes it orders of magnitude worse. Commented Feb 1, 2023 at 21:47
  • 12
    People already barely follow instructions like How to Ask and How to Answer. The problem we see with lots of AI-generated answers is that the people who post them do so blindly, without verifying that it answers the question or testing for bugs. I doubt asking these low-effort posters to put in more effort to self-identify (even though it's barely any effort to check a box) is going to have the effect you desire. Until such time as AI-generated answers are reliably correct, if people are interested in what the AI has to say, they can go talk to it at the AI's website. Commented Feb 2, 2023 at 0:08
  • I don't think this is a bad idea. You cannot stop anyone from looking for AI-generated answers. The point is not to be vindictive about AI-generated responses but to let others decide if those are accurate or even helpful. In my experience, apart from answering basic, pre-2021 questions, ChatGPT has not provided any relevance. This data can also be useful in the future to train better models. Commented Feb 7, 2023 at 22:37
  • 7
    if people want AI generated answers, there's a place for that. (it isn't here) Commented Feb 7, 2023 at 22:52
  • Not sure why this was downvoted, but I can understand why when everyone is so angry and ban-happy already... This is the least-effort and also most realistic way of fixing the problem now and forever. It will always work too, because ChatGPT will always produce grammatically correct answers (even if completely wrong), and a similarity algorithm should be feasible to implement (i.e., not too complex). It is also the only answer where it saves SO users from cleaning up and saves SO from becoming more authoritarian (it already is too authoritarian for many users...) Commented Feb 24, 2023 at 7:30
  • This is a great suggestion. I wrote an answer similar to yours but with the added usage of a tag to help StackOverflow filter out or correct AI generated answers. Unfortunately, the senior members of this forum do not encourage any type of interaction with known AI answers. I understand their reasoning. The big question is how will they be able to tell the difference otherwise? More AI detection? Is OK to use AI to "detect" AI generated answers, but it's not OK to submit AI generated answers? Seems like a paradox to me. Commented Jun 5, 2023 at 15:03
-31

I honestly believe ChatGPT is a powerful tool, but in reality it doesn't give exactly what we want. All the answers posted are from real legends who put their hands on the code and tried the solution. I would say it would be disrespectful to mix AI answers with human answers because the AI is trained from human answers.

1
  • 16
    ChatGPT is insanely great and is a powerful tool, but it will also produce answers that are totally wrong and/or made up. It can not be trusted. Commented Mar 30, 2023 at 13:33
-34

Yes, it should be banned. To the question of "How do we identify those posts?", consider that this problem is not new or unique to Stack Overflow. Plagiarism is a concern that spans broadly. One approach found in academia is to copy/paste answers back into ChatGPT and see if it responds to them as a continuation of a conversation; if it does, flag the post as AI-generated.

This is a potential technique, and one that scales. Multiple suggestions here imply that "you can tell it by looking at it", which isn't all that helpful because we can't expect people to reliably keep up with the potentially exponential flow of spam answers.

9
  • 24
    No, that's not at all accurate. ChatGPT just pretends. It can't recognize its own output. Commented Jan 18, 2023 at 1:07
  • 14
    The conversation chains in ChatGPT are separate. Every time you start a new chat, it keeps that session for that session only. I've seen nothing to imply that it persists any information across sessions. In fact, what I do for amusement is ask ChatGPT where my hometown is located, and it gives me new, amusing, and mostly wrong information every time. You can try to correct it, but if I start a new session it goes back to the nonsensical information again. Commented Jan 18, 2023 at 7:38
  • 1
    Re "...is to copy/paste answers back into ChatGPT and it will respond to it as a continuation of the conversation": Interesting. Do you have some examples and/or references for that? Commented Jan 18, 2023 at 18:55
  • @PeterMortensen openai-openai-detector.hf.space Commented Jan 19, 2023 at 16:40
  • Also this: nypost.com/2022/12/26/… Commented Jan 19, 2023 at 16:42
  • My main point is that SO isn't the only one suffering from this issue, and to try and concoct a solution internally is wacky. Smart people are already very far down this road, we have search engines to identify who they are. Commented Jan 19, 2023 at 16:43
  • 7
    this just isn't really relevant to the conversation, tbh. We aren't expecting users to go out and look for chatgpt answers to report. That's not what this is about. Mods and other involved users are already on the case and already know how to find such answers and deal with them. Obviously, report things you think are chatgpt answers, but we don't need this to be a witch hunt. Commented Jan 19, 2023 at 16:51
  • 8
    @n8. If that is your main point, you might want to edit your answer to actually say so. Right now the prominent focus is on a single suggestion that doesn’t actually work as described. Commented Jan 19, 2023 at 18:35
  • 3
    To test this, I pasted one of my own more extensive answers into ChatGPT and asked if it was ChatGPT's. It said it was. So the problem remains the same as always: ChatGPT confidently lies when it answers questions -- any and all questions. Commented Mar 25, 2024 at 0:57
-35

Banning all ChatGPT answers is a good temporary move, as it opens a time window in which the issue can be discussed thoroughly. But it shouldn't, IMO, be permanent, as ChatGPT could also help if used according to its capabilities and with its limitations acknowledged.

Consider the following situation:

Someone sees a question they know the answer to.

They don't want to worry about the structure of the answer, so they use ChatGPT with a prompt that directs it towards the correct answer to the problem (for example, if the question is "How do I remove and get the last element of an array?", a possible prompt might be "Using the array.pop method, write a Stack Overflow answer to the question ...").

They then check and verify the answer to see if ChatGPT has made any mistakes, and either direct ChatGPT to correct the answer or correct it themselves.

They then post the answer.
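For reference, the example question in these steps ("How do I remove and get the last element of an array?") has a one-line JavaScript answer that a human can verify in seconds, which is exactly the kind of case where checking the generated text is cheap:

```javascript
// Array.prototype.pop() removes the last element in place
// and returns that element (or undefined for an empty array).
const items = [1, 2, 3];
const last = items.pop();

console.log(last);  // 3
console.log(items); // [ 1, 2 ]
```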

Is this helpful to SO? I would assume it is, as the user who asked the question gets an answer that works, and the answerer spends less time formulating and explaining the answer and more time worrying about its correctness. As ChatGPT is a language model, here it would have been used correctly according to its capabilities (language, not programming - the programming knowledge comes from the answerer).

Should this be banned permanently? Permanently banning all ChatGPT answers means this should be banned as well, even though it actually is helpful to the Q&A format we have going on here.

Bad ChatGPT answers are just bad answers, and I don't think we should have another rule specifically for ChatGPT. Spamming good-looking but bad answers with AI tools and abandoning them to "see the numbers go up" should be the behavior that is banned.

11
  • 9
    The problem is that users have been proven to be untrustworthy, and have been going for quantity over quality when using the AI to generate answers. Besides, answers that are simple enough for the AI to consistently answer correctly generally have excellent dupe targets on SO... TL;DR: There's nothing of value to SO, generated from CGPT. Commented Dec 13, 2022 at 16:44
  • 4
    The use case described by this answer would be a great way to use ChatGPT. Unfortunately, it's not how it's currently used. In the long run, we have to find a way that responsible use of AI can be allowed while still having methods to prevent flooding the site with garbage. Commented Dec 13, 2022 at 17:41
  • 1
    I had a similar thought: What if you have the skills and willingness to validate the AI-generated answer, even if you didn't come up with it? Such good-faith use of a chat AI by an actual expert would be completely indistinguishable from a human answer, and such use would be unenforceable. We're being asked here to self-enforce this ban on ourselves. If you don't want to self-enforce this ban on yourself, others may never notice, but you're crossing a different line here by publicly advocating that others not self-enforce themselves. Commented Dec 13, 2022 at 18:36
  • 11
    There really isn't much value in carving out an exception for cases where the tool is used in a way such that it won't be recognizable as ChatGPT and is actually producing valuable content; people using it have already proven that they're overwhelmingly unwilling to take those steps. People who are taking those steps aren't spamming the site with nonsense posts, but they're still potentially posting plagiarized content. Commented Dec 13, 2022 at 18:45
  • 1
    "Is this helpful to SO?" To answer the question in the answer: Yes it is helpful. Caveat: only a minority will actually do it like this. But otherwhise it's a valid idea. Commented Dec 13, 2022 at 19:48
  • 2
    The main problem though, with this suggestion of augmented answering, is that you never know when the chat bot will randomly introduce fake or false information to support its possibly correct larger point. These nuanced false points might seem harmless if for example we are talking about resolving the argument of who won the last World Cup. The chat bot would perhaps make a comment like "Argentina won the 2022 World Cup. It was their third time matching up against France in the finals, and only the second time Messi won". That last part, unless you are truly informed, may seem real. Commented Dec 24, 2022 at 7:28
  • 4
    In code, it can be dangerous to use those extra parts, especially since nuance is often very important (for example a wrong conversion to let's say, oh I don't know, feet to meters #mars). Perhaps with things of a more lenient nature, these white lies don't particularly harm anything, but when people's savings, kids, or lives are at stake, we can't just lightly gloss over such glaring inaccuracy. Commented Dec 24, 2022 at 7:28
  • @TravisJ of course I am not suggesting that people blindly copy and paste those answers directly from ChatGPT; rather, here the tool would be used just as a glorified sentence generator, and the output it generates should then be left to the human operator to check. Commented Dec 27, 2022 at 22:25
  • 4
    @kahveciderin - Unfortunately, a very large sample size of the minority has proven they cannot be trusted to use ChatGPT to help them submit an answer. They have proven they do not have the knowledge to confirm the answer content is correct, accurate, etc. That sample size has been extrapolated across multiple communities. Commented Dec 28, 2022 at 12:31
  • The main problem with this scenario, I think, is that ChatGPT doesn't check for duplicates, oops...! Commented Jan 15, 2023 at 3:17
  • I am against permanent ban, but not for the reasons given here. Everyone should be allowed to learn from their mistakes. It is as simple as that. Banning for a long time - sure. It needs to hurt to sink in. Permanent banning is needlessly extreme. Commented Mar 3, 2023 at 14:20
-38

Why not have all new questions include an automated answer by an official Stack Overflow ChatGPT account, with a clear indication that this is the ChatGPT response? Maybe even show the user the ChatGPT answer before the question is posted, to reduce duplicate/low-quality questions.

This way, it just gets the ChatGPT controversy out of the way... ironically, by embracing it. If the answer works, then great. If it doesn't work, well now at least there is a Stack Overflow sanctioned answer written by ChatGPT to compare new answers against. But if there's already a ChatGPT answer, why would anyone answer it with another ChatGPT answer?

If the problem is users abusing questions with quick, low quality answers... well that's a different problem; those users will always exist.

18
  • 9
    Because, to quote the question post: Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers. Commented Mar 30, 2023 at 0:50
  • 27
    I don't get where you people are getting "the ChatGPT answer". There are multiple requests for this but they all fail at that point because ChatGPT doesn't give one answer. It generates statistically plausible text which can be different answers. Even if we assume that SO will show a ChatGPT answer at asking time and/or after posting a question, that answer need not be the same as you'd get from generating again. The claim that it would discourage others from posting a ChatGPT generated answer is basically unbacked by anything as it's not just the same text spewed out. Commented Mar 30, 2023 at 4:44
  • 12
    For instance, here are two different ChatGPT responses to your suggestion. They're similar-ish, but about 20 seconds of editing could eliminate the most obvious parts. Also, an answer being posted already definitely doesn't stop people from posting the same general idea again. Commented Mar 30, 2023 at 4:52
  • 14
    Self-contradictory (and wrong) within the same sentence: "Is the correct spelling "StackOverflow"?". Response: "Yes, that is correct. "Stack Overflow" is a popular question and answer website for programmers, and the correct spelling is "StackOverflow."" Commented Mar 30, 2023 at 13:40
  • 3
    And it only wants to please you (right or wrong): "Are you sure about the spelling?". Response: "I apologize for the confusion. While "StackOverflow" is a common way of writing the name of the website, the correct spelling is actually "Stack Overflow" with a space between the two words. Thank you for bringing this to my attention." Commented Mar 30, 2023 at 14:03
  • 1
    I'd support this. Seeing a GPT generated "here is an AI's best attempt" next to user created content would be good for the site. Commented May 3, 2023 at 19:56
  • 2
    @Yakk-AdamNevraumont Good, how? Users asking questions generally don't have the best understanding of the subject, and a convincingly written wrong answer would do more harm than good. Commented Jun 5, 2023 at 14:10
  • @Cerbrus 1. It sometimes isn't wrong, 2. It competes with non-official AI answers, 3. It can provide a basis for someone's correct answer. Also, labeling it as "AI's best guess" also gives a comparison between what humans answer and what AIs answer. Commented Jun 5, 2023 at 15:56
  • 1
    @Yakk-AdamNevraumont if one wants AI's best guess, they can go to a resource that already provides that. It makes no sense to add a limited version of that to SO. Commented Jun 5, 2023 at 15:59
  • I like this idea. Having an AI user who would answer questions would be a fine way of training a model. The model could be refined by using up/down votes and commenting on its answers. Users could allow or ban the AI account from providing answers. Commented Jun 5, 2023 at 16:40
  • @KevinB Except, why not keep the eyeballs here? Commented Jun 5, 2023 at 17:03
  • 4
    "It sometimes isn't wrong" Well isn't that great! Dude, that's not even close to being good enough. Commented Jun 6, 2023 at 7:22
  • 2
    I would support an experiment to provide a ChatGPT-generated answer on the site as an option for the asker, but that doesn't mean it should be posted as an actual answer for future visitors to the question. When it works, it would probably help reduce the number of trivial and duplicate questions. (Flabbergastingly, that's apparently not what the company wants, probably because that would reduce advertising revenue; but it would help maintain the quality of the site, and reduce the need for manual administration.) Commented Jun 13, 2023 at 6:13
  • 1
    @jei also, just from a purely logical standpoint, the suggestion makes zero sense. Tell me, what is "an AI answer"? This is not a rhetorical question - if you believe there is a single canonical, unique answer that AI will generate - well, you're wrong. Which is also the same flaw in this suggestion here. Feed the same question to a gen-AI system multiple times and you can get multiple different responses. Not because of information changing or model updates - the answers simply aren't deterministically repeatable. Commented Jul 29 at 8:13
  • 1
    @jei so, how about because the idea fails on multiple points that I spent two comments detailing? Every aspect of the suggestion is flawed. I don't simply disagree; I actually have facts to back it up. Again: multiple people have posted AI-generated answers to the same questions, even when an AI-generated post was already there. In general, posters of AI-generated content are not concerned with quality, but simply with posting content for rep farming or other purposes. Thus the preventative measure described here not only couldn't work, it didn't, based on the data we already had. Commented Jul 29 at 8:25
