17 events
Jun 2 at 21:09 comment added Marshall Eubanks Even here, where you'd expect to see an above-average rate of hype skepticism, you see a great many posters enthralled by the spectacle of this most convincing Mechanical Turk. But of course, regurgitating aggregated arguments "for" or "against" frequently-debated topics is where these models excel, whereas determining the empirically correct answer is an absolutely orthogonal problem. But that won't stop rubes from rubing, and before we know it there will be a paucity of human-generated content; with nothing left of value for AI to mine, human technology will collapse in upon itself.
May 12 at 6:34 comment added Neil S3ntence Correct. The only benefit I see for users (questioners, not answerers) is to have "better" / "curated" AI answers here, compared to what they'd get straight from the horse's mouth over at the GPTs. If AIs generate content on SO and also use this as a reference source, we have a self-referencing feedback loop, as someone pointed out correctly. Which might not be a good thing. I phrase it carefully on purpose because, in theory, better-structured questions could mean higher-quality answers (I observe that in my own GPT use).
Oct 17, 2023 at 14:58 comment added VLAZ @andrewpate I don't see why users who want to see AI generated content won't just go to the AI generator directly. Why should this be on SO?
Oct 17, 2023 at 14:56 comment added andrew pate One option might be for users to have the option to show AI-generated content or not. That setting could be off by default, but if users want AI-generated content then they have the freedom and choice to enable the ability to see it. Freedom. It also encourages users to create an account if they can set preferences.
Oct 17, 2023 at 14:48 comment added andrew pate ChatGPT seems to give fairly useful responses, but they're not always correct. Sometimes, if you chat with it rather than using a single question-response exchange, its second or third attempt is usually good. What I propose is that ChatGPT content should be shown or tagged as probably answered by ChatGPT. That way it can still contribute, but users reading the answer are aware it was generated by AI and can be more cautious until it gets upvotes. If it can provide an answer at least as good as a junior dev, then it's probably more helpful than harmful.
Sep 20, 2023 at 10:04 comment added Shayne @AndreyBienkowski which could prove pesky, since we don't yet know how to successfully filter out ChatGPT content (most of the "GPT detector" tools out there perform roughly as well as a coin flip, and that's causing catastrophes, since students who didn't cheat apparently keep getting accused of cheating).
Jun 2, 2023 at 13:32 comment added Andrey Bienkowski When (not if) a substantial amount of ChatGPT (or other AI) generated text accumulates on the internet, the researchers training new models will likely have to use automated tools to filter it out of the training corpus.
Feb 21, 2023 at 10:52 history edited bad_coder CC BY-SA 4.0
Made punctuation regular.
Feb 8, 2023 at 21:53 comment added d3hero23 @HereticMonkey A simple solution is to require people to tag the post as a GPT answer, not allow it to be voted as the best answer, and finally ban people who consistently post false or incomplete answers.
Jan 25, 2023 at 17:44 comment added Todd A. Jacobs "I'd much rather see AIs working in service of human judgement and synthesis than the other way around" is a sub-specialty called Explainable Artificial Intelligence (XAI). That science is in its infancy. Don't expect it soon.
Jan 9, 2023 at 18:56 comment added Tim D To address @Dims's question, there is a bit of a philosophical question here: what is the technology for? SO has been deliberately designed to help humans curate knowledge created by other humans, hence the badges, reputation, profile decorations, etc. Humans posting ChatGPT answers upend all of that. If you want an answer from ChatGPT, go ask ChatGPT; it's easy enough. Participants getting their answers from ChatGPT are undermining the reputation system; SO leadership was right to ban it.
Jan 8, 2023 at 17:23 comment added Arne Babenhauserheide Very much this, yes. We can’t have models consuming their own output as target data, else they will bias their own output.
Dec 29, 2022 at 21:10 comment added Heretic Monkey @boldnik The number of incorrect answers being posted is the problem. Blowing off the problem by saying "SO should adapt [I'm assuming you meant adapt (change to work with the new tech) as opposed to adopt (implying SO should start using AI)], not reject." just doesn't scale. People can copy/paste a question into ChatGPT and copy/paste the answer back far faster than others can come up with well-suited, thoughtful, complete, correct answers (not to mention find duplicates or check and verify the incorrect answers).
Dec 29, 2022 at 19:28 history edited Tim D CC BY-SA 4.0
Added my second concern and a link to information on how it works.
Dec 18, 2022 at 23:36 comment added Dims What is bad about it?
Dec 9, 2022 at 1:37 comment added magnump0 mmmm... nnaah... I'd vote for a high penalty for users who post answers which do not work. AI is inevitable, so it's a question of SO to adopt, not reject.
Dec 5, 2022 at 16:14 history answered Tim D CC BY-SA 4.0