21 events
Feb 8, 2024 at 14:15 comment added Security Hound This answer clearly demonstrates a lack of knowledge about ChatGPT and how its responses are generated. You can tell ChatGPT it's wrong and it will apologize, even if it's right, and generate a new response to correct its mistakes. Even then, on some subjects it will ignore your correction even when it is wrong. ChatGPT is useless.
Jan 30, 2024 at 22:20 comment added user400654 "AI" solutions that are capable of citing sources aren't working like your standard LLM prompt. They start with a different kind of AI, not an LLM, to perform a more typical search before sending the results into an LLM as a prompt to generate the response. It'd be incorrect to claim the LLM is sourcing its data, since the data that allows it to do the work it's doing isn't just the content in the prompt that the initial search found.
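For illustration, here is a minimal Python sketch of the retrieval-then-generation pattern user400654 describes; search_index and call_llm are hypothetical placeholders standing in for whatever search backend and model a real product would use, not any actual API.

    # Minimal sketch, assuming a conventional search step runs before the LLM.
    def answer_with_citations(question, search_index, call_llm, k=3):
        # 1. A search component finds candidate documents first.
        hits = search_index(question)[:k]  # e.g. a list of (url, snippet) pairs
        # 2. The retrieved text is pasted into the prompt; the LLM only sees this text.
        context = "\n\n".join(f"[{i + 1}] {snippet}" for i, (_, snippet) in enumerate(hits))
        prompt = (
            "Answer the question using only the numbered excerpts below.\n\n"
            f"{context}\n\nQuestion: {question}"
        )
        draft = call_llm(prompt)
        # 3. The cited sources come from the search step, not from the model's weights.
        sources = [url for url, _ in hits]
        return draft, sources

The point of the sketch is that the citations are produced by the search step; the LLM only rewrites the text it was handed.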
Jan 30, 2024 at 22:16 comment added Cerbrus @DavidG. LLMs like ChatGPT don't "do" sources. They can't weigh different training data differently unless explicitly trained to do so. It would make no sense for a generic LLM like ChatGPT to put excessive weight on SO sources, as that would result in lower-quality output when generating stories. It doesn't have live access to data. It doesn't know who posted what. It doesn't know anything about scores. You're making so many incorrect assumptions about how LLMs work... Please look into how they generate output. [2/2]
Jan 30, 2024 at 22:13 comment added Cerbrus @DavidG. No. LLMs don't have a concept of truth. Don't strawman me with theories about AI in general. LLMs like ChatGPT don't comprehend. They don't understand, they don't interpret. There is literally no concept of technical accuracy in their process. That's the entire problem with LLMs! They can't know if what they generate is true or not. That's simply not part of how they work. [1/2]
Jan 30, 2024 at 17:16 comment added user400654 An AI summary of an answer is an AI answer.
Jan 30, 2024 at 17:09 comment added David G. And as I wrote in the answer, I don't support AI answers.
Jan 30, 2024 at 17:06 comment added user400654 The point is there is no idea of truth that can be trusted to just be correct; even for human-provided content, the viewer is responsible for determining that. Having the AI rely on metrics that humans provided in the past but no longer provide, because the AI replaced the process that generated that feedback, means those metrics will be stale immediately.
Jan 30, 2024 at 17:04 comment added David G. Which is why I use the term "likely".
Jan 30, 2024 at 17:03 comment added user400654 I mean, no, lol, there are plenty of highly scored answers that are wrong.
Jan 30, 2024 at 17:02 comment added David G. OK, I should have mentioned high-scoring answers are also more likely "truth" than low-scoring ones.
Jan 30, 2024 at 17:00 comment added user400654 I mean, no, lol, there's a reason accepted answers are no longer pinned to the top.
Jan 30, 2024 at 16:57 comment added David G. @VLAZ I'm saying the policy should allow, and encourage, AIs whose developers want to improve the knowledge base. It may be that they don't do it now because they haven't been encouraged to do it. I feel it would be better for them to be encouraged than for them to have to come hat in hand.
Jan 30, 2024 at 16:51 comment added David G. @Cerbrus Completely wrong. AIs do have an idea of "truth". With SO as a source, an accepted answer is more likely "truth" than a non-accepted one. An answer from a high status poster is more likely "truth" than from a low status poster. I suspect this is already used. Having said that, it probably doesn't know truth in the real world, so it shouldn't be trained on an RPG corpus to answer medical questions.
Jan 30, 2024 at 8:21 comment added VLAZ So what you're suggesting doesn't match up with current reality. You want AI-generated questions to be allowed on the condition that the AI tool works in a way it simply does not work today. You're free to suggest to all AI tool makers that they should change their products to be able to do the introspection you expect of them. Once that is done, we can discuss changing what we do and do not allow for AI-generated questions.
Jan 30, 2024 at 8:17 comment added Cerbrus @DavidG. There's no "may or may not". LLMs do not have any idea of the concept of "truth". They're utterly clueless. What you suggest is beyond impossible, as LLMs are simply not capable of interpreting "truth".
Jan 29, 2024 at 22:33 comment added user400654 Self-monitoring? You mean using an LLM to determine if an LLM is accurate?
Jan 29, 2024 at 22:32 comment added David G. @VLAZ It would obviously require the AI to have a fair bit of self-monitoring built in. They may or may not be able to do that at this point. In any case, it's not for StackExchange / SO to do, but for the AI writers. The only part for StackExchange / SO is allowing the relevant user to exist and to ask questions.
Jan 29, 2024 at 19:51 comment added user400654 Ironically, that's what OverflowAI search is designed to do, isn't it? Find/summarize existing answers and, if there are none, help you generate a question.
Jan 29, 2024 at 18:47 comment added markalex None of this seems suitable for SO with the current quality of "understanding" provided by any of the models. What you describe sounds like a human-powered correction mechanism for models, and I don't see why this should be part of SO or the wider SE network.
Jan 29, 2024 at 18:34 comment added VLAZ And how exactly would an AI chatbot know it doesn't know something? Or why it doesn't know it? Often enough it just hallucinates an answer. The way to figure out it doesn't work is for an expert to have a look at it. If the expert finds the answer is nonsense, then they can write a question instead. Although, with that said - just because an AI tool is unable to properly answer a question, it does not mean the question is unique and never before seen.
Jan 29, 2024 at 18:15 history answered David G. CC BY-SA 4.0