Update: Sept 24, 2025
We have released various improvements and updates today:
- stackoverflow.ai now uses the most recent model to improve accuracy where SO & SE don't have valid answers.
- Improved accuracy via search result relevancy by removing negatively scored content from search results.
- Improved responses to help users understand what stackoverflow.ai can be used for when they ask questions like, "what are you" and "what can you do".
- Improved response loading UX so that elements don't jump around.
- Updated the response structure to provide more context from SO & SE content, remove redundancy, and general quality improvements.
Update: Sept 12, 2025
The second iteration of RAG+LLM is available now, with some big updates:
- We now use blockquotes instead of text quotes to accommodate more context from SO & SE sources and display code blocks properly. This means users will see more unfiltered content from authors.
- Citations are simple and numeric to stay out of the reader's way while providing a clear link to the source cards.
- Every response includes a link to communities to explore and post questions if stackoverflow.ai wasn't enough help. This is the first iteration of an off-ramp to the community but we plan to make it dynamic (ideally suggesting communities) and more prominent.
Update: Sept 5, 2025
Effective today, stackoverflow.ai has been updated with the following changes, many of which address concerns expressed in the initial responses to this post.
- Conversational context is retained during further queries in a session
- Quotes, sources and inline citations are displayed more clearly and consistently
- The sections in a response may be more varied, depending on the query
- When a query is submitted, the view is auto-scrolled to the top of the response
Original post
Today (September 2, 2025) the stackoverflow.ai experiment gets a substantial update.
The origin story
stackoverflow.ai, also linked as “AI Assist” in the left navigation, was launched as a beta on July 9, 2025. Its goals included:
- A new way to get started on Stack Overflow. The tool can help developers get unblocked instantly with answers to their technical problems, while helping them learn along the way and providing a path into the community.
- A familiar, natural language experience that anyone who has interacted with genAI chatbots would expect, but further enriched with clear connections to trusted and verified Stack Overflow knowledge.
- A user-friendly interface with conversational search and discovery.
- A path, when the genAI tool isn’t providing the solution they need, to bring their question to the Stack Overflow community via the latest question asking experience, including Staging Ground.
stackoverflow.ai was built as “LLM-first”:
- Submit the user’s query to an LLM and display the LLM’s response to the user
- Analyze the response from the LLM and search SO & SE for relevant content
This resolved the two issues with our past Retrieval Augmented Generation (RAG)-exclusive approach (irrelevant results and lack of results), and we're seeing diverse usage of stackoverflow.ai: traditional technical searches (help with error messages, how to build certain functions, what code snippets do), comparisons of different approaches and libraries, requests for help architecting and structuring apps, and learning about different libraries and concepts.
The community identified a significant issue: this approach did not provide appropriate attribution to Stack creators. This was a consequence of the "LLM-first" design, where the response was not rooted in source content, and LLMs cannot return attribution reliably.
What’s changed?
We shifted to a hybrid approach that helps developers get answers instantly, learn along the way, and find a path into the largest community of technology enthusiasts.
A response is created using multiple steps via RAG + multiple rounds of LLM processing:
- When a user searches, the search is executed across SO & SE for relevant content and includes the use of a re-ranker.
- Relevant quotes are pulled from the top results from SO & SE, including attribution.
- We created an AI Agent to act as an "answer auditor": it reads the user's search and the quotes from SO & SE content, analyzes them for correctness and comprehensiveness, and supplements them with knowledge from the LLM where needed.
- If the search does not find any relevant content from SO & SE, the AI Agent is instructed to answer the user’s question as best it can using the LLM.
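The steps above can be sketched as a small pipeline. This is not the production implementation; every function name, the keyword-overlap "re-ranker", and the prompt wording below are placeholder assumptions standing in for the real SO & SE search, re-ranker, and LLM calls.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    text: str
    author: str
    url: str

def search_and_rerank(query, corpus):
    """Stand-in for SO & SE search plus the re-ranker: a naive
    keyword-overlap score, sorted highest first, zero-score docs dropped."""
    tokens = set(query.lower().split())
    scored = [(len(tokens & set(doc["text"].lower().split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, key=lambda p: p[0], reverse=True) if score > 0]

def extract_quotes(results, limit=3):
    """Pull attributed quotes from the top-ranked results."""
    return [Quote(d["text"], d["author"], d["url"]) for d in results[:limit]]

def audit_and_answer(query, quotes, llm):
    """The 'answer auditor' step: ground the prompt in sourced quotes
    when we have them; otherwise fall back to a pure-LLM answer."""
    if quotes:
        grounded = "\n".join(f"> {q.text} ({q.author}, {q.url})" for q in quotes)
        return llm(f"Supplement these sourced quotes:\n{grounded}\n\nQuestion: {query}")
    return llm(f"No community sources found. Answer directly: {query}")
```

With a stub LLM (e.g. `llm = lambda prompt: prompt`), a query matching corpus content produces a prompt built around attributed blockquotes, while an unmatched query falls through to the pure-LLM branch, mirroring the two cases described above.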
The interface design and answer presentation have been updated to make this source distinction clear to users, including which parts of the answer are from SO & SE and which are from the LLM. We conducted a research study with network moderators that was a key part of developing this design, so many thanks to those who participated in that.
Our goal with this update is to differentiate stackoverflow.ai from other AI tools: it prioritizes trusted, community-verified knowledge before other sources, then lets the LLM fill in any knowledge gaps to provide a complete answer with clear sources. And there is still a path into the community to get more help or dive deeper (this feature is coming soon).
In this Stack Overflow Podcast episode, we take a deeper dive into how we developed this approach. At the bottom of this post, you’ll see some visuals of a query and response. Here are some details to go with those images.
Figure A (Initial response) - Sources are now presented up top, expanded by default, so you can see where the answer comes from without having to click. Licensing information is displayed as well. The response itself uses direct quotes from trusted community contributions, with every piece of linked content traceable to its origin.
Figure B (Scrolled lower, with hover) - The inline citations are more than just footnotes. Hover over them to see a popover with a clear link back to the source (this feature is coming soon). If you still don’t find what you need, we’ll be offering an “ask the community” pathway, linking to the question-asking flow.
What’s next?
We know that some in the community will be concerned that this is still not enough to showcase the full scope of the human community behind this information. In this new world, human-centered sources of knowledge are obscured, and this is one step toward counteracting that. The old front door was Google, and while that was not without its challenges, at least the search results page had clear links to the sources. We’re working now to build a new front door that meets new expectations and still provides a way in.
We’re confident in this direction based on what we’re seeing so far. The majority of queries are technical, meaning they come from the type of users we want to capture. There is more positive feedback than negative coming back within the product. The demographics also show a different set of users than stackoverflow.com, so there are good signs here for acquiring new community members over time.
We’ll continue to work on creating clear roads into the community — for example, we expect to iterate on the design of that “ask the community” pathway. That said, there are probably limits to what this interface can showcase while remaining focused on the user’s specific need of the moment.
Upcoming work in the near term will be focused on:
- accuracy
- consistency
- context retention
- loading time
- determining the best way to connect users to the question-asking flow
  - Figure B shows the intended first iteration of this path, with a link above the input field; it is not present in today’s release
Please continue to provide feedback here, and as you try out the experience, through the thumbs-up/down & free text option as well.