Playbook series: Right-fit tooling

November 14, 2025 // 5 min read


The pace of AI innovation demands a strategy—not a shopping spree.

Published via GitHub Executive Insights | Authored by Matt Nigh, Program Management Director of AI for Everyone

Choosing the right AI tools is one of the biggest challenges for leaders today. The market is overwhelming, and common pitfalls can paralyze your organization.

In our Playbook for building an AI-powered workforce, we introduced the core principles for building an AI-native organization. This guide dives deeper into one of the most critical components: a "right-fit" AI tooling strategy. We'll show you how to get past not knowing where to begin and build a curated portfolio of vetted, high-impact tools that empower your teams and accelerate your company's AI vision.

A portfolio approach to AI tools

Your tooling strategy should be guided by a simple principle: make it easy for employees to do the right thing. This means providing a core set of powerful, vetted, and well-supported tools that help people get their work done, whether they work in engineering, design, operations, or HR. This curated portfolio becomes the default, safe choice for everyone.

At GitHub, we think about our toolset in three key categories:

  • Core enterprise tools: These are the "green light" tools that are fully vetted, have enterprise-grade security, and are approved for a wide range of data types. They are the foundation of your AI-powered workforce. Think of tools like GitHub Copilot, Microsoft 365 Copilot, or Google's Gemini for Workspace. They provide a secure, default starting point for everyone.
  • First-party tools: These are the AI-powered products and features you build yourself. They create a powerful feedback loop between your employees and your product teams, accelerating development and ensuring your products meet real-world needs. For us, these tools are primarily the GitHub Copilot portfolio of products and VS Code.
  • Department-specific tools: Not every need will be met by your core tools. A sales team might need a specialized AI-powered CRM feature, while a marketing team might benefit from an advanced analytics tool. Empowering departments to select these tools is key, but it requires a lightweight yet rigorous approval process to manage risk.

This model doesn't require a full security review for every single tool, but it should build confidence that you are creating a solid foundation, and that smart, decentralized decisions can scale without compromising safety.
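
To make these tiers concrete, here is a minimal sketch (in Python, purely for illustration) of how a curated portfolio could be captured in a machine-readable form. The tier names, the Tool fields, and the example entries are assumptions for this sketch, not GitHub's internal schema.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    CORE_ENTERPRISE = "core-enterprise"   # fully vetted, approved for broad data types
    FIRST_PARTY = "first-party"           # AI products and features you build yourself
    DEPARTMENT = "department-specific"    # approved for one team's workflows

@dataclass
class Tool:
    name: str
    tier: Tier
    approved_data_types: list[str]  # e.g. ["public", "internal"]
    owner: str                      # team accountable for support and review

# Illustrative entries mirroring the categories above.
portfolio = [
    Tool("GitHub Copilot", Tier.CORE_ENTERPRISE, ["public", "internal"], "IT"),
    Tool("Microsoft 365 Copilot", Tier.CORE_ENTERPRISE, ["public", "internal"], "IT"),
    Tool("AI-assisted CRM features", Tier.DEPARTMENT, ["customer"], "Sales"),
]
```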

Evaluating new tools

When a new AI tool shows up on your radar, you need a fast, structured way to decide whether it’s worth exploring, adopting, or investing in. Here’s a simple framework you can use.

Tips for evaluating AI tools

When a new tool is submitted, evaluate it against a consistent set of criteria. Frame them as questions to guide the review:

  1. What problem does it solve? (Functionality): Is it a niche tool or a broad platform? Does its core function address a high-value business need?
  2. Is it a standalone solution or part of a platform? (Ecosystem): Does the tool integrate with your existing systems? A tool that fits into your current tech stack is often more valuable than a standalone product that creates another data silo.
  3. Where does our data go? (Data sensitivity & flow): This is the most critical question. What data does the tool access? Where is it stored? Is it used to train public models? You must have a clear understanding of the data lifecycle.
  4. Does it meet our security standards? (Security & compliance): Does the tool support single sign-on (SSO)? What are its data retention policies? Does it comply with relevant regulations like GDPR or CCPA?
  5. What is the expected benefit and/or return on investment? (Value & ROI): How will this tool impact the business? While direct revenue attribution can be difficult, consider proxies for value. Will it significantly speed up a common workflow? Does it reduce a specific type of risk? A tool with a high price tag may be well worth it if it delivers substantial time savings for a large team.
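
As a rough sketch of how these five questions can feed directly into the intake process described next, here is one way to pair each criterion with a submitter's answer and flag gaps for follow-up. The criterion keys and the build_intake_record helper are hypothetical, not part of any particular product.

```python
# The five review criteria, phrased as the questions a submitter should answer.
EVALUATION_CRITERIA = {
    "functionality": "What problem does it solve? Niche tool or broad platform?",
    "ecosystem": "Standalone solution, or does it integrate with our existing stack?",
    "data_sensitivity": "What data does it access, where is it stored, and is it used for training?",
    "security_compliance": "Does it support SSO? What are its retention and GDPR/CCPA positions?",
    "value_roi": "What is the expected benefit: time saved, risk reduced, workflows accelerated?",
}

def build_intake_record(tool_name: str, answers: dict[str, str]) -> dict:
    """Pair every criterion with the submitter's answer; unanswered items are flagged for follow-up."""
    return {
        "tool": tool_name,
        "responses": {
            key: answers.get(key, "MISSING - follow up with submitter")
            for key in EVALUATION_CRITERIA
        },
    }

record = build_intake_record(
    "Example analytics assistant",
    {"functionality": "Summarizes campaign data", "value_roi": "~2 hours/week per marketer"},
)
print(record["responses"]["data_sensitivity"])  # -> "MISSING - follow up with submitter"
```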

The evaluation workflow

  1. Submission form: Create a simple intake form where employees can submit new tools, answering the five questions above.
  2. Triage workflow: Designate a small review committee (e.g., from IT, Security, and a business unit lead) to triage submissions weekly or bi-weekly.
  3. Security and compliance review: For tools that pass triage, conduct a lightweight review using predefined risk checklists to speed up this step.
  4. Pilot and feedback: For promising tools, set up a short pilot with clear success criteria. Collect feedback and security signals before a full rollout.
  5. Decision and documentation: Publish all decisions, rationales, and tier placements in a central tool registry to maintain visibility and accountability.
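
As an illustration of step 5, the sketch below shows one way a decision could be appended to a central registry file. The ToolDecision fields, the publish_decision helper, and the tool-registry.json path are assumptions made for this example, not a prescribed format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date
from pathlib import Path

@dataclass
class ToolDecision:
    """One entry in the central tool registry from step 5."""
    tool: str
    decision: str          # e.g. "approved", "pilot", "rejected"
    tier: str              # e.g. "core-enterprise", "department-specific"
    rationale: str
    reviewers: list[str]   # e.g. ["IT", "Security", "Marketing lead"]
    decided_on: str

def publish_decision(registry_path: Path, decision: ToolDecision) -> None:
    """Append a decision to a JSON registry so every outcome stays visible and auditable."""
    registry = json.loads(registry_path.read_text()) if registry_path.exists() else []
    registry.append(asdict(decision))
    registry_path.write_text(json.dumps(registry, indent=2))

publish_decision(
    Path("tool-registry.json"),
    ToolDecision(
        tool="Example analytics assistant",
        decision="pilot",
        tier="department-specific",
        rationale="High expected time savings for marketing; data flow confirmed during review.",
        reviewers=["IT", "Security", "Marketing lead"],
        decided_on=str(date.today()),
    ),
)
```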

Running a pilot

A successful pilot is more than just giving people access to a new tool; it's a structured experiment designed to answer key questions about a tool's value, usability, and risks before making a long-term commitment.

1. Assemble your pilot group

The right testers provide the right feedback.

  • For broad, enterprise-wide tools: Lean on your established group of "AI advocates" or internal champions. These enthusiastic early adopters are often more resilient to the rough edges of new technology and can provide insightful, high-signal feedback.
  • For department-specific tools: The pilot group should be the team that will actually use the tool daily. A new AI feature for a CRM should be tested by the sales team, not IT. This ensures the feedback is grounded in real-world workflows.

2. Structure for success

A well-defined structure is key to getting clear results.

  • Define success metrics: What does success look like? Is it saving three hours per week per user, increasing lead conversion by 5%, or achieving an 80% user satisfaction score? Set clear, measurable goals from the start.
  • Time-box the pilot: A typical pilot should run for 2-4 weeks. This is long enough for users to form habits and provide meaningful feedback, but short enough to maintain momentum.
  • Provide lightweight training: Host a quick kickoff session to demonstrate the tool's core functionality and explain the pilot's goals. Record the session to serve as an initial training resource. Don't assume users will figure it out on their own.
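
To show how defined success metrics can be checked at the end of a pilot, here is a small sketch; the metric names, targets, and measured results below are made-up examples, not benchmarks.

```python
# Hypothetical targets set at kickoff and results measured during the 2-4 week pilot.
SUCCESS_CRITERIA = {
    "hours_saved_per_user_per_week": 3.0,
    "user_satisfaction_pct": 80.0,
}

pilot_results = {
    "hours_saved_per_user_per_week": 3.4,   # from weekly surveys
    "user_satisfaction_pct": 76.0,          # from the wrap-up survey
}

def evaluate_pilot(criteria: dict[str, float], results: dict[str, float]) -> dict[str, bool]:
    """Mark each metric as met or missed; a miss should prompt discussion, not automatic rejection."""
    return {metric: results.get(metric, 0.0) >= target for metric, target in criteria.items()}

print(evaluate_pilot(SUCCESS_CRITERIA, pilot_results))
# {'hours_saved_per_user_per_week': True, 'user_satisfaction_pct': False}
```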

3. Gather actionable feedback

Good feedback is specific and contextual.

  • Use a mix of channels: Combine a dedicated chat channel for real-time questions, short weekly surveys to track sentiment, and a final wrap-up session or detailed survey to capture comprehensive thoughts.
  • Ask specific questions: Go beyond whether or not someone liked using the tool. Ask questions like, "Describe a specific task where this tool saved you time," "Where did the tool get in your way or slow you down?" and "What's the one thing you wish this tool could do that it can't?"
  • Look for qualitative and quantitative signals: A satisfaction score of 7/10 is useful, but a direct quote about how the tool streamlined a frustrating, manual process is often more powerful when making a final decision.
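
If it helps, here is a tiny sketch of how scores and quotes from a wrap-up survey might be rolled up together; the response format and the summarize_feedback helper are assumptions for illustration only.

```python
# Illustrative wrap-up responses: a numeric score plus the quote that explains it.
responses = [
    {"score": 7, "quote": "Drafting the weekly report went from an hour to ten minutes."},
    {"score": 8, "quote": "It kept suggesting fields our CRM doesn't have."},
    {"score": 6, "quote": ""},
]

def summarize_feedback(responses: list[dict]) -> dict:
    """Average the numeric scores and keep the quotes that give them context."""
    scores = [r["score"] for r in responses]
    return {
        "avg_score": round(sum(scores) / len(scores), 1),
        "quotes": [r["quote"] for r in responses if r["quote"]],
    }

print(summarize_feedback(responses))  # avg_score 7.0, plus the two contextual quotes
```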

From reactive to empowered

Implementing a structured evaluation and piloting process isn't just about managing risk; it's about building a culture of intentional innovation. By empowering your employees with the right tools, not just the most tools, you create a resilient AI strategy that can adapt as quickly as the market itself.

This process keeps evaluation fast and scalable while maintaining the rigor needed to protect the business. The goal is to move from reactive decision-making to a proactive state of empowerment, accelerating AI adoption responsibly and unlocking your team's full potential.


Want to learn more about the strategic role of AI and other innovations at GitHub? Explore Executive Insights for more thought leadership on the future of technology and business.
