The GitHub Blog https://github.blog/ Updates, ideas, and inspiration from GitHub to help developers build and design software.

How AI is reshaping developer choice (and Octoverse data proves it) https://github.blog/ai-and-ml/generative-ai/how-ai-is-reshaping-developer-choice-and-octoverse-data-proves-it/ Thu, 19 Feb 2026 17:00:00 +0000. AI is rewiring developer preferences through convenience loops. Octoverse 2025 reveals how AI compatibility is becoming the new standard for technology choice.


You know that feeling when a sensory trigger instantly pulls you back to a moment in your life? For me, it’s Icy Hot. One whiff and I’m back to 5 a.m. formation time in the army. My shoulders tense. My body remembers. It’s not logical. It’s just how memory works. We build strong associations between experiences and cues around them. Those patterns get encoded and guide our behavior long after the moment passes.

That same pattern is happening across the software ecosystem as AI becomes a default part of how we build. For example, we form associations between convenience and specific technologies. Those loops influence what developers reach for, what they choose to learn, and ultimately, which technologies gain momentum.

Octoverse 2025 data illustrates this in real time. And it’s not subtle. 

In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever. That’s the headline. But the deeper story is what it signals: AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.

[Chart: the top 10 programming languages on GitHub, 2023–2025. TypeScript rises to #1 in 2025, overtaking Python (#2) and JavaScript (#3); Java, C#, PHP, Shell, C++, HCL, and Go round out the top 10.]

The convenience loop is how memory becomes behavior

When a task or process goes smoothly, your brain remembers. Convenience captures attention. Reduced friction becomes a preference—and preferences at scale can shift ecosystems.

Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means.

When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead. The language adoption data shows this behavioral shift, right down to Shell’s place in the top 10.

Shell is the telling example. We didn’t suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

This is what Octoverse is really showing us: developer choice is shifting toward technologies that work best with the tools we’re already using.

The technical reason behind the shift

There are concrete, technical reasons AI performs better with strongly typed languages.

Strongly typed languages give AI much clearer constraints. In JavaScript, a variable could be anything. In TypeScript, declaring x: string immediately eliminates all non-string operations. That constraint matters. Constraints help AI generate more reliable, contextually correct code. And developers respond to that reliability.
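
To make that concrete, here’s a minimal TypeScript sketch (the function and interface names are invented for illustration):

// With no annotations, almost any completion is syntactically plausible,
// so an assistant has little to rule suggestions in or out.
function formatLoose(value: any) {
  return value.toUpperCase(); // fine if value is a string, a runtime error otherwise
}

// With an explicit type, the space of valid completions collapses:
// non-string operations on `name` are compile-time errors.
interface User {
  name: string;
  age: number;
}

function greet(user: User): string {
  return `Hello, ${user.name.toUpperCase()}!`; // string methods only
}

The annotation doesn’t make the model smarter; it shrinks the set of programs that compile, which is exactly the kind of constraint the data reflects.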

That effect compounds when you look at AI model integration across GitHub. Over 1.1 million public repositories now use LLM SDKs. This is mainstream adoption, not fringe experimentation. And it’s concentrating around the languages and frameworks that work best with AI.

[Chart: cumulative count of public projects using generative AI model SDKs, 2021–2025. The curve starts near zero and climbs steeply to over 1.1 million repositories by 2025.]

Moving fast without breaking your architecture 

AI tools are amplifying developer productivity in ways we haven’t seen before. The question is how to use them strategically. The teams getting the best results aren’t fighting the convenience loop. They’re designing their workflows to harness it while maintaining the architectural standards that matter.

For developers and teams

Establish patterns before you generate. AI is fantastic at following established patterns, but struggles to invent them cleanly. If you define your first few endpoints or components with strong structure, Copilot will follow those patterns. Good foundations scale. Weak ones get amplified.
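
As a hypothetical sketch of what that can look like (the Result envelope and endpoint names are invented, not from the post): if your first handlers share an explicit shape, generated code tends to reproduce it.

// A deliberate convention set in the first few handlers:
// every endpoint returns the same Result envelope.
type Result<T> = { ok: true; data: T } | { ok: false; error: string };

async function getUser(id: string): Promise<Result<{ id: string; name: string }>> {
  if (!id) return { ok: false, error: "missing id" };
  return { ok: true, data: { id, name: "Ada" } };
}

// Later endpoints, human- or AI-written, now have a template to imitate.
async function getRepo(name: string): Promise<Result<{ name: string; stars: number }>> {
  if (!name) return { ok: false, error: "missing name" };
  return { ok: true, data: { name, stars: 0 } };
}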

Use type systems as guardrails, not crutches. TypeScript reduces errors, but passing type checks isn’t the same as expressing correct business logic. Use types to bound the space of valid code, not as your primary correctness signal.
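
A tiny, hypothetical illustration of the gap between type-correct and correct:

// Type-correct but business-logic-wrong: the discount is added
// instead of subtracted, and the compiler is satisfied either way.
function applyDiscount(price: number, discountPct: number): number {
  return price + price * (discountPct / 100); // should be `-`
}

console.log(applyDiscount(100, 20)); // 120, when the intent was 80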

Test AI-generated code harder, not less. There’s a temptation to trust AI output because it “looks right” and passes initial checks. Resist that. Don’t skip testing.

For engineering leaders

Recognize the velocity jump and prepare for its costs. AI-assisted development often produces a 20–30 percent increase in throughput. That’s a win. But higher throughput means architectural drift can accumulate faster without the right guardrails.

Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see.

Track what AI is generating, not just how much. The Copilot usage metrics dashboard (now in public preview for Enterprise) lets you see beyond acceptance rates. You can track daily and weekly active users, agent adoption percentages, lines of code added and deleted, and language and model usage patterns across your organization. The dashboard answers a critical question: how well are teams using AI? 

Use these metrics to identify patterns. If you’re seeing high agent adoption but code quality issues in certain teams, that’s a signal those teams need better prompt engineering training or stricter review standards. If specific languages or models correlate with higher defect rates, that’s data you can act on. The API provides user-level granularity for deeper analysis, so you can build custom dashboards that track the metrics that matter most to your organization.
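
If you want the raw numbers behind a custom dashboard, here is a rough TypeScript sketch of pulling them. It assumes the org-level Copilot metrics REST endpoint and a suitably scoped token in GITHUB_TOKEN; the endpoint path and response fields are assumptions to verify against the current GitHub documentation.

// Sketch: pull daily Copilot metrics for an org. The endpoint path and
// field names are assumptions to check against the GitHub REST docs.
async function fetchCopilotMetrics(org: string): Promise<void> {
  const res = await fetch(`https://api.github.com/orgs/${org}/copilot/metrics`, {
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`, // token with metrics access
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);

  const days: Array<{ date: string; total_active_users?: number }> = await res.json();
  for (const day of days) {
    console.log(day.date, "active users:", day.total_active_users ?? "n/a");
  }
}

fetchCopilotMetrics("your-org").catch(console.error); // "your-org" is a placeholder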

Invest in architectural review capacity. As developers become more productive, senior engineering time becomes more valuable, not less. Someone must ensure the system remains coherent as more code lands faster.

Make architectural decisions explicit and accessible. AI learns from context. ADRs, READMEs, comments, and well-structured repos all help AI generate code aligned with your design principles.

What the Octoverse 2025 findings mean for you

The technology choices you make today are shaped by forces you may not notice: convenience, habit, AI-assisted flow, and how much friction each stack introduces.

💡 Pro tip: Look at the last three technology decisions you made. Language for a new project, framework for a feature, tool for your workflow. How much did AI tooling support factor into those choices? If the answer is “not much,” I’d bet it factored in more than you realized.

AI isn’t just changing how fast we code. It’s reshaping the ecosystem around which tools work best with which languages. Once those patterns set in, reversing them becomes difficult.

If you’re choosing technologies without considering AI compatibility, you’re setting yourself up for future friction. If you’re building languages or frameworks, AI support can’t be an afterthought.

Here’s a challenge

Next time you start a project, notice which technologies feel “natural” to reach for. Notice when AI suggestions feel effortless and when they don’t. Those moments of friction and flow are encoding your future preferences right now.

Are you choosing your tools consciously, or are your tools choosing themselves through the path of least resistance?

We’re all forming our digital “Icy Hot” moments. The trick is being aware of them.

Looking to stay one step ahead? Read the latest Octoverse report and try the Copilot usage metrics dashboard.

What to expect for open source in 2026 https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/ Wed, 18 Feb 2026 18:41:42 +0000. Let’s dig into 2025’s open source data on GitHub to see what we can learn about the future.


Over the years (decades), open source has grown and changed along with software development, evolving as the open source community becomes more global.

But with any growth comes pain points. In order for open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.

To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.

Growth that’s global in scope

In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany. 

What does this mean? It’s clear that open source is becoming more global than it was before. It also means that oftentimes, the majority of developers live outside the regions where the projects they’re working on originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now starting to become a reality for a greater number of projects.

Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.

One of the best ways to do this is through explicit communication maintained in areas like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support this community. Projects that don’t include these guidelines will have trouble scaling as the number of contributors increases across the globe. Those that do provide them will be more resilient, sustainable, and will provide an easier path to onboard new contributors.

The double-edged sword of AI

AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.

However, it has also created a lot of noise, or what is called “AI slop.” AI slop is a large quantity of low-quality—and oftentimes inaccurate—contributions that don’t add value to the project. Or they are contributions that would require so much work to incorporate, it would be faster to implement the solution yourself. 

This makes it harder than ever to maintain projects and make sure they continue moving forward in the intended direction. Auto-generated issues and pull requests increase volume without always increasing the quality of the project. As a result, maintainers need to spend more time reviewing contributions from developers with vastly variable levels of skill. In a lot of cases, the amount of time it takes to review the additional suggestions has risen faster than the number of maintainers.

Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.

This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help. There are also a number of open source AI projects specifically trying to address the AI slop issue. In addition, maintainers have been using AI defensively, using it to triage issues, detect duplicate issues, and handle simple maintenance like the labeling of issues. By helping to offload some of the grunt work, it gives maintainers more time to focus on the issues that require human intervention and decision making.
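
The plumbing for that kind of defensive automation is simple to sketch. In the hypothetical TypeScript example below, classifyIssue stands in for whatever model call or heuristic a project prefers; the Octokit calls are the only real API usage, and the owner/repo names are placeholders.

import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Stand-in for whatever model call or heuristic a project chooses.
async function classifyIssue(title: string, body: string): Promise<string[]> {
  return /crash|exception|stack trace/i.test(`${title} ${body}`)
    ? ["bug"]
    : ["needs-triage"];
}

async function triage(owner: string, repo: string): Promise<void> {
  const { data: issues } = await octokit.rest.issues.listForRepo({
    owner,
    repo,
    state: "open",
    per_page: 20,
  });
  for (const issue of issues) {
    // Skip pull requests and anything a human has already labeled.
    if (issue.pull_request || issue.labels.length > 0) continue;
    const labels = await classifyIssue(issue.title, issue.body ?? "");
    await octokit.rest.issues.addLabels({ owner, repo, issue_number: issue.number, labels });
  }
}

triage("your-org", "your-repo").catch(console.error); // placeholder names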

Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of their community infrastructure. To deal with this quantity of information, AI cannot be just a coding assistant. It needs to ease the pressure of being a maintainer and make that work more scalable.

Record growth is healthy, if it’s planned for

On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover basics, such as contributing your first pull request to GitHub, shows that a lot of these new developers are very much in their infancy in terms of comfort with open source. There’s uncertainty about how to move forward and how to interact with the community. Not to mention challenges with repetitive onboarding questions and duplicate issues.

This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.

The way to address this is going to be less about having individuals serving as mentors—although that will still be important. It will be more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:

  • Having a clear, defined path to move from contributor to reviewer to maintainer. Be aware that this can be difficult without a mentor to help guide contributors along this path.
  • Shared governance models that don’t rely on a single timezone or small group of people.
  • Documentation that provides guidance on how to contribute and the goals of the project.

By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.

But what are people building?

It can’t be denied that AI was a major focus—about 60% of the top growing projects were AI focused. However, there were several that had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

[Chart: the fastest-growing open source projects by contribution: zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core.]

Just as the developer space is growing on a global scale, the same can be said about the projects that garner the most interest. Projects that support a global community and address its needs are going to continue to be popular and have the most support.

This reinforces that open source is becoming a global phenomenon rather than a local one.

What this year will likely hold

Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.

For developers, this means that it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow—it’s how you can make that growth sustainable.

Read the full Octoverse report >

Securing the AI software supply chain: Security results across 67 open source projects https://github.blog/open-source/maintainers/securing-the-ai-software-supply-chain-security-results-across-67-open-source-projects/ Tue, 17 Feb 2026 19:00:00 +0000. Learn how the GitHub Secure Open Source Fund helped 67 critical AI‑stack projects accelerate fixes, strengthen ecosystems, and advance open source resilience.


Modern software is built on open source projects. In fact, you can trace almost any production system today, including AI, mobile, cloud, and embedded workloads, back to open source components. These components are the invisible infrastructure of software: the download that always works, the library you never question, the build step you haven’t thought about in years, if ever.

A few examples:

  • curl moves data for billions of systems, from package managers to CI pipelines.
  • Python, pandas, and SciPy sit underneath everything from LLM research to ETL workflows and model evaluation.
  • Node.js, LLVM, and Jenkins shape how software is compiled, tested, and shipped across industries.

When these projects are secure, teams can adopt automation, AI‑enhanced tooling, and faster release cycles without adding risk or slowing down development. When they aren’t, the blast radius crosses project boundaries, propagating through registries, clouds, transitive dependencies, and production systems, including AI systems, that react far faster than traditional workflows.

Securing this layer is not only about preventing incidents; it’s about giving developers confidence that the systems they depend on—whether for model training, CI/CD, or core runtime behavior—are operating on hardened, trustworthy foundations. Open source is shared industrial infrastructure that deserves real investment and measurable outcomes.

That is the mission of the GitHub Secure Open Source Fund: to secure open source projects that underpin the digital supply chain, catalyze innovation, and are critical to the modern AI stack. 

We do this by directly linking funding to verified security outcomes and by giving maintainers resources, hands‑on security training, and a security community where they can raise their highest‑risk concerns and get expert feedback. 

Why securing critical open source projects matters 

A single production service can depend on hundreds or even thousands of transitive dependencies. As Log4Shell demonstrated, when one widely used project is compromised, the impact is rarely confined to a single application or company.

Investing in the security of widely used open source projects does three things at once:

  • It reinforces that security is a baseline requirement for modern software, not optional labor.
  • It gives maintainers time, resources, and support to perform proactive security work.
  • It reduces systemic risk across the global software supply chain.

This security work benefits everyone who writes, ships, or operates code, even if they never interact directly with the projects involved. That gap is exactly what the GitHub Secure Open Source Fund was built to close. In Sessions 1 and 2, 71 projects made significant security improvements. In Session 3, 67 open source projects delivered concrete security improvements to reduce systemic risk across the software supply chain.


Session 3, by the numbers

  • 67 projects
  • 98 maintainers
  • $670,000 in non-dilutive funding powered by GitHub Sponsors
  • 99% of projects completed the program with core GitHub security features enabled

Real security results across all sessions:

  • 138 projects
  • 219 maintainers
  • 38 countries represented by participating projects
  • $1.38M in non-dilutive funding powered by GitHub Sponsors
  • 191 new CVEs issued
  • 250+ new secrets prevented from being leaked
  • 600+ leaked secrets were detected and resolved
  • Billions of monthly downloads powered by alumni projects

Plus, in just the last 6 months:

  • 500+ CodeQL alerts fixed
  • 66 secrets blocked

Where security work happened in Session 3

Session 3 focused on improving security across the systems developers rely on every day. The projects below are grouped by the role they play in the software ecosystem.

Core programming languages and runtimes 🤖

CPython • Himmelblau • LLVM • Node.js • Rustls

These projects define how software is written and executed. Improvements here flow downstream to entire ecosystems.

This group includes CPython, Node.js, LLVM, Rustls, and related tooling that shapes compilation, execution, and cryptography at scale.

Node.js: “GitHub SOSF trailblazed critical security knowledge for Open Source in the AI era.”

For example, improvements to CPython directly benefit millions of developers who rely on Python for application development, automation, and AI workloads. LLVM maintainers identified security improvements that complement existing investments and reduce risk across toolchains used throughout the industry.

When language runtimes improve their security posture, everything built on top of them inherits that resilience.

Python: “This program made it possible to enhance Python’s security, directly benefitting millions of developers.”

Web, networking, and core infrastructure libraries 📚

Apache APISIX • curl • evcc • kgateway • Netty • quic-go • urllib3 • Vapor

These projects form the connective tissue of the internet. They handle HTTP, TLS, APIs, and network communication that nearly every application depends on.

This group includes curl, urllib3, Netty, Apache APISIX, quic-go, and related libraries that sit on the hot path of modern software.

curl: “The program brings together security best practices in a concise, actionable form to give us assurance we’re on the right track.”

Build systems, CI/CD, and release tooling 🧰

Apache Airflow • Babel • Foundry • Gitoxide • GoReleaser • Jenkins • Jupyter Docker Stacks • node-lru-cache • oapi-codegen • PyPI / Warehouse • rimraf • webpack

Compromising build tooling compromises the entire supply chain. These projects influence how software is built, tested, packaged, and shipped.

Session 3 included projects such as Jenkins, Apache Airflow, GoReleaser, PyPI Warehouse, webpack, and related automation and release infrastructure.

Maintainers in this category focused on securing workflows that often run with elevated privileges and broad access. Improvements here help prevent tampering before software ever reaches users.

webpack: “We’ve greatly enhanced our security to protect web applications against threats.”

Data science, scientific computing, and AI foundations 📊

ACI.dev • ArviZ • CocoIndex • OpenBB Platform • OpenMetadata • OpenSearch • pandas • PyMC • SciPy • TraceRoot

These projects sit at the core of modern data analysis, research, and AI development. They are increasingly embedded in production systems as well as research pipelines.

Projects such as pandas, SciPy, PyMC, ArviZ, and OpenSearch participated in Session 3. Maintainers expanded security coverage across large and complex codebases, often moving from limited scanning to continuous checks on every commit and release.

Many of these projects also engaged deeply with AI-related security topics, reflecting their growing role in AI workflows.

SciPy: “The program took us from 0 to security scans on every line of code, on every commit, and on every release.”

Developer tools and productivity utilities ⚒️

AssertJ • ArduPilot • AsyncAPI Initiative • Bevy • calibre • DIGIT • fabric.js • ImageMagick • jQuery • jsoup • Mastodon • Mermaid • Mockoon • p5.js • python-benedict • React Starter Kit • Selenium • Sphinx • Spyder • ssh_config • Thunderbird for Android • Two.js • xyflow • Yii framework

These projects shape the day-to-day experience of writing, testing, and maintaining software.

The group includes tools such as Selenium, Sphinx, ImageMagick, calibre, Spyder, and other widely used utilities that appear throughout development and testing environments.

Improving security here reduces the risk that developer tooling becomes an unexpected attack vector, especially in automated or shared environments.

Mermaid: “We’re not just well equipped for security; we’re equipped to lift others up with the same knowledge.”

Identity, secrets, and security frameworks 🔒

external-secrets • Helmet.js • Keycloak • Keyshade • Oauth2 (Ruby) • varlock • WebAuthn (Go)

These projects form the backbone of authentication, authorization, secrets management, and secure configuration.

Session 3 participants included projects such as Keycloak, external-secrets, oauth2 libraries, WebAuthn tooling, and related security frameworks.

Maintainers in this group often reported shifting from reactive fixes to systematic threat modeling and long-term security planning, improving trust for every system that depends on them.

Keyshade: “The GitHub SOSF was invaluable, helping us strengthen our security approach and making us more confident and effective organization-wide.”

Security as shared infrastructure

One of the most durable outcomes of the program was a shift in mindset.

Maintainers moved security from a stretch goal to a core requirement. They shifted from reactive patching to proactive design, and from isolated work to shared practice. Many are now publishing playbooks, sharing incident response exercises, and passing lessons on to their contributor communities.

That is how security scales: one-to-many.

What’s next: Help us make open source more secure 

Securing open source is basic maintenance for the internet. By giving 67 heavily used projects real funding, three focused weeks, and direct help, we watched maintainers ship fixes that now protect millions of builds a day. This training, taught by the GitHub Security Lab and top cybersecurity experts, allows us to go beyond one-on-one education and enable one-to-many impact. 

For example, many maintainers are working to make their playbooks public. The incident-response plans they rehearsed are forkable. The signed releases they now ship flow downstream to every package manager and CI pipeline that depends on them.

Join us in this mission to secure the software supply chain at scale. 

  • Projects and maintainers: Apply now to the GitHub Secure Open Source Fund and help make open source safer for everyone. Session 4 begins April 2026. If you write code, rely on open source, or want the systems you depend on to remain trustworthy, we encourage you to apply.
  • Funding and Ecosystem Partners: Become a Funding or Ecosystem Partner and support a more secure open source future. Join us on this mission to secure the software supply chain at scale!

Thank you to all of our partners

We couldn’t do this without our incredible network of partners. Together, we are helping secure the open source ecosystem for everyone! 

Funding Partners: Alfred P. Sloan Foundation, American Express, Chainguard, Datadog, Herodevs, Kraken, Mayfield, Microsoft, Shopify, Stripe, Superbloom, Vercel, Zerodha, 1Password


Ecosystem Partners: Atlantic Council, Ecosyste.ms, CURIOSS, Digital Data Design Institute Lab for Innovation Science, Digital Infrastructure Insights Fund, Microsoft for Startups, Mozilla, OpenForum Europe, Open Source Collective, OpenUK, Open Technology Fund, OpenSSF, Open Source Initiative, OpenJS Foundation, University of California, OWASP, Santa Cruz OSPO, Sovereign Tech Agency, SustainOSS


Automate repository tasks with GitHub Agentic Workflows https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/ Fri, 13 Feb 2026 14:00:00 +0000. Discover GitHub Agentic Workflows, now in technical preview. Build automations using coding agents in GitHub Actions to handle triage, documentation, code quality, and more.


Imagine visiting your repository in the morning and feeling calm because you see:

  • Issues triaged and labeled
  • CI failures investigated, with proposed fixes
  • Documentation updated to reflect recent code changes
  • Two new pull requests that improve testing, awaiting your review

All of it visible, inspectable, and operating within the boundaries you’ve defined.

That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.

At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into Actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.

GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.

“Home Assistant has thousands of open issues. No human can track what’s trending or which problems affect the most users. I’ve built GitHub Agentic Workflows that analyze issues and surface what matters: that’s the kind of judgment amplification that actually helps maintainers.” (Franck Nijhof, lead of the Home Assistant project, one of the top projects on GitHub by contributor count)

Agentic workflows also allow maintainers and community to experiment with repository automation together. “Adopting GitHub’s Agentic Workflows has lowered the barrier for experimentation with AI tooling, making it significantly easier for staff, maintainers and newcomers alike. Inside of CNCF, we are benefiting from improved documentation automation along with improving team reporting across the organization. This isn’t just a technical upgrade for our community, it’s part of a cultural shift that empowers our ecosystem to innovate faster with AI and agentic tooling.” (Chris Aniszczyk, CTO of the Cloud Native Computing Foundation (CNCF), whose mission is to make cloud native computing ubiquitous across the world)

Enterprises are seeing similar benefits at scale. “With GitHub Agentic Workflows, we’re able to expand how we apply agents to real engineering work at scale, including changes that span multiple repositories. The flexibility and built-in controls give us confidence to leverage Agentic Workflows across complex systems at Carvana.” (Alex Devkar, Senior Vice President, Engineering and Analytics, at Carvana)

AI repository automation: A revolution through simplicity 

The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.

This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.

The use of GitHub Agentic Workflows makes entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish with traditional YAML workflows alone:

  1. Continuous triage: automatically summarize, label, and route new issues.
  2. Continuous documentation: keep READMEs and documentation aligned with code changes.
  3. Continuous code simplification: repeatedly identify code improvements and open pull requests for them.
  4. Continuous test improvement: assess test coverage and add high-value tests.
  5. Continuous quality hygiene: proactively investigate CI failures and propose targeted fixes.
  6. Continuous reporting: create regular reports on repository health, activity, and trends.

These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.

GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.

In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.

Let’s talk guardrails and control 

Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.

Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.

Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.

One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.

A simple example: A daily repo report  

Let’s look at an agentic workflow which creates a daily status report for repository maintainers.

In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:

Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md

The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.

This will create two files in .github/workflows:

  • daily-repo-status.md (the agentic workflow)  
  • daily-repo-status.lock.yml (the corresponding agentic workflow lock file, which is executed by GitHub Actions) 

The file daily-repo-status.md will look like this: 

--- 
on: 
  schedule: daily 
 
permissions: 
  contents: read 
  issues: read 
  pull-requests: read 
 
safe-outputs: 
  create-issue: 
    title-prefix: "[repo status] " 
    labels: [report] 
 
tools: 
  github: 
---  
 
# Daily Repo Status Report 
 
Create a daily status report for maintainers. 
 
Include 
- Recent repository activity (issues, PRs, discussions, releases, code changes) 
- Progress tracking, goal reminders and highlights 
- Project status and recommendations 
- Actionable next steps for maintainers 
 
Keep it concise and link to the relevant issues/PRs.

This file has two parts: 

  1. Frontmatter (YAML between --- markers) for configuration 
  2. Markdown instructions that describe the job in natural language

The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.

If you prefer, you can add the workflow to your repository manually: 

  1. Create the workflow: Add  daily-repo-status.md with the frontmatter and instructions.
  2. Create the lock file:  
    • gh extension install github/gh-aw  
    • gh aw compile
  3. Commit and push: Commit and push files to your repository.
  4. Add any required secrets: For example, add a token or API key for your coding agent.

Once you add this workflow to your repository, it will run automatically, or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:

[Screenshot: a GitHub issue titled “Daily Repo Report - February 9, 2026” with key highlights: 2 new releases, 1,737 commits from 16 contributors, 100 issues closed and 190 opened, 50 of 93 opened pull requests merged, and 5 code quality issues opened.]

What you can build with GitHub Agentic Workflows 

If you’re looking for further inspiration, Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.

A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.

If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.

Uses for agent-assisted repository automation often depend on particular repos and development priorities. Your team’s approach to software development will differ from those of other teams. It pays to be imaginative about how you can use agentic automation to augment your team, your repositories, and your goals.

Practical guidance for teams 

Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.

You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.
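
For example, a more specific workflow in the same format as the daily report above might look like the sketch below. The frontmatter and wording here are illustrative, not a tested workflow; validate anything like it with gh aw compile before committing it.

---
on:
  schedule: daily

permissions:
  contents: read

safe-outputs:
  create-issue:
    title-prefix: "[docs audit] "
---

# Documentation Style Audit

Review the Markdown files under docs/. Flag any technical explanations or error messages that are not written in a style suitable for an audience of age 10 or above, and open a single issue listing them with suggested rewrites. Do not modify any files.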

GitHub Agentic Workflows use coding agents at runtime, which incur billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details. Here are a few more tips to help teams get value quickly:

  • Start with low-risk outputs such as comments, drafts, or reports before enabling pull request creation.
  • For coding, start with goal-oriented improvements such as routine refactoring, test coverage, or code simplification rather than feature work.
  • For reports, use instructions that are specific about what “good” looks like, including format, tone, links, and when to stop.
  • Agentic workflows create an agent-only sub-loop that can run autonomously because agents act under defined terms. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.
  • Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.

Continuous AI works best in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. Instead, use them to extend continuous automation to the more subjective, repetitive tasks that traditional CI/CD struggles to express.

Build the future of automation with us   

GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.

We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!

Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next).

Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers. https://github.blog/open-source/maintainers/welcome-to-the-eternal-september-of-open-source-heres-what-we-plan-to-do-for-maintainers/ Thu, 12 Feb 2026 20:14:11 +0000. Open source is hitting an “Eternal September.” As contribution friction drops, maintainers are adapting with new trust signals, triage approaches, and community-led solutions.


Open collaboration runs on trust. For a long time, that trust was protected by a natural, if imperfect, filter: friction.

If you were on Usenet in 1993, you’ll remember that every September a flood of new university students would arrive online, unfamiliar with the norms, and the community would patiently onboard them. Then mainstream dial-up ISPs became popular and a continuous influx of new users came online. It became the September that never ended.

Today, open source is experiencing its own Eternal September. This time, it’s not just new users. It’s the sheer volume of contributions.

When the cost to contribute drops

In the era of mailing lists, contributing to open source required real effort. You had to subscribe, lurk, understand the culture, format a patch correctly, and explain why it mattered. The effort didn’t guarantee quality, but it filtered for engagement. Most contributions came from someone who had genuinely engaged with the project.

It also excluded people. The barrier to entry was high. Many projects worked hard to lower it in order to make open source more welcoming.

A major shift came with the pull request. Hosting projects on GitHub, using pull requests, and labeling “Good First Issues” reduced the friction needed to contribute. Communities grew and contributions became more accessible.

That was a good thing.

But friction is a balancing act. Too much keeps people and their ideas out; too little can strain the trust open source depends on.

Today, a pull request can be generated in seconds. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped but the cost to review has not.

It’s worth saying: most contributors are acting in good faith. Many want to help projects they care about. Others are motivated by learning, visibility, or the career benefits of contributing to widely used open source. Those incentives aren’t new and they aren’t wrong.

The challenge is what happens when low-quality contributions arrive at scale. When volume accelerates faster than review capacity, even well-intentioned submissions can overwhelm maintainers. And when that happens, trust, the foundation of open collaboration, starts to strain.

The new scale of noise

It is tempting to frame “low-quality contributions” or “AI slop” as a uniquely recent phenomenon. It isn’t. Maintainers have always dealt with noisy inbound.

  • The Linux kernel operates under a “web of trust” philosophy; it formalized its SubmittingPatches guide and introduced the Developer Certificate of Origin (DCO) in 2004 for a reason.
  • Mozilla and GNOME built formal triage systems around the reality that most incoming bug reports needed filtering before maintainers invested deeper time.
  • Automated scanners: Long before GenAI, maintainers dealt with waves of automated security and code quality reports from commercial and open source scanning tools.

The question from maintainers has often been the same: “Are you really trying to help me, or just help yourself?”

Just because a tool—whether a static analyzer or an LLM—makes it easy to generate a report or a fix doesn’t mean that contribution is valuable to the project. The ease of creation often adds a burden for the maintainer because there is an imbalance of benefit. The contributor maybe gets the credit (or the CVE, or the visibility), while the maintainer gets the maintenance burden.

Maintainers are feeling that directly. For example:

  • curl ended its bug bounty program after AI-generated security reports exploded, each taking hours to validate.
  • Projects like Ghostty are moving to invitation-only contribution models, requiring discussion before accepting code contributions.
  • Multiple projects are adopting explicit rules about AI-generated contributions.

These are rational responses to an imbalance.

What we’re doing at GitHub

At GitHub, we aren’t just watching this happen. Maintainer sustainability is foundational to open source, and foundational to us. As the home of open source, we have a responsibility to help you manage what comes through the door.

We are approaching this from multiple angles: shipping immediate relief now, while building toward longer-term, systemic improvements. Some of this is about tooling. Some is about creating clearer signals so maintainers can decide where to spend their limited time.

Features we’ve already shipped

  • Repo-level pull request controls: Gives maintainers the option to limit pull request creation to collaborators or disable pull requests entirely. While the introduction of the pull request was fundamental to the growth of open source, maintainers should have the tools they need to manage their projects.
  • Pinned comments on issues: You can now pin a comment to the top of an issue from the comment menu.
  • Banners to reduce comment noise: Experience fewer unnecessary notifications with a banner that encourages people to react or subscribe instead of leaving noise like “+1” or “same here.”
  • Pull request performance improvements: Pull request diffs have been optimized for greater responsiveness and large pull requests in the new files changed experience respond up to 67% faster.
  • Faster issue navigation: Easier bug triage thanks to significantly improved speeds when browsing and navigating issues as a maintainer.
  • Temporary interaction limits: You can temporarily enforce a period of limited activity for certain users on a public repository.

Plus, coming soon: pull request deletion from the UI. This will let maintainers remove spam or abusive pull requests so repositories can stay more manageable.

These improvements focus on reducing review overhead.

Exploring next steps

We know that walls don’t build communities. As we explore next steps, our focus is on giving maintainers more control while helping protect what makes open source communities work.

Some of the directions we’re exploring in consultation with maintainers include:

  • Criteria-based gating: Requiring a linked issue before a pull request can be opened, or defining rules that contributions must meet before submission.
  • Improved triage tools: Potentially leveraging automated triage to evaluate contributions against a project’s own guidelines (like CONTRIBUTING.md) and surface which pull requests should get your attention first.

These tools are meant to support decision-making, not replace it. Maintainers should always remain in control.

We are also aware of tradeoffs. Restrictions can disproportionately affect first-time contributors acting in good faith. That’s why these controls are optional and configurable.

The community is building ladders

One of the things I love most about open source is that when the community hits a wall, people build ladders. We’re seeing a lot of that right now.

Maintainers across the ecosystem are experimenting with different approaches. Some projects have moved to invitation-only workflows. Others are building custom GitHub Actions for contributor triage and reputation scoring.

Mitchell Hashimoto’s Vouch project is an interesting example. It implements an explicit trust management system where contributors must be vouched for by trusted maintainers before they can participate. It’s experimental and some aspects will be debated, but it fits a longer lineage, from Advogato’s trust metric to Drupal’s credit system to the Linux kernel’s Signed-off-by chain.

At the same time, many communities are investing heavily in education and onboarding to widen who can contribute while setting clearer expectations. The Python community, for example, emphasizes contributor guides, mentorship, and clearly labeled entry points. Kubernetes pairs strong governance with extensive documentation and contributor education, helping new contributors understand not just how to contribute, but what a useful contribution looks like.

These approaches aren’t mutually exclusive. Education helps good-faith contributors succeed. Guardrails help maintainers manage scale.

There is no single correct solution. That’s why we are excited to see maintainers building tools that match their project’s specific values. The tools communities build around the platform often become the proving ground for what might eventually become features. So we’re paying close attention.

Building community, not just walls

We also need to talk about incentives. If we only build blocks and bans, we create a fortress, not a bazaar.

Right now, the concept of “contribution” on GitHub still leans heavily toward code authorship. In WordPress, contributors receive manually written “props”: credit given not just for code, but for writing, reproduction steps, user testing, and community support. This recognizes the many forms of contribution that move a project forward.

We want to explore how GitHub can better surface and celebrate those contributions. Someone who has consistently triaged issues or merged documentation PRs has proven they understand your project’s voice. These are trust signals we should be surfacing to help you make decisions faster.

Tell us what you need

We’ve opened a community discussion to gather feedback on the directions we’re exploring: Exploring Solutions to Tackle Low-Quality Contributions on GitHub.

We want to hear from you. Share what is working for your projects, where the gaps are, and what would meaningfully improve your experience maintaining open source.

Open source’s Eternal September is a sign of something worth celebrating: more people want to participate than ever before. The volume of contributions is only going to grow — and that’s a good thing. But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same. Not by raising the drawbridge, but by giving maintainers better signals, better tools, and better ways to channel all that energy into work that moves their projects forward.

Let’s build that together.

GitHub availability report: January 2026 https://github.blog/news-insights/company-news/github-availability-report-january-2026/ Wed, 11 Feb 2026 23:12:34 +0000

In January, we experienced two incidents that resulted in degraded performance across GitHub services.

January 13 09:38 UTC (lasting 46 minutes)

On January 13, 2026, from 09:25 to 10:11 UTC, GitHub Copilot experienced a service outage with error rates averaging 18% and peaking at 100%. This impacted chat features across Copilot Chat, VS Code, JetBrains IDEs, and other dependent products. The incident was triggered by a configuration error introduced during a model update and was initially mitigated by rolling back the change. A secondary recovery phase extended until 10:46 UTC because the upstream provider OpenAI was experiencing degraded availability for the GPT‑4.1 model.

We have completed a detailed root‑cause review and are implementing stronger monitors, improved test environments, and tighter configuration safeguards to prevent recurrence and accelerate detection and mitigation of future issues.

January 15 16:56 UTC (lasting 1 hour and 40 minutes)

On January 15, 2026, between 16:40 UTC and 18:20 UTC, we observed increased latency and timeouts across issues, pull requests, notifications, actions, repositories, the API, account login, and an internal service, Alive, that powers live updates on GitHub. On average, 1.8% of combined web and API requests failed, peaking briefly at 10% early in the incident. The majority of impact was observed for unauthenticated users, but authenticated users were impacted as well.

This was caused by an infrastructure update to some of our data stores. Upgrading this infrastructure to a new major version resulted in unexpected resource contention, leading to distributed impact in the form of slow queries and increased timeouts across services that depend on these datasets. We mitigated this by rolling back to the previous stable version.

We are working to improve our validation process for these types of upgrades to catch issues that only occur under high load before full release, improve detection time, and reduce mitigation times in the future.

Looking ahead 

Please note that the incidents that occurred on February 9, 2026, will be included in next month’s February availability report. In the meantime, you can refer to the incident reports on the GitHub Status site for more details.

Follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the engineering section on the GitHub Blog.

Continuous AI in practice: What developers can automate today with agentic CI https://github.blog/ai-and-ml/generative-ai/continuous-ai-in-practice-what-developers-can-automate-today-with-agentic-ci/ Thu, 05 Feb 2026 17:00:00 +0000 https://github.blog/?p=93626 Think of Continuous AI as background agents that operate in your repository for tasks that require reasoning.

Software engineering has always included work that’s repetitive, necessary, and historically difficult to automate. This isn’t because it lacks value, but because it resists deterministic rules. 

Continuous integration (CI) solved part of this by handling tests, builds, formatting, and static analysis—anything that can be described with deterministic rules. CI excels when correctness can be expressed unambiguously: a test passes or fails, a build succeeds or doesn’t, a rule is violated or isn’t. 

But CI is intentionally limited to problems that can be reduced to heuristics and rules. 

For most teams, the hardest work isn’t writing code. It’s everything that requires judgment around that code: reviewing changes, keeping documentation accurate, managing dependencies, tracking regressions, maintaining tests, monitoring quality, and responding to issues that only surface after code ships. 

But a lot of engineering effort goes into work that requires interpretation, synthesis, and context rather than deterministic validation. And an increasing share of engineering tasks falls into a category CI was never designed to handle: work that depends on understanding intent. 

“Any task that requires judgment goes beyond heuristics,” says Idan Gazit, head of GitHub Next, which works on research and development initiatives.

Any time something can’t be expressed as a rule or a flow chart is a place where AI becomes incredibly helpful.

Idan Gazit, head of GitHub Next

This is why GitHub Next has been exploring a new pattern: Continuous AI, or background agents that operate in your repository the way CI jobs do, but only for tasks that require reasoning instead of rules.

Why CI isn’t enough anymore

CI isn’t failing. It’s doing exactly what it was designed to do. 

CI is designed for binary outcomes. Tests pass or fail. Builds succeed or don’t. Linters flag well-defined violations. That works well for rule-based automation.

But many of the hardest and most time-consuming parts of engineering are judgment-heavy and context-dependent. 

Consider these scenarios: 

  • A docstring says one thing, but the implementation says another.
  • Text passes accessibility linting but is still confusing to users.
  • A dependency adds a new flag, altering behavior without a major version bump.
  • A regex is compiled inside a loop, tanking performance in subtle ways.
  • UI behavior changes are only visible when interacting with the product.

These problems are about whether intent still holds. 

“The first era of AI for code was about code generation,” Idan explains. “The second era involves cognition, and taking the cognitively heavy chores off of developers.”

This is the gap Continuous AI fills: not more automation, but a different class of automation. CI handles deterministic work. Continuous AI applies where correctness depends on reasoning, interpretation, and intent. 

What Continuous AI actually means

Continuous AI is not a new product or CI replacement. Traditional CI remains essential. 

Continuous AI is a pattern:

Continuous AI = natural-language rules + agentic reasoning, executed continuously inside your repository.

In practice, Continuous AI means expressing in plain language what should be true about your code, especially when that expectation cannot be reduced to rules or heuristics. An agent then evaluates the repository and produces artifacts a developer can review: suggested patches, issues, discussions, or insights.

Developers rarely author agentic workflows in a single pass. In practice, they collaborate with an agent to refine intent, add constraints, and define acceptable outputs. The workflow emerges through iteration, not a single sentence. 

For example: 

  • “Check whether documented behavior matches implementation, explain any mismatches, and propose a concrete fix.”
  • “Generate a weekly report summarizing project activity, emerging bug trends, and areas of increased churn.”
  • “Flag performance regressions in critical paths.”
  • “Detect semantic regressions in user flows.”

These workflows are not defined by brevity. They combine intent, constraints, and permitted outputs to express expectations that would be awkward or impossible to encode as deterministic rules. 

“In the future, it’s not about agents running in your repositories,” Idan says. “It’s about being able to presume you can cheaply define agents for anything you want off your plate permanently.”

Think about what your work looks like when you can delegate more of it to AI, and what parts of your work you want to retain: your judgment, your taste.

Idan Gazit, head of GitHub Next

Guardrails by design: Permissions and Safe Outputs

In our work, we define agentic workflows with safety as a first principle. By default, agents operate with read-only access to repositories. They cannot create issues, open pull requests, or modify content unless explicitly permitted. 

We call this Safe Outputs, which provides a deterministic contract for what an agent is allowed to do. When defining a workflow, developers specify exactly which artifacts an agent may produce, such as opening a pull request or filing an issue, and under what constraints. 

Anything outside those boundaries is forbidden. 

This model assumes agents can fail or behave unexpectedly. Outputs are sanitized, permissions are explicit, and all activity is logged and auditable. The blast radius is deterministic. 

This isn’t “AI taking over software development.” It’s AI operating within guardrails developers explicitly define. 
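
To make that contract concrete, here’s a minimal sketch in the same Markdown-plus-frontmatter format shown later in this post. The keys mirror the example below; the instruction text is illustrative, not a canonical recipe.

---
on: daily
permissions: read
safe-outputs:
  create-issue:
    title-prefix: "[audit] "
---
Scan the repository for comments that contradict the code next to them.
If you find any, open a single issue listing each mismatch.
Do not modify any files.

With permissions set to read and only create-issue permitted, the worst this workflow can do is file one clearly labeled issue. That is the deterministic blast radius in practice.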

Why natural language complements YAML

As we’ve developed this, we’ve heard a common question: why not just extend CI with more rules? 

When a problem can be expressed deterministically, extending CI is exactly the right approach. YAML, schemas, and heuristics remain the correct tools for those jobs. 

But many expectations cannot be reduced to rules without losing meaning. 

Idan puts it simply: “There’s a larger class of chores and tasks we can’t express in heuristics.”

A rule like “whenever documentation and code diverge, identify and fix it” cannot be expressed in a regex or schema. It requires understanding semantics and intent. A natural-language instruction can express that expectation clearly enough for an agent to reason over it. 

Natural language doesn’t replace YAML, but instead complements it. CI remains the foundation. Continuous AI expands automation into work CI was never designed to cover. 

Developers stay in the loop, by design

Agentic workflows don’t make autonomous commits. Instead, they can create the same kinds of artifacts developers would (pull requests, issues, comments, or discussions) depending on what the workflow is permitted to do.

Pull requests remain the most common outputs because they align with how developers already review and reason about change. 

“The PR is the existing noun where developers expect to review work,” Idan says. “It’s the checkpoint everyone rallies around.”

That means:

  • Agents don’t merge code
  • Developers retain full control
  • Everything is visible and reviewable

Developer judgment remains the final authority. Continuous AI helps scale that judgment across a codebase. 

How GitHub Next is experimenting with these ideas

The GitHub Next prototype (you can find the repository at gh aw) uses a deliberately simple pattern:

  1. Write an agentic workflow
  2. Compile it into a GitHub Action
  3. Push it
  4. Let an agent run on any GitHub Actions trigger (pull requests, pushes, issues, comments, or schedules) 

Nothing is hidden; everything is transparent and visible.

“You want an action to look for style violations like misplaced brackets, that’s heuristics,” Idan explains. “But when you want deeper intent checks, you need AI.” 

What Continuous AI can automate today

These aren’t theoretical examples. GitHub Next has tested these patterns in real repositories.

1. Fix mismatches between documentation and behavior

This is one of the hardest problems for CI because it requires understanding intent.

An agentic workflow can:

  • Read a function’s docstring
  • Compare it to the implementation
  • Detect mismatches
  • Suggest updates to either the code or the docs
  • Open a pull request

Idan calls this one of the most meaningful categories of work Continuous AI can address: “You don’t want to worry every time you ship code if the documentation is still right. That wasn’t possible to automate before AI.”
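
As a rough sketch, that workflow might be expressed like this, in the same format used later in this post. We’re assuming a create-pull-request safe output here; verify the exact key names against the gh aw documentation before relying on them.

---
on: push
permissions: read
safe-outputs:
  create-pull-request: # assumed key; check the gh aw docs
---
For each function changed in this push, compare its docstring to its
implementation. If they disagree, explain the mismatch and open a pull
request that updates whichever side is wrong, with your reasoning in
the pull request description.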

2. Generate ongoing project reports with reasoning

Maintainers and managers spend significant time answering the same questions repeatedly: What changed yesterday? Are bugs trending up or down? Which parts of the codebase are most active? 

Agentic workflows can generate recurring reports that pull from multiple data sources (issues, pull requests, commits, and CI results), and apply reasoning on top. 

For example, an agent can: 

  • Summarize daily or weekly activity 
  • Highlight emerging bug trends
  • Correlate recent changes with test failures
  • Surface areas of increased churn

The value isn’t the report itself. It’s the synthesis across multiple data sources that would otherwise require manual analysis. 

3. Keep translations up to date automatically

Anyone who has worked with localized applications knows the pattern: Content changes in English, translations fall behind, and teams batch work late in the cycle (often right before a release).

An agent can:

  • Detect when English text changes
  • Re-generate translations for all languages
  • Open a single pull request containing the updates

The workflow becomes continuous, not episodic. Machine translations might not be perfect out of the box, but having a draft translation ready for review in a pull request makes it that much easier to engage help from professional translators or community contributors.

4. Detect dependency drift and undocumented changes

Dependencies often change behavior without changing major versions. New flags appear. Defaults shift. Help output evolves.

In one demo, an agent:

  • Installed dependencies
  • Inspected CLI help text
  • Diffed it against previous days
  • Found an undocumented flag
  • Filed an issue before maintainers even noticed

This requires semantic interpretation, not just diffs, which is why classical CI cannot handle it. 

“This is the first harbinger of the new phase of AI,” Idan says. “We’re moving from generation to reasoning.”

5. Automated test-coverage burn down

In one experiment:

  • Test coverage went from ~5% to near 100%
  • 1,400+ tests were written
  • Across 45 days
  • For roughly $80 worth of tokens

And because the agent produced small pull requests daily, developers reviewed changes incrementally.

6. Background performance improvements

Linters and analyzers don’t always catch performance pitfalls that depend on understanding the code’s intent.

Example: a regex constructed inside a function body, so it’s recompiled on every invocation.

An agent can:

  • Recognize the inefficiency
  • Rewrite the code to pre-compile the regex
  • Open a pull request with an explanation

Small things add up, especially in frequently called code paths.
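
Here’s a minimal TypeScript sketch of that pattern, showing the before and after of the kind of fix an agent might propose (the function and pattern are hypothetical):

// Before: the regex is constructed on every call, so it is
// recompiled each time this function runs in a hot path.
function extractTicketIdsSlow(line: string): string[] {
  const pattern = new RegExp("\\b[A-Z]{2,5}-\\d+\\b", "g");
  return line.match(pattern) ?? [];
}

// After: compile once at module load and reuse it. String.prototype.match
// with a global regex resets lastIndex, so sharing the instance is safe here.
const TICKET_ID_PATTERN = /\b[A-Z]{2,5}-\d+\b/g;

function extractTicketIds(line: string): string[] {
  return line.match(TICKET_ID_PATTERN) ?? [];
}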

7. Automated interaction testing (using agents as deterministic play-testers)

This was one of the more creative demos from Universe: using agents to play a simple platformer game thousands of times to detect UX regressions.

Strip away the game, and the pattern is widely useful:

  • Onboarding flows
  • Multi-step forms
  • Retry loops
  • Input validation
  • Accessibility patterns under interaction

Agents can simulate user behavior at scale and compare variants.

How to build your first agentic workflow

Developers don’t need a new CI system or separate infrastructure to try this. The GitHub Next prototype (gh aw) uses a simple pattern:

1. Write a natural-language rule in a Markdown file

For example:

---
on: daily
permissions: read
safe-outputs:
  create-issue:
    title-prefix: "[news] "
---
Analyze the recent activity in the repository and:
- create an upbeat daily status report about the activity
- provide an agentic task description to improve the project based on the activity.
Create an issue with the report.

2. Compile it into an action

gh aw compile daily-team-status

This generates a GitHub Actions workflow.

3. Review the YAML

Nothing is hidden. You can see exactly what the agent will do.

4. Push to your repository

The agentic workflow begins executing in response to repository events or on a schedule you define, just like any other action.

5. Review the issue it creates

Patterns to watch next

While still early, several trends are already emerging in developer workflows:

Pattern 1: Natural-language rules will become a part of automation

Developers will write short English rules that express intent:

  • “Keep translations current”
  • “Flag performance regressions”
  • “Warn on auth patterns that look unsafe”

Pattern 2: Repositories will begin hosting a fleet of small agents

Not one general agent, but many small ones, each responsible for one chore, one check, or one rule of thumb.

Pattern 3: Tests, docs, localization, and cleanup will shift into “continuous” mode

This mirrors the early CI movement: Not replacing developers, but changing when chores happen from “when someone remembers” to “every day.”

Pattern 4: Debuggability will win over complexity

Developers will adopt agentic patterns that are transparent, auditable, and diff-based—not opaque systems that act without visibility.

What developers should take away

“Custom agents for offline tasks, that’s what Continuous AI is,” Idan says. “Anything you couldn’t outsource before, you now can.”

More precisely: many judgment-heavy chores that were previously manual can now be made continuous.

This requires a mental shift, like moving from owning files to streaming music.

“You already had all the music,” Idan says. “But suddenly the player is helping you discover more.”

Start with one small workflow

Continuous AI is not an all-or-nothing paradigm. You don’t need to overhaul your pipeline. Start with something small:

  • Translate strings
  • Add missing tests
  • Check for docstring drift
  • Detect dependency changes
  • Flag subtle performance issues

Each of these is something agents can meaningfully assist with today.

Identify the recurring judgment-heavy tasks that quietly drain attention, and make those tasks continuous instead of episodic.

If CI automated rule-based work over the past decade, Continuous AI may do the same for select categories of judgment-based work, when applied deliberately and safely.

Explore Continuous AI Actions and frameworks >

Pick your agent: Use Claude and Codex on Agent HQ  https://github.blog/news-insights/company-news/pick-your-agent-use-claude-and-codex-on-agent-hq/ Wed, 04 Feb 2026 17:00:19 +0000 https://github.blog/?p=93566 Claude by Anthropic and OpenAI Codex are now available in public preview on GitHub and VS Code with a Copilot Pro+ or Copilot Enterprise subscription. Here's what you need to know and how to get started today.

Context switching equals friction in software development. Today, we’re removing some of that friction with the latest updates to Agent HQ, which lets you run coding agents from multiple providers directly inside GitHub and your editor, keeping context, history, and review attached to your work.

Copilot Pro+ and Copilot Enterprise users can now run multiple coding agents directly inside GitHub, GitHub Mobile, and Visual Studio Code (with Copilot CLI support coming soon). That means you can use agents like GitHub Copilot, Claude by Anthropic, and OpenAI Codex (both in public preview) today.

With Codex, Claude, and Copilot in Agent HQ, you can move from idea to implementation using different agents for different steps without switching tools or losing context. 

We’re bringing Claude into GitHub to meet developers where they are. With Agent HQ, Claude can commit code and comment on pull requests, enabling teams to iterate and ship faster and with more confidence. Our goal is to give developers the reasoning power they need, right where they need it.

Katelyn Lesse, Head of Platform, Anthropic

From faster code to better decisions 

Agent HQ also lets you compare how different agents approach the same problem. You can assign multiple agents to a task and see how Copilot, Claude, and Codex reason about tradeoffs and arrive at different solutions.  

In practice, this helps you surface issues earlier by using agents for different kinds of review:  

  • Architectural guardrails: Ask one or more agents to evaluate modularity and coupling, helping identify changes that could introduce unintended side effects. 
  • Logical pressure testing: Use another agent to hunt for edge cases, async pitfalls, or scale assumptions that could cause problems in production. 
  • Pragmatic implementation: Have a separate agent propose the smallest, backward-compatible change to keep the blast radius of a refactor low.

This way of working shifts your reviews and your thinking toward strategy over syntax. 

Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to GitHub and VS Code. Codex helps engineers work faster and with greater confidence—and with this integration, millions more developers can now use it directly in their primary workspace, extending the power of Codex everywhere code gets written.

Alexander Embiricos, OpenAI 

Why running agents on GitHub matters 

GitHub is already where code lives, collaboration happens, and decisions are reviewed, governed, and shipped. 

Making coding agents native to that workflow, rather than external tools, makes them even more useful at scale. Instead of copying and pasting context between tools, documents, and threads, all discussion and proposed changes stay attached to the repository itself. 

With Copilot, Claude, and Codex working directly in GitHub and VS Code, you can: 

  • Explore tradeoffs early: Run agents in parallel to surface competing approaches and edge cases before code hardens. 
  • Keep context attached to the work: Agents operate inside your repository, issues, and pull requests instead of starting from stateless prompts. 
  • Avoid new review processes: Agent-generated changes show up as draft pull requests and comments, reviewed the same way you’d review a teammate’s work. 

There are no new dashboards to learn, and no separate AI workflows to manage. Everything runs inside the environments you already use. 

Built for teams, not just individuals 

These workflows don’t just benefit individual developers. Agent HQ gives you org-wide visibility and systematic control over how AI interacts with your codebase: 

  • Agent controls: Manage access and security policies in one place, allowing enterprise admins to define which agents and models are permitted across the organization. 
  • Code quality checks: GitHub Code Quality (in public preview) extends Copilot’s security checks to evaluate the maintainability and reliability impact of changed code, helping ensure “LGTM” reflects long-term code health. 
  • Automated first-pass review: We have integrated a code review step directly into Copilot’s workflow, allowing Copilot to address initial problems before a developer ever sees the code. 
  • Impact metrics: Use the Copilot metrics dashboard (in public preview) to track usage and impact across your entire organization, providing clear traceability for agent-generated work. 
  • Security and auditability: Maintain full control with audit logging and enterprise-grade access management, ensuring agents work with your security posture instead of against it. 

This allows teams to adopt agent-based workflows without sacrificing code quality, accountability, or trust. 

More agents coming soon 

Access to Claude and Codex will soon expand to more Copilot subscription types. In the meantime, we’re actively working with partners, including Google, Cognition, and xAI to bring more specialized agents into GitHub, VS Code, and Copilot CLI workflows. 

Read the docs to get started >

What the fastest-growing tools reveal about how software is being built https://github.blog/news-insights/octoverse/what-the-fastest-growing-tools-reveal-about-how-software-is-being-built/ Tue, 03 Feb 2026 17:00:00 +0000 https://github.blog/?p=93551 What languages are growing fastest, and why? What about the projects that people are interested in the most? Where are new developers cutting their teeth? Let’s take a look at Octoverse data to find out.

In 2025, software development crossed a quiet threshold. In our latest Octoverse report, we found that the fastest-growing languages, tools, and open source projects on GitHub are no longer about shipping more code. Instead, they’re about reducing friction in a world where AI is helping developers build more, faster.

By looking at some of the areas of fastest growth over the past year, we can see how developers are adapting through: 

  • The programming languages that are growing most in AI-assisted development workflows.
  • The tools that win when speed and reproducibility matter.
  • The areas where new contributors are showing up (and what helps them stick).

Rather than catalog trends, we want to focus on what those signals mean for how software is being built today and what choices you might consider heading into 2026. 

The elephant in the room: TypeScript is the new #1

In August 2025, TypeScript became the most-used language on GitHub, overtaking Python and JavaScript for the first time. Over the past year, TypeScript added more than one million contributors, which was the largest absolute growth of any language on GitHub. 

A chart showing the top 10 programming languages on GitHub from 2023 to 2025. TypeScript rises to #1 in 2025, overtaking Python and JavaScript, which move to #2 and #3 respectively. Other top languages include Java, C#, PHP, Shell, C++, HCL, and Go. The chart tracks ranking changes over time on a dark background with colored lines representing each language.

Python also continued to grow rapidly, adding roughly 850,000 contributors (+48.78% YoY), while JavaScript grew more slowly (+24.79%, ~427,000 contributors). Together, TypeScript and Python significantly outpaced JavaScript in both total and percentage growth. 

This shift signals more than a preference change. Typed languages are increasingly becoming the default for new development, particularly as AI-assisted coding becomes routine. Why is that?

In practice, a significant portion of the failures teams encounter with AI-generated code surface as type mismatches, broken contracts, or incorrect assumptions between components. Stronger type systems act as early guardrails: they can help catch errors sooner, reduce review churn, and make AI-generated changes easier to reason about before code reaches production. 

If you’re going to be using AI in your software design, which more and more developers are doing on a daily basis, strongly typed languages are your friend.
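
As a small, hypothetical TypeScript illustration of that guardrail effect:

// A typed contract between components.
interface User {
  id: string;
  email: string;
}

function sendWelcomeEmail(user: User): void {
  console.log(`Sending welcome email to ${user.email}`);
}

// An AI-suggested call site with a subtly wrong shape would only fail
// at runtime in plain JavaScript. With types, the compiler rejects it
// before the change ever reaches review:
//
//   sendWelcomeEmail({ id: 42, name: "Ada" });
//   // Error: Type 'number' is not assignable to type 'string'.

sendWelcomeEmail({ id: "42", email: "ada@example.com" });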

Here’s what this means in practice: 

  • If you’re starting a new project today, TypeScript is increasingly becoming the default (especially for teams using AI in daily development).
  • If you’re introducing AI-assisted workflows into an existing JavaScript codebase, adding types may reduce friction more than switching models or tools.

Python is key for AI

Contributor counts show who is using a language. Repository data shows what that language is being used to build. 

When we look specifically at AI-focused repositories, Python stands apart. As of August 2025, nearly half of all new AI projects on GitHub were built primarily in Python. 

A chart listing the most commonly used programming languages in AI-tagged projects on GitHub in 2025. Python ranks first with 582,000 repositories (+50.7% year over year), followed by JavaScript with 88,000 (+24.8%), TypeScript with 86,000 (+77.9%), Shell with 9,000 (+324%), and C++ with 7,800 (+11%). The chart includes brief descriptions of each language’s role in AI development, displayed on a blue gradient background with green geometric ribbon graphics.

This matters because AI projects now account for a disproportionate share of open source momentum. Six of the ten fastest-growing open source projects by contributors in 2025 were directly focused on AI infrastructure or tooling.

A table listing the fastest-growing open source projects on GitHub in 2025 by contributors. The top ten are zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core. Growth rates range from 2,301% to 6,836%, with most projects marked as AI-focused. Displayed on a blue gradient background with the GitHub Octoverse ribbon graphic.

Python’s role here isn’t new, but it is evolving. The data suggests a shift from experimentation toward production-ready AI systems, with Python increasingly anchoring packaging, orchestration, and deployment rather than living only in notebooks. 

Moreover, Python is likely to continue growing in 2026 as AI gains further support and spawns additional projects.

Here’s what this means in practice:

  • Python remains the backbone of applied AI work from training and inference to orchestration.
  • Production-focused Python skills such as packaging, typing, CI, and containerization are becoming more important than exploratory scripting alone. 

A deeper look at the top open source projects

Looking across the fastest-growing projects, a clear pattern emerges: developers are optimizing for speed, control, and predictable outcomes. 

Many of the fastest-growing tools emphasize performance and minimalism. Projects like astral-sh/uv, a package and project manager, focus on dramatically faster Python package management. This reflects a growing intolerance for slow feedback loops and non-deterministic environments. 

Having just one of these projects could be an anomaly, but having multiple indicates a clear trend. This trend aligns closely with AI-assisted workflows where iteration speed and reproducibility directly impact developer productivity. 

Here’s what this means in practice: 

  • Fast installs and deterministic builds increasingly matter as much as feature depth.
  • Tools that reduce “works on my machine” moments are winning developer mindshare.

Where first-time open source contributors are showing up

As the developer population grows, understanding where first-time contributors show up (and why) becomes increasingly important. 

A chart showing the open source projects that attracted the most first-time contributors on GitHub in 2025. The top ten are microsoft/vscode, firstcontributions/first-contributions, home-assistant/core, stackblitz/bolt.new, flutter/flutter, zen-browser/desktop, is-a-dev/register, vllm-project/vllm, comfyanonymous/ComfyUI, and ollama/ollama. Displayed on a blue gradient background with green 3D ribbon graphics.

Projects like VS Code and First Contributions continued to top the list over the last year, reflecting both the scale of widely used tools and the persistent need for low-friction entry points into open source (notably, we define contributions as any content-generating activity on GitHub).

Despite this growth, basic project governance remains uneven across the ecosystem. README files are common, but contributor guides and codes of conduct are still relatively rare even as first-time contributions increase.

This gap represents one of the highest-leverage improvements maintainers and open source communities can make. The fact that most of the projects on this list have detailed documentation on what the project is and how to contribute shows the importance of this guidance.

Here’s what this means in practice: 

  • Clear documentation lowers the cost of contribution more than new features.
  • Contributor guides and codes of conduct can help convert curiosity into sustained participation.
  • Improving project hygiene is often the fastest way to grow a contributor base.

Putting it all together

Taken together, these trends point to a shift in what developers value and how they choose tools. 

AI is no longer a separate category of development. It’s shaping the languages teams use, which tools gain traction, and which projects attract contributors. 

Typed languages like TypeScript are becoming the default for reliability at scale, while Python remains central to AI-driven systems as they move from prototypes into production. 

Across the ecosystem, developers are rewarding tools that minimize friction with faster feedback loops, reproducible environments, and clearer contribution paths.

Developers and teams that optimize for speed, clarity, and reliability are shaping how software is being built.

As a reminder, you can check out the full 2025 Octoverse report for more information and make your own conclusions. There’s a lot of good data in there, and we’re just scratching the surface of what you can learn from it.

How to maximize GitHub Copilot’s agentic capabilities https://github.blog/ai-and-ml/github-copilot/how-to-maximize-github-copilots-agentic-capabilities/ Mon, 02 Feb 2026 17:00:00 +0000 https://github.blog/?p=93542 A senior engineer's guide to architecting and extending Copilot's real-world applications.

Modern engineering work rarely lives in a single file. Real systems evolve across years of incrementally layered decisions—some good, some accidental. A single feature request (“Add tagging to notes,” “Refactor the validation layer,” “Support a new consumer on our API”) often touches controllers, domain models, repositories, migrations, tests, documentation, and deployment strategy.

Copilot’s agentic capabilities don’t replace your judgment in these situations—they amplify it. When used well, Copilot becomes a partner in system design, refactoring, modernization, and multi-file coordination.

This guide focuses on architecture-aware, multi-step workflows used every day by staff engineers, but written to be accessible for earlier-career engineers who want to understand how senior engineers think—and how Copilot can accelerate their own growth.

It draws on four GitHub Skills exercises (linked below), and builds toward a complete, real-world scenario: extending a small modular Notes Service with a tagging subsystem, refactoring a validation layer, designing a safe migration, and modernizing tests.


Before you start

You’ll get the most out of this guide if you have:

  • GitHub Copilot with agent mode enabled
  • Some familiarity with service-layer architectures (Node, Python, Go—language doesn’t matter)
  • A copy of a GitHub Skills exercise template in your handle or organization (use the green “Copy Exercise” button)
  • A willingness to let Copilot propose solutions—and the judgment to inspect and challenge them

If you’re earlier in your career, don’t worry. Each section explains why these patterns matter and how to practice them safely.


Using Copilot for system design and decomposition (not just scaffolding)

Senior engineers rarely begin by writing code. They begin by identifying boundaries: domain logic, data access, interfaces, and how modules should interact.

Copilot agent mode can help by revealing structural issues and proposing architectures.

Prompt:

Analyze this service and propose a modular decomposition with domain, infrastructure, and interface layers.

Identify anti-patterns, coupling issues, and potential failure points.

You’ll typically get back:

  • Proposed module boundaries
  • Cross-layer coupling concerns
  • Async/transaction pitfalls
  • Duplication or tight weaving of responsibilities
  • Testability and observability implications

This transforms Copilot from an autocomplete tool into a design reviewer.

You can push further by asking it to compare architectures:

Compare hexagonal architecture vs. layered architecture for this codebase.

Recommend one based on the constraints here. Include tradeoffs.

Want to try it yourself? Use these proposals as starting points.

Building a modular service using agentic workflows

Once boundaries are defined, Copilot can coordinate changes across modules.

Prompt:

Implement the domain, controller, and repository layers as distinct modules.

Use dependency inversion to reduce coupling.

Document assumptions and contracts for each module.

Copilot will typically generate:

  • Domain model interfaces
  • Repository abstractions
  • Controller logic calling domain services
  • A short Markdown summary describing each module

For earlier-career engineers, this provides exposure to real engineering patterns. For senior engineers, it provides leverage and reduces boilerplate overhead.
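
For a feel of the output, here’s a compact TypeScript sketch of that shape. The names are illustrative, not what Copilot will emit verbatim:

// Domain layer: contracts the rest of the system depends on.
interface Note {
  id: string;
  title: string;
}

interface NotesRepository {
  create(input: { title: string }): Promise<Note>;
  get(id: string): Promise<Note | null>;
}

// The domain service depends on the abstraction, not a concrete database.
class NoteService {
  constructor(private readonly repo: NotesRepository) {}

  async createNote(title: string): Promise<Note> {
    if (!title.trim()) {
      throw new Error("Note title must not be empty");
    }
    return this.repo.create({ title });
  }
}

// Infrastructure layer: one concrete implementation, swappable in tests.
class InMemoryNotesRepository implements NotesRepository {
  private notes = new Map<string, Note>();
  private nextId = 1;

  async create(input: { title: string }): Promise<Note> {
    const note = { id: String(this.nextId++), title: input.title };
    this.notes.set(note.id, note);
    return note;
  }

  async get(id: string): Promise<Note | null> {
    return this.notes.get(id) ?? null;
  }
}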

Feature work with architectural awareness (example: tagging subsystem)

Adding a tagging subsystem is a deceptively simple request with meaningful architectural implications.

Even this single feature forces decisions across the system: 

  • Data modeling: embedded tags vs. normalized tables vs. many-to-many relationships
  • Search behavior: how tags affect indexing, filtering, and relevance
  • API contracts: whether tags are first-class resources or an implementation detail
  • Validation boundaries: where constraints and invariants are enforced
  • Migration and rollout: additive vs. breaking changes and rollback strategy

Before touching code, ask Copilot to map the impact.

Prompt:

Propose the architectural changes required to add a tagging subsystem.

Identify migration needs, cross-cutting concerns, caching or indexing implications, and potential regressions.

Copilot may identify:

  • Tag–note relationships (one-to-many or many-to-many)
  • Migration strategy
  • Impact to search logic
  • Required test updates
  • Changes in validation logic
  • Implications on external API consumers

This is the staff-level lens that Copilot can help junior developers adopt.

Then implement it:

Implement the tagging domain model, schema changes, repository updates, and controller logic.

Update tests and documentation. Show each change as a diff.

Example output (simplified)

Migration example:

ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]';

Domain model example:

export interface Tag {
  id: string;
  label: string;
}

export interface Note {
  id: string;
  title: string;
  body: string;
  tags: Tag[];
}

Controller update (partial):

await noteService.addTag(noteId, { label: req.body.label });

This is where agent mode shines: coordinating multiple files with consistent intent.
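
For context, the service method behind that controller call might look something like this sketch. It reuses the Tag interface from the domain model above; the repository methods are assumptions for illustration:

// Assumed repository surface for the tagging subsystem.
interface TagAwareNotesRepository {
  findTagByLabel(label: string): Promise<Tag | null>;
  createTag(label: string): Promise<Tag>;
  addTagToNote(noteId: string, tag: Tag): Promise<void>;
}

class NoteTaggingService {
  constructor(private readonly repo: TagAwareNotesRepository) {}

  // Enforce the label invariant in the domain layer so every caller
  // (REST controller, CLI, background job) gets the same rules.
  async addTag(noteId: string, input: { label: string }): Promise<Tag> {
    const label = input.label.trim().toLowerCase();
    if (!label) {
      throw new Error("Tag label must not be empty");
    }
    const tag =
      (await this.repo.findTagByLabel(label)) ??
      (await this.repo.createTag(label));
    await this.repo.addTagToNote(noteId, tag);
    return tag;
  }
}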

Schema migrations and safe rollout strategies

At senior levels, the hardest part isn’t writing SQL. It’s designing a change that is:

  • Backward compatible
  • Reversible
  • Safe under load
  • Transparent to dependent systems

Ask Copilot to reason about this:

Prompt:

Generate an additive, backward-compatible schema migration to support the tagging subsystem.

Describe the rollback plan, compatibility window, and expected impact to existing clients.

This forces Copilot to consider:

  • Non-breaking additive fields
  • Optional fields vs. required fields
  • Whether a dual-read or dual-write strategy is needed
  • Safe rollback procedures
  • API versioning implications
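
Here’s a sketch of what that additive pattern can look like in SQL, extending the simplified migration from earlier (table and column names are illustrative):

-- Forward migration: additive and backward compatible. Existing
-- clients that never read or write tags are unaffected.
ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]';

-- Optional backfill for rows created before the column existed,
-- run in batches to stay safe under load.
UPDATE notes SET tags = '[]' WHERE tags IS NULL;

-- Rollback plan: dropping the column restores the previous schema, at
-- the cost of discarding any tags written during the compatibility window.
ALTER TABLE notes DROP COLUMN tags;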

If you’re earlier in your career, this offers lessons on how safe migrations are designed. And if you’re more experienced, this gives you a repeatable workflow for multi-step schema evolution.

Advanced refactoring with agentic workflows

Let’s perform a real cross-module refactor: extracting validation out of controllers into a domain service.

Prompt:

Create a step-by-step refactor plan to extract validation logic into a domain service.

Identify affected modules and required test updates.

Copilot may output something like:

  1. Introduce domain validationService
  2. Move validation logic from controller to service
  3. Update controllers to use new service
  4. Update repository logic where validation assumptions leak
  5. Update domain tests
  6. Update integration tests

Execute in incremental steps

Prompt:

Execute steps 1–3 only. Stop before controller rewrites.

Provide detailed diffs and call out risky areas.

This is a low-blast-radius refactor, modeled directly in the IDE.

Modernizing test strategy

Instead of asking Copilot “write tests,” ask it to assess the entire suite.

Prompt:

Analyze the current test suite and identify systemic gaps.

Recommend a modernization plan including contract, integration, and domain-layer tests.

Then implement contract tests:

describe("NotesRepository contract", () => {
  test("create + fetch returns a fully hydrated note object", async () => {
    const note = await notesRepo.create({ title: "Test", body: "…" });
    const fetched = await notesRepo.get(note.id);

    expect(fetched).toMatchObject({ title: "Test" });
    expect(fetched.id).toBeDefined();
  });
});

This elevates testing into an architectural concern.

A complete end-to-end workflow

Bringing it all together, here’s a real sequence you might run with Copilot:

  1. Ask Copilot to analyze the existing architecture: identify hazards, modularization opportunities
  2. Define module boundaries: domain, repository, controller layers
  3. Add tagging subsystem: architectural assessment to implementation to tests to doc updates
  4. Create a backward-compatible migration: additive schema to rollback plan
  5. Perform a targeted refactor: validation layer extraction
  6. Modernize tests: contract + integration + domain tests

This workflow is architecturally realistic—and a model for how Copilot becomes a system-level collaborator.

What agent mode is not for

It’s important to clarify that agent mode is not ideal for:

  • Altering domain invariants without human review
  • Redesigning cross-service ownership boundaries
  • Replacing logic driven by institutional knowledge
  • Large sweeping rewrites across hundreds of files
  • Debugging deep runtime issues

Copilot should support your decision-making, not replace it.

Where to go next

Here’s where GitHub Skills comes in—not as “beginner content,” but as a set of guided, self-contained labs that reinforce the patterns above. 

Even senior engineers will benefit: These exercises are structured so you can reliably recreate complex workflows and test Copilot’s behavior in controlled environments.

Explore GitHub Skills >
