You know that feeling when a sensory trigger instantly pulls you back to a moment in your life? For me, it’s Icy Hot. One whiff and I’m back to 5 a.m. formation time in the army. My shoulders tense. My body remembers. It’s not logical. It’s just how memory works. We build strong associations between experiences and cues around them. Those patterns get encoded and guide our behavior long after the moment passes.
That same pattern is happening across the software ecosystem as AI becomes a default part of how we build. For example, we form associations between convenience and specific technologies. Those loops influence what developers reach for, what they choose to learn, and ultimately, which technologies gain momentum.
Octoverse 2025 data illustrates this in real time. And it’s not subtle.
In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever. That’s the headline. But the deeper story is what it signals: AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.

When a task or process goes smoothly, your brain remembers. Convenience captures attention. Reduced friction becomes a preference—and preferences at scale can shift ecosystems.
Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means.
When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead. The language adoption data bears out this behavioral shift, including a notable rise in shell scripting.
That shell scripting data point matters. We didn’t suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.
This is what Octoverse is really showing us: developer choice is shifting toward technologies that work best with the tools we’re already using.
There are concrete, technical reasons AI performs better with strongly typed languages.
Strongly typed languages give AI much clearer constraints. In JavaScript, a variable could be anything. In TypeScript, declaring x: string immediately eliminates all non-string operations. That constraint matters. Constraints help AI generate more reliable, contextually correct code. And developers respond to that reliability.
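To make that concrete, here’s a minimal TypeScript sketch (the names are invented for illustration, not drawn from the Octoverse data) showing how a declared type narrows what generated code can legally do:

```typescript
// With no annotation, `userId` could be anything, and a generated call
// like `userId.toFixed(2)` only fails at runtime.
// The annotation below turns that mistake into a compile-time error.
function formatUserId(userId: string): string {
  // userId.toFixed(2); // Error: Property 'toFixed' does not exist on type 'string'.
  return userId.trim().toUpperCase();
}

// The compiler also rejects calls that pass the wrong shape:
// formatUserId(42); // Error: Argument of type 'number' is not assignable to parameter of type 'string'.
formatUserId(" usr_123 "); // OK
```

The narrower the space of valid programs, the less room an AI suggestion has to drift.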
That effect compounds when you look at AI model integration across GitHub. Over 1.1 million public repositories now use LLM SDKs. This is mainstream adoption, not fringe experimentation. And it’s concentrating around the languages and frameworks that work best with AI.

AI tools are amplifying developer productivity in ways we haven’t seen before. The question is how to use them strategically. The teams getting the best results aren’t fighting the convenience loop. They’re designing their workflows to harness it while maintaining the architectural standards that matter.
Establish patterns before you generate. AI is fantastic at following established patterns, but struggles to invent them cleanly. If you define your first few endpoints or components with strong structure, Copilot will follow those patterns. Good foundations scale. Weak ones get amplified.
Use type systems as guardrails, not crutches. TypeScript reduces errors, but passing type checks isn’t the same as expressing correct business logic. Use types to bound the space of valid code, not as your primary correctness signal.
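Here’s a small, hypothetical TypeScript example of that gap (the pricing rule is invented): the code type-checks cleanly, yet the business logic is wrong, which is exactly the kind of error a type system won’t flag.

```typescript
interface Order {
  subtotal: number;        // dollars
  discountPercent: number; // 15 means "15% off"
}

// Compiles without complaint, but subtracts the raw percentage
// instead of applying it to the subtotal.
function totalDue(order: Order): number {
  return order.subtotal - order.discountPercent;
  // Intended: order.subtotal * (1 - order.discountPercent / 100)
}

// Only a test or a reviewer catches the difference:
// totalDue({ subtotal: 200, discountPercent: 15 }) returns 185, not the expected 170.
```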
Test AI-generated code harder, not less. There’s a temptation to trust AI output because it “looks right” and passes initial checks. Resist that. Don’t skip testing.
Recognize the velocity jump and prepare for its costs. AI-assisted development often produces a 20–30 percent increase in throughput. That’s a win. But higher throughput means architectural drift can accumulate faster without the right guardrails.
Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see.
Track what AI is generating, not just how much. The Copilot usage metrics dashboard (now in public preview for Enterprise) lets you see beyond acceptance rates. You can track daily and weekly active users, agent adoption percentages, lines of code added and deleted, and language and model usage patterns across your organization. The dashboard answers a critical question: how well are teams using AI?
Use these metrics to identify patterns. If you’re seeing high agent adoption but code quality issues in certain teams, that’s a signal those teams need better prompt engineering training or stricter review standards. If specific languages or models correlate with higher defect rates, that’s data you can act on. The API provides user-level granularity for deeper analysis, so you can build custom dashboards that track the metrics that matter most to your organization.
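If you want to go beyond the dashboard, a rough sketch of pulling org-level metrics yourself might look like the TypeScript below. It assumes the `GET /orgs/{org}/copilot/metrics` REST endpoint, a token with access to Copilot metrics, and simplified response fields, so treat the shapes as placeholders and check the current API documentation before relying on them.

```typescript
// Sketch: fetch daily Copilot metrics for an organization and print active users per day.
// Assumptions: Node 18+ (global fetch), GITHUB_TOKEN in the environment, and the
// org-level metrics endpoint; field names below are simplified placeholders.
const org = "your-org"; // hypothetical organization slug

async function printCopilotActivity(): Promise<void> {
  const res = await fetch(`https://api.github.com/orgs/${org}/copilot/metrics`, {
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) {
    throw new Error(`GitHub API returned ${res.status}`);
  }

  // Each entry represents one day of aggregated usage for the org.
  const days: Array<{ date: string; total_active_users?: number }> = await res.json();
  for (const day of days) {
    console.log(`${day.date}: ${day.total_active_users ?? "n/a"} active users`);
  }
}

printCopilotActivity().catch(console.error);
```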
Invest in architectural review capacity. As developers become more productive, senior engineering time becomes more valuable, not less. Someone must ensure the system remains coherent as more code lands faster.
Make architectural decisions explicit and accessible. AI learns from context. ADRs, READMEs, comments, and well-structured repos all help AI generate code aligned with your design principles.
The technology choices you make today are shaped by forces you may not notice: convenience, habit, AI-assisted flow, and how much friction each stack introduces.
💡 Pro tip: Look at the last three technology decisions you made. Language for a new project, framework for a feature, tool for your workflow. How much did AI tooling support factor into those choices? If the answer is “not much,” I’d bet it factored in more than you realized.
AI isn’t just changing how fast we code. It’s reshaping the ecosystem around which tools work best with which languages. Once those patterns set in, reversing them becomes difficult.
If you’re choosing technologies without considering AI compatibility, you’re setting yourself up for future friction. If you’re building languages or frameworks, AI support can’t be an afterthought.
Next time you start a project, notice which technologies feel “natural” to reach for. Notice when AI suggestions feel effortless and when they don’t. Those moments of friction and flow are encoding your future preferences right now.
Are you choosing your tools consciously, or are your tools choosing themselves through the path of least resistance?
We’re all forming our digital “Icy Hot” moments. The trick is being aware of them.
Looking to stay one step ahead? Read the latest Octoverse report and try the Copilot usage metrics dashboard.
The post How AI is reshaping developer choice (and Octoverse data proves it) appeared first on The GitHub Blog.
Over the years (decades), open source has grown and changed along with software development, evolving as the open source community becomes more global.
But with any growth comes pain points. In order for open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.
To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.
In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany.
What does this mean? It’s clear that open source is becoming more global than it was before. It also means that oftentimes, the majority of developers live outside the regions where the projects they’re working on originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now starting to become a reality for a greater number of projects.
Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.
One of the best ways to do this is through explicit communication maintained in areas like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support this community. Projects that don’t include these guidelines will have trouble scaling as the number of contributors increases across the globe. Those that do provide them will be more resilient, sustainable, and will provide an easier path to onboard new contributors.
AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.
However, it has also created a lot of noise, or what is called “AI slop”: a flood of low-quality—and oftentimes inaccurate—contributions that don’t add value to the project, or contributions that would require so much work to incorporate that it would be faster to implement the solution yourself.
This makes it harder than ever to maintain projects and make sure they continue moving forward in the intended direction. Auto-generated issues and pull requests increase volume without always increasing the quality of the project. As a result, maintainers need to spend more time reviewing contributions from developers with vastly variable levels of skill. In a lot of cases, the amount of time it takes to review the additional suggestions has risen faster than the number of maintainers.
Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.
This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help. There are also a number of open source AI projects specifically trying to address the AI slop issue. In addition, maintainers have been using AI defensively, using it to triage issues, detect duplicate issues, and handle simple maintenance like the labeling of issues. By helping to offload some of the grunt work, it gives maintainers more time to focus on the issues that require human intervention and decision making.
Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of the community infrastructure. To deal with this quantity of information, AI can’t be just a coding assistant. It needs to ease the pressure of being a maintainer and make that work more scalable.
On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover basics, such as contributing your first pull request to GitHub, shows that a lot of these new developers are very much in their infancy in terms of comfort with open source. There’s uncertainty about how to move forward and how to interact with the community. Not to mention challenges with repetitive onboarding questions and duplicate issues.
This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.
The way to address this is going to be less about having individuals serving as mentors—although that will still be important. It will be more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:
By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.
It can’t be denied that AI was a major theme—about 60% of the top-growing projects were AI focused. However, several had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

Just as the developer space is growing on a global scale, the same can be said about the projects that garner the most interest. These types of projects that support a global community and address their needs are going to continue to be popular and have the most support.
This just continues to reinforce how open source is really embracing being a global phenomenon as opposed to a local one.
Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.
For developers, this means that it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow—it’s how can you make that growth sustainable.
The post What to expect for open source in 2026 appeared first on The GitHub Blog.
Modern software is built on open source projects. In fact, you can trace almost any production system today, including AI, mobile, cloud, and embedded workloads, back to open source components. These components are the invisible infrastructure of software: the download that always works, the library you never question, the build step you haven’t thought about in years, if ever.
A few examples:
When these projects are secure, teams can adopt automation, AI‑enhanced tooling, and faster release cycles without adding risk or slowing down development. When they aren’t, the blast radius crosses project boundaries, propagating through registries, clouds, transitive dependencies, and production systems, including AI systems, that react far faster than traditional workflows.
Securing this layer is not only about preventing incidents; it’s about giving developers confidence that the systems they depend on—whether for model training, CI/CD, or core runtime behavior—are operating on hardened, trustworthy foundations. Open source is shared industrial infrastructure that deserves real investment and measurable outcomes.
That is the mission of the GitHub Secure Open Source Fund: to secure open source projects that underpin the digital supply chain, catalyze innovation, and are critical to the modern AI stack.
We do this by directly linking funding to verified security outcomes and by giving maintainers resources, hands‑on security training, and a security community where they can raise their highest‑risk concerns and get expert feedback.
A single production service can depend on hundreds or even thousands of transitive dependencies. As Log4Shell demonstrated, when one widely used project is compromised, the impact is rarely confined to a single application or company.
Investing in the security of widely used open source projects does three things at once:
This security work benefits everyone who writes, ships, or operates code, even if they never interact directly with the projects involved. That gap between who benefits and who invests is exactly what the GitHub Secure Open Source Fund was built to close. In Sessions 1 and 2, 71 projects made significant security improvements. In Session 3, 67 open source projects delivered concrete security improvements to reduce systemic risk across the software supply chain.
Real security results across all sessions:
Plus, in just the last 6 months:
Session 3 focused on improving security across the systems developers rely on every day. The projects below are grouped by the role they play in the software ecosystem.
CPython • Himmelblau • LLVM • Node.js • Rustls
These projects define how software is written and executed. Improvements here flow downstream to entire ecosystems.
This group includes CPython, Node.js, LLVM, Rustls, and related tooling that shapes compilation, execution, and cryptography at scale.

For example, improvements to CPython directly benefit millions of developers who rely on Python for application development, automation, and AI workloads. LLVM maintainers identified security improvements that complement existing investments and reduce risk across toolchains used throughout the industry.
When language runtimes improve their security posture, everything built on top of them inherits that resilience.

Apache APISIX • curl • evcc • kgateway • Netty • quic-go • urllib3 • Vapor
These projects form the connective tissue of the internet. They handle HTTP, TLS, APIs, and network communication that nearly every application depends on.
This group includes curl, urllib3, Netty, Apache APISIX, quic-go, and related libraries that sit on the hot path of modern software.

Apache Airflow • Babel • Foundry • Gitoxide • GoReleaser • Jenkins • Jupyter Docker Stacks • node-lru-cache • oapi-codegen • PyPI / Warehouse • rimraf • webpack
Compromising build tooling compromises the entire supply chain. These projects influence how software is built, tested, packaged, and shipped.
Session 3 included projects such as Jenkins, Apache Airflow, GoReleaser, PyPI Warehouse, webpack, and related automation and release infrastructure.
Maintainers in this category focused on securing workflows that often run with elevated privileges and broad access. Improvements here help prevent tampering before software ever reaches users.

ACI.dev • ArviZ • CocoIndex • OpenBB Platform • OpenMetadata • OpenSearch • pandas • PyMC • SciPy • TraceRoot
These projects sit at the core of modern data analysis, research, and AI development. They are increasingly embedded in production systems as well as research pipelines.
Projects such as pandas, SciPy, PyMC, ArviZ, and OpenSearch participated in Session 3. Maintainers expanded security coverage across large and complex codebases, often moving from limited scanning to continuous checks on every commit and release.
Many of these projects also engaged deeply with AI-related security topics, reflecting their growing role in AI workflows.

AssertJ • ArduPilot • AsyncAPI Initiative • Bevy • calibre • DIGIT • fabric.js • ImageMagick • jQuery • jsoup • Mastodon • Mermaid • Mockoon • p5.js • python-benedict • React Starter Kit • Selenium • Sphinx • Spyder • ssh_config • Thunderbird for Android • Two.js • xyflow • Yii framework
These projects shape the day-to-day experience of writing, testing, and maintaining software.
The group includes tools such as Selenium, Sphinx, ImageMagick, calibre, Spyder, and other widely used utilities that appear throughout development and testing environments.
Improving security here reduces the risk that developer tooling becomes an unexpected attack vector, especially in automated or shared environments.

external-secrets • Helmet.js • Keycloak • Keyshade • Oauth2 (Ruby) • varlock • WebAuthn (Go)
These projects form the backbone of authentication, authorization, secrets management, and secure configuration.
Session 3 participants included projects such as Keycloak, external-secrets, oauth2 libraries, WebAuthn tooling, and related security frameworks.
Maintainers in this group often reported shifting from reactive fixes to systematic threat modeling and long-term security planning, improving trust for every system that depends on them.


One of the most durable outcomes of the program was a shift in mindset.
Maintainers moved security from a stretch goal to a core requirement. They shifted from reactive patching to proactive design, and from isolated work to shared practice. Many are now publishing playbooks, sharing incident response exercises, and passing lessons on to their contributor communities.
That is how security scales: one-to-many.
Securing open source is basic maintenance for the internet. By giving 67 heavily used projects real funding, three focused weeks, and direct help, we watched maintainers ship fixes that now protect millions of builds a day. This training, taught by the GitHub Security Lab and top cybersecurity experts, allows us to go beyond one-on-one education and enable one-to-many impact.
For example, many maintainers are working to make their playbooks public. The incident-response plans they rehearsed are forkable. The signed releases they now ship flow downstream to every package manager and CI pipeline that depends on them.
Join us in this mission to secure the software supply chain at scale.
We couldn’t do this without our incredible network of partners. Together, we are helping secure the open source ecosystem for everyone!
Funding Partners: Alfred P. Sloan Foundation, American Express, Chainguard, Datadog, Herodevs, Kraken, Mayfield, Microsoft, Shopify, Stripe, Superbloom, Vercel, Zerodha, 1Password

Ecosystem Partners: Atlantic Council, Ecosyste.ms, CURIOSS, Digital Data Design Institute Lab for Innovation Science, Digital Infrastructure Insights Fund, Microsoft for Startups, Mozilla, OpenForum Europe, Open Source Collective, OpenUK, Open Technology Fund, OpenSSF, Open Source Initiative, OpenJS Foundation, University of California, OWASP, Santa Cruz OSPO, Sovereign Tech Agency, SustainOSS

The post Securing the AI software supply chain: Security results across 67 open source projects appeared first on The GitHub Blog.
Imagine visiting your repository in the morning and feeling calm because you see:
All of it visible, inspectable, and operating within the boundaries you’ve defined.
That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.
At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.
GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.
The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.
This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.
The use of GitHub Agentic Workflows makes entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish with traditional YAML workflows alone:
These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.
GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.
In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.
Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.
Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.
Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.
One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.
Let’s look at an agentic workflow which creates a daily status report for repository maintainers.
In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:
Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md
The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.
This will create two files in .github/workflows:
- daily-repo-status.md (the agentic workflow)
- daily-repo-status.lock.yml (the corresponding agentic workflow lock file, which is executed by GitHub Actions)
The file daily-repo-status.md will look like this:
```markdown
---
on:
  schedule: daily
permissions:
  contents: read
  issues: read
  pull-requests: read
safe-outputs:
  create-issue:
    title-prefix: "[repo status] "
    labels: [report]
tools:
  github:
---

# Daily Repo Status Report

Create a daily status report for maintainers.

Include
- Recent repository activity (issues, PRs, discussions, releases, code changes)
- Progress tracking, goal reminders and highlights
- Project status and recommendations
- Actionable next steps for maintainers

Keep it concise and link to the relevant issues/PRs.
```
This file has two parts:
- YAML frontmatter (between --- markers) for configuration
- A Markdown body describing the task in natural language
The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.
If you prefer, you can add the workflow to your repository manually:
1. Create .github/workflows/daily-repo-status.md with the frontmatter and instructions.
2. Install the gh-aw extension: gh extension install github/gh-aw
3. Compile it into a lock file: gh aw compile
Once you add this workflow to your repository, it will run automatically, or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:

If you’re looking for further inspiration, Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.
A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.
If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.
Uses for agent-assisted repository automation often depend on particular repos and development priorities. Your team’s approach to software development will differ from those of other teams. It pays to be imaginative about how you can use agentic automation to augment your team for your repositories and your goals.
Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.
You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.
GitHub Agentic Workflows use coding agents at runtime, which incur billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details. Here are a few more tips to help teams get value quickly:
Continuous AI works best if you use it in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. This approach extends continuous automation to more subjective, repetitive tasks that traditional CI/CD struggles to express.
GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.
We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!
Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next).
The post Automate repository tasks with GitHub Agentic Workflows appeared first on The GitHub Blog.
Open collaboration runs on trust. For a long time, that trust was protected by a natural, if imperfect filter: friction.
If you were on Usenet in 1993, you’ll remember that every September a flood of new university students would arrive online, unfamiliar with the norms, and the community would patiently onboard them. Then mainstream dial-up ISPs became popular and a continuous influx of new users came online. It became the September that never ended.
Today, open source is experiencing its own Eternal September. This time, it’s not just new users. It’s the sheer volume of contributions.
In the era of mailing lists, contributing to open source required real effort. You had to subscribe, lurk, understand the culture, format a patch correctly, and explain why it mattered. The effort didn’t guarantee quality, but it filtered for engagement. Most contributions came from someone who had genuinely engaged with the project.
It also excluded people. The barrier to entry was high. Many projects worked hard to lower it in order to make open source more welcoming.
A major shift came with the pull request. Hosting projects on GitHub, using pull requests, and labeling “Good First Issues” reduced the friction needed to contribute. Communities grew and contributions became more accessible.
That was a good thing.
But friction is a balancing act. Too much keeps people and their ideas out; too little can strain the trust open source depends on.
Today, a pull request can be generated in seconds. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped, but the cost to review has not.
It’s worth saying: most contributors are acting in good faith. Many want to help projects they care about. Others are motivated by learning, visibility, or the career benefits of contributing to widely used open source. Those incentives aren’t new and they aren’t wrong.
The challenge is what happens when low-quality contributions arrive at scale. When volume accelerates faster than review capacity, even well-intentioned submissions can overwhelm maintainers. And when that happens, trust, the foundation of open collaboration, starts to strain.
It is tempting to frame “low-quality contributions” or “AI slop” contributions as a unique recent phenomenon. It isn’t. Maintainers have always dealt with noisy inbound.
The question from maintainers has often been the same: “Are you really trying to help me, or just help yourself?”
Just because a tool—whether a static analyzer or an LLM—makes it easy to generate a report or a fix, it doesn’t mean that contribution is valuable to the project. The ease of creation often adds a burden to the maintainer because there is an imbalance of benefit. The contributor maybe gets the credit (or the CVE, or the visibility), while the maintainer gets the maintenance burden.
Maintainers are feeling that directly. For example:
These are rational responses to an imbalance.
At GitHub, we aren’t just watching this happen. Maintainer sustainability is foundational to open source, and foundational to us. As the home of open source, we have a responsibility to help you manage what comes through the door.
We are approaching this from multiple angles: shipping immediate relief now, while building toward longer-term, systemic improvements. Some of this is about tooling. Some is about creating clearer signals so maintainers can decide where to spend their limited time.
Plus, coming soon: pull request deletion from the UI. This will let maintainers remove spam or abusive pull requests so repositories stay more manageable.
These improvements focus on reducing review overhead.
We know that walls don’t build communities. As we explore next steps, our focus is on giving maintainers more control while helping protect what makes open source communities work.
Some of the directions we’re exploring in consultation with maintainers include:
- Tools that check incoming contributions against your contribution guidelines (such as CONTRIBUTING.md) and surface which pull requests should get your attention first.
These tools are meant to support decision-making, not replace it. Maintainers should always remain in control.
We are also aware of tradeoffs. Restrictions can disproportionately affect first-time contributors acting in good faith. That’s why these controls are optional and configurable.
One of the things I love most about open source is that when the community hits a wall, people build ladders. We’re seeing a lot of that right now.
Maintainers across the ecosystem are experimenting with different approaches. Some projects have moved to invitation-only workflows. Others are building custom GitHub Actions for contributor triage and reputation scoring.
Mitchell Hashimoto’s Vouch project is an interesting example. It implements an explicit trust management system where contributors must be vouched for by trusted maintainers before they can participate. It’s experimental and some aspects will be debated, but it fits a longer lineage, from Advogato’s trust metric to Drupal’s credit system to the Linux kernel’s Signed-off-by chain.
At the same time, many communities are investing heavily in education and onboarding to widen who can contribute while setting clearer expectations. The Python community, for example, emphasizes contributor guides, mentorship, and clearly labeled entry points. Kubernetes pairs strong governance with extensive documentation and contributor education, helping new contributors understand not just how to contribute, but what a useful contribution looks like.
These approaches aren’t mutually exclusive. Education helps good-faith contributors succeed. Guardrails help maintainers manage scale.
There is no single correct solution. That’s why we are excited to see maintainers building tools that match their project’s specific values. The tools communities build around the platform often become the proving ground for what might eventually become features. So we’re paying close attention.
We also need to talk about incentives. If we only build blocks and bans, we create a fortress, not a bazaar.
Right now, the concept of “contribution” on GitHub still leans heavily toward code authorship. In WordPress, maintainers give manually written “props”: credit awarded not just for code, but for writing, reproduction steps, user testing, and community support. It recognizes the many forms of contribution that move a project forward.
We want to explore how GitHub can better surface and celebrate those contributions. Someone who has consistently triaged issues or merged documentation PRs has proven they understand your project’s voice. These are trust signals we should be surfacing to help you make decisions faster.
We’ve opened a community discussion to gather feedback on the directions we’re exploring: Exploring Solutions to Tackle Low-Quality Contributions on GitHub.
We want to hear from you. Share what is working for your projects, where the gaps are, and what would meaningfully improve your experience maintaining open source.
Open source’s Eternal September is a sign of something worth celebrating: more people want to participate than ever before. The volume of contributions is only going to grow — and that’s a good thing. But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same. Not by raising the drawbridge, but by giving maintainers better signals, better tools, and better ways to channel all that energy into work that moves their projects forward.
Let’s build that together.
The post Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers. appeared first on The GitHub Blog.
January 13 09:38 UTC (lasting 46 minutes)
On January 13, 2026, from 09:25 to 10:11 UTC, GitHub Copilot experienced a service outage with error rates averaging 18% and peaking at 100%. This impacted chat features across Copilot Chat, VS Code, JetBrains IDEs, and other dependent products. The incident was triggered by a configuration error introduced during a model update and was initially mitigated by rolling back the change. A secondary recovery phase extended until 10:46 UTC because the upstream provider OpenAI was experiencing degraded availability for the GPT‑4.1 model.
We have completed a detailed root‑cause review and are implementing stronger monitors, improved test environments, and tighter configuration safeguards to prevent recurrence and accelerate detection and mitigation of future issues.
January 15 16:56 UTC (lasting 1 hour and 40 minutes)
On January 15, 2026, between 16:40 UTC and 18:20 UTC, we observed increased latency and timeouts across issues, pull requests, notifications, actions, repositories, API, account login, and an internal service, Alive, that powers live updates on GitHub. An average of 1.8% of combined web and API requests failed, peaking briefly at 10% early in the incident. The majority of impact was observed for unauthenticated users, but authenticated users were impacted as well.
This was caused by an infrastructure update to some of our data stores. Upgrading this infrastructure to a new major version resulted in unexpected resource contention, leading to distributed impact in the form of slow queries and increased timeouts across services that depend on these datasets. We mitigated this by rolling back to the previous stable version.
We are working to improve our validation process for these types of upgrades to catch issues that only occur under high load before full release, improve detection time, and reduce mitigation times in the future.
Looking ahead
Please note that the incidents that occurred on February 9, 2026, will be included in next month’s February Availability Report. In the meantime, you can refer to the incident report on the GitHub Status site for more details.
Follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the engineering section on the GitHub Blog.
The post GitHub availability report: January 2026 appeared first on The GitHub Blog.
Software engineering has always included work that’s repetitive, necessary, and historically difficult to automate. This isn’t because it lacks value, but because it resists deterministic rules.
Continuous integration (CI) solved part of this by handling tests, builds, formatting, and static analysis—anything that can be described with deterministic rules. CI excels when correctness can be expressed unambiguously: a test passes or fails, a build succeeds or doesn’t, a rule is violated or isn’t.
But CI is intentionally limited to problems that can be reduced to heuristics and rules.
For most teams, the hardest work isn’t writing code. It’s everything that requires judgment around that code: reviewing changes, keeping documentation accurate, managing dependencies, tracking regressions, maintaining tests, monitoring quality, and responding to issues that only surface after code ships.
A lot of engineering effort goes into work that requires interpretation, synthesis, and context rather than deterministic validation. And an increasing share of engineering tasks falls into a category CI was never designed to handle: work that depends on understanding intent.
“Any task that requires judgment goes beyond heuristics,” says Idan Gazit, head of GitHub Next, which works on research and development initiatives.
Any time something can’t be expressed as a rule or a flow chart is a place where AI becomes incredibly helpful.
Idan Gazit, head of GitHub Next
This is why GitHub Next has been exploring a new pattern: Continuous AI, or background agents that operate in your repository the way CI jobs do, but only for tasks that require reasoning instead of rules.
CI isn’t failing. It’s doing exactly what it was designed to do.
CI is designed for binary outcomes. Tests pass or fail. Builds succeed or don’t. Linters flag well-defined violations. That works well for rule-based automation.
But many of the hardest and most time-consuming parts of engineering are judgment-heavy and context-dependent.
Consider scenarios like documentation that drifts out of sync with the code it describes, tests that quietly stop covering what matters, or a dependency that changes behavior without bumping its major version.
These problems are about whether intent still holds.
“The first era of AI for code was about code generation,” Idan explains. “The second era involves cognition and tackling the cognitively heavy chores off of developers.”
This is the gap Continuous AI fills: not more automation, but a different class of automation. CI handles deterministic work. Continuous AI applies where correctness depends on reasoning, interpretation, and intent.
Continuous AI is not a new product or CI replacement. Traditional CI remains essential.
Continuous AI is a pattern:
Continuous AI = natural-language rules + agentic reasoning, executed continuously inside your repository.
In practice, Continuous AI means expressing in plain language what should be true about your code, especially when that expectation cannot be reduced to rules or heuristics. An agent then evaluates the repository and produces artifacts a developer can review: suggested patches, issues, discussions, or insights.
Developers rarely author agentic workflows in a single pass. In practice, they collaborate with an agent to refine intent, add constraints, and define acceptable outputs. The workflow emerges through iteration, not a single sentence.
For example:
These workflows are not defined by brevity. They combine intent, constraints, and permitted outputs to express expectations that would be awkward or impossible to encode as deterministic rules.
“In the future, it’s not about agents running in your repositories,” Idan says. “It’s about being able to presume you can cheaply define agents for anything you want off your plate permanently.”
Think about what your work looks like when you can delegate more of it to AI, and what parts of your work you want to retain: your judgment, your taste.
Idan Gazit, head of GitHub Next
In our work, we define agentic workflows with safety as a first principle. By default, agents operate with read-only access to repositories. They cannot create issues, open pull requests, or modify content unless explicitly permitted.
We call this Safe Outputs, which provides a deterministic contract for what an agent is allowed to do. When defining a workflow, developers specify exactly which artifacts an agent may produce, such as opening a pull request or filing an issue, and under what constraints.
Anything outside those boundaries is forbidden.
This model assumes agents can fail or behave unexpectedly. Outputs are sanitized, permissions are explicit, and all activity is logged and auditable. The blast radius is deterministic.
This isn’t “AI taking over software development.” It’s AI operating within guardrails developers explicitly define.
As we’ve developed this, we’ve heard a common question: why not just extend CI with more rules?
When a problem can be expressed deterministically, extending CI is exactly the right approach. YAML, schemas, and heuristics remain the correct tools for those jobs.
But many expectations cannot be reduced to rules without losing meaning.
Idan puts it simply: “There’s a larger class of chores and tasks we can’t express in heuristics.”
A rule like “whenever documentation and code diverge, identify and fix it” cannot be expressed in a regex or schema. It requires understanding semantics and intent. A natural-language instruction can express that expectation clearly enough for an agent to reason over it.
Natural language doesn’t replace YAML, but instead complements it. CI remains the foundation. Continuous AI expands automation into commands CI was never designed to cover.
Agentic workflows don’t make autonomous commits. Instead, they can create the same kinds of artifacts developers would (pull requests, issues, comments, or discussions) depending on what the workflow is permitted to do.
Pull requests remain the most common outputs because they align with how developers already review and reason about change.
“The PR is the existing noun where developers expect to review work,” Idan says. “It’s the checkpoint everyone rallies around.”
That means:
Developer judgment remains the final authority. Continuous AI helps scale that judgment across a codebase.
The GitHub Next prototype (you can find the repository at github/gh-aw) uses a deliberately simple pattern:
Nothing is hidden; everything is transparent and visible.
“You want an action to look for style violations like misplaced brackets, that’s heuristics,” Idan explains. “But when you want deeper intent checks, you need AI.”
These aren’t theoretical examples. GitHub Next has tested these patterns in real repositories.
This is one of the hardest problems for CI because it requires understanding intent.
An agentic workflow can:
Idan calls this one of the most meaningful categories of work Continuous AI can address: “You don’t want to worry every time you ship code if the documentation is still right. That wasn’t possible to automate before AI.”
Maintainers and managers spend significant time answering the same questions repeatedly: What changed yesterday? Are bugs trending up or down? Which parts of the codebase are most active?
Agentic workflows can generate recurring reports that pull from multiple data sources (issues, pull requests, commits, and CI results), and apply reasoning on top.
For example, an agent can:
The value isn’t the report itself. It’s the synthesis across multiple data sources that would otherwise require manual analysis.
Anyone who has worked with localized applications knows the pattern: Content changes in English, translations fall behind, and teams batch work late in the cycle (often right before a release).
An agent can:
The workflow becomes continuous, not episodic. Machine translations might not be perfect out of the box, but having a draft translation ready for review in a pull request makes it that much easier to engage help from professional translators or community contributors.
Dependencies often change behavior without changing major versions. New flags appear. Defaults shift. Help output evolves.
In one demo, an agent:
This requires semantic interpretation, not just diffs, which is why classical CI cannot handle it.
“This is the first harbinger of the new phase of AI,” Idan says. “We’re moving from generation to reasoning.”
In one experiment:
And because the agent produced small pull requests daily, developers reviewed changes incrementally.
Linters and analyzers don’t always catch performance pitfalls that depend on understanding the code’s intent.
Example: compiling a regex inside a function call so it compiles on every invocation.
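In TypeScript terms, the pitfall and the fix look roughly like this (function names are invented for illustration):

```typescript
// Pitfall: a new RegExp object is constructed on every call,
// which is avoidable work in a frequently invoked code path.
function isSemverSlow(version: string): boolean {
  return new RegExp("^\\d+\\.\\d+\\.\\d+$").test(version);
}

// Fix an agent might propose: compile the pattern once at module load and reuse it.
const SEMVER_PATTERN = /^\d+\.\d+\.\d+$/;
function isSemverFast(version: string): boolean {
  return SEMVER_PATTERN.test(version);
}
```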
An agent can:
Small things add up, especially in frequently called code paths.
This was one of the more creative demos from Universe: using agents to play a simple platformer game thousands of times to detect UX regressions.
Strip away the game, and the pattern is widely useful:
Agents can simulate user behavior at scale and compare variants.
Developers don’t need a new CI system or separate infrastructure to try this. The GitHub Next prototype (gh aw) uses a simple pattern:
1. Write a natural-language rule in a Markdown file
For example:
```markdown
---
on: daily
permissions: read
safe-outputs:
  create-issue:
    title-prefix: "[news] "
---

Analyze the recent activity in the repository and:
- create an upbeat daily status report about the activity
- provide an agentic task description to improve the project based on the activity.

Create an issue with the report.
```
2. Compile it into an action
```
gh aw compile daily-team-status
```
This generates a GitHub Actions workflow.
3. Review the YAML
Nothing is hidden. You can see exactly what the agent will do.
4. Push to your repository
The agentic workflow begins executing in response to repository events or on a schedule you define, just like any other action.
5. Review the issue it creates
While still early, several trends are already emerging in developer workflows:
Pattern 1: Natural-language rules will become a part of automation
Developers will write short English rules that express intent:
Pattern 2: Repositories will begin hosting a fleet of small agents
Not one general agent, but many small ones with each responsible for one chore, one check, or one rule of thumb.
Pattern 3: Tests, docs, localization, and cleanup will shift into “continuous” mode
This mirrors the early CI movement: Not replacing developers, but changing when chores happen from “when someone remembers” to “every day.”
Pattern 4: Debuggability will win over complexity
Developers will adopt agentic patterns that are transparent, auditable, and diff-based—not opaque systems that act without visibility.
“Custom agents for offline tasks, that’s what Continuous AI is,” Idan says. “Anything you couldn’t outsource before, you now can.”
More precisely: many judgment-heavy chores that were previously manual can now be made continuous.
This requires a mental shift, like moving from owning files to streaming music.
“You already had all the music,” Idan says. “But suddenly the player is helping you discover more.”
Continuous AI is not an all-or-nothing paradigm. You don’t need to overhaul your pipeline. Start with something small:
Each of these is something agents can meaningfully assist with today.
Identify the recurring judgment-heavy tasks that quietly drain attention, and make those tasks continuous instead of episodic.
If CI automated rule-based work over the past decade, Continuous AI may do the same for select categories of judgment-based work, when applied deliberately and safely.
The post Continuous AI in practice: What developers can automate today with agentic CI appeared first on The GitHub Blog.
Context switching equals friction in software development. Today, we’re removing some of that friction with the latest updates to Agent HQ, which lets you run coding agents from multiple providers directly inside GitHub and your editor, keeping context, history, and review attached to your work.
Copilot Pro+ and Copilot Enterprise users can now run multiple coding agents directly inside GitHub, GitHub Mobile, and Visual Studio Code (with Copilot CLI support coming soon). That means you can use agents like GitHub Copilot, Claude by Anthropic, and OpenAI Codex (the latter two in public preview) today.
With Codex, Claude, and Copilot in Agent HQ, you can move from idea to implementation using different agents for different steps without switching tools or losing context.
We’re bringing Claude into GitHub to meet developers where they are. With Agent HQ, Claude can commit code and comment on pull requests, enabling teams to iterate and ship faster and with more confidence. Our goal is to give developers the reasoning power they need, right where they need it.
Katelyn Lesse, Head of Platform, Anthropic
Agent HQ also lets you compare how different agents approach the same problem. You can assign multiple agents to a task, and see how Copilot, Claude, and Codex reason about tradeoffs and arrive at different solutions.
In practice, this helps you surface issues earlier by using agents for different kinds of review:
This way of working shifts your reviews, and your thinking, toward strategy rather than syntax.
Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to GitHub and VS Code. Codex helps engineers work faster and with greater confidence—and with this integration, millions more developers can now use it directly in their primary workspace, extending the power of Codex everywhere code gets written.
Alexander Embiricos, OpenAI
GitHub is already where code lives, collaboration happens, and decisions are reviewed, governed, and shipped.
Making coding agents native to that workflow, rather than external tools, makes them even more useful at scale. Instead of copying and pasting context between tools, documents, and threads, all discussion and proposed changes stay attached to the repository itself.
With Copilot, Claude, and Codex working directly in GitHub and VS Code, you can plan, delegate, and review agent work without leaving your repository. There are no new dashboards to learn, and no separate AI workflows to manage. Everything runs inside the environments you already use.
These workflows don’t just benefit individual developers. Agent HQ gives you org-wide visibility and systematic control over how AI interacts with your codebase. This allows teams to adopt agent-based workflows without sacrificing code quality, accountability, or trust.
Access to Claude and Codex will soon expand to more Copilot subscription types. In the meantime, we’re actively working with partners, including Google, Cognition, and xAI, to bring more specialized agents into GitHub, VS Code, and Copilot CLI workflows.
The post Pick your agent: Use Claude and Codex on Agent HQ appeared first on The GitHub Blog.
]]>In 2025, software development crossed a quiet threshold. In our latest Octoverse report, we found that the fastest-growing languages, tools, and open source projects on GitHub are no longer about shipping more code. Instead, they’re about reducing friction in a world where AI is helping developers build more, faster.
By looking at some of the areas of fastest growth over the past year, we can see how developers are adapting: which languages they default to, where AI projects are concentrating, which tools they adopt to cut friction, and where new contributors show up.
Rather than catalog trends, we want to focus on what those signals mean for how software is being built today and what choices you might consider heading into 2026.
In August 2025, TypeScript became the most-used language on GitHub, overtaking Python and JavaScript for the first time. Over the past year, TypeScript added more than one million contributors, which was the largest absolute growth of any language on GitHub.

Python also continued to grow rapidly, adding roughly 850,000 contributors (+48.78% YoY), while JavaScript grew more slowly (+24.79%, ~427,000 contributors). Both TypeScript and Python significantly outpaced JavaScript in total and percentage growth.
This shift signals more than a preference change. Typed languages are increasingly becoming the default for new development, particularly as AI-assisted coding becomes routine. Why is that?
In practice, a significant portion of the failures teams encounter with AI-generated code surface as type mismatches, broken contracts, or incorrect assumptions between components. Stronger type systems act as early guardrails: they can help catch errors sooner, reduce review churn, and make AI-generated changes easier to reason about before code reaches production.
If you’re using AI in your day-to-day development, as more and more developers are, strongly typed languages are your friend.
Here’s what this means in practice: if AI will be writing a meaningful share of your code, a typed language gives the model tighter constraints to work within and gives reviewers earlier, cheaper error signals.
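To make that concrete, here’s a minimal, hypothetical TypeScript sketch; the Note shape and function are illustrative, not taken from any particular codebase.

```typescript
// A small illustration of types as guardrails. The Note shape is explicit,
// so a generated call site that drops a required field fails at compile time
// rather than surfacing later in review or production.

interface Note {
  id: string;
  title: string;
  tags: string[];
}

function renderSummary(note: Note): string {
  return `${note.title} (${note.tags.length} tags)`;
}

// A plausible AI-suggested call that forgets `tags`:
// renderSummary({ id: "1", title: "Untitled" });
// ^ Compile error: Property 'tags' is missing in type '{ id: string; title: string; }'.

// The corrected call type-checks and runs as expected.
console.log(renderSummary({ id: "1", title: "Untitled", tags: [] }));
```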
Contributor counts show who is using a language. Repository data shows what that language is being used to build.
When we look specifically at AI-focused repositories, Python stands apart. As of August 2025, nearly half of all new AI projects on GitHub were built primarily in Python.

This matters because AI projects now account for a disproportionate share of open source momentum. Six of the ten fastest-growing open source projects by contributors in 2025 were directly focused on AI infrastructure or tooling.

Python’s role here isn’t new, but it is evolving. The data suggests a shift from experimentation toward production-ready AI systems, with Python increasingly anchoring packaging, orchestration, and deployment rather than living only in notebooks.
Moreover, Python is likely to keep growing in 2026 as AI continues to attract investment and new projects.
Here’s what this means in practice: if your AI work is headed toward production, expect Python to keep anchoring packaging, orchestration, and deployment, and invest in that tooling accordingly.
Looking across the fastest-growing projects, a clear pattern emerges: developers are optimizing for speed, control, and predictable outcomes.
Many of the fastest-growing tools emphasize performance and minimalism. Projects like astral-sh/uv, a package and project manager, focus on dramatically faster Python package management. This reflects a growing intolerance for slow feedback loops and non-deterministic environments.
One project like this could be an anomaly; several of them indicate a clear trend. That trend aligns closely with AI-assisted workflows, where iteration speed and reproducibility directly impact developer productivity.
Here’s what this means in practice: favor tools that give you fast, deterministic feedback loops, because that speed compounds as AI generates more of your code.
As the developer population grows, understanding where first-time contributors show up (and why) becomes increasingly important.

Projects like VS Code and First Contributions continued to top the list over the last year, reflecting both the scale of widely used tools and the persistent need for low-friction entry points into open source (notably, we define contributions as any content-generating activity on GitHub).
Despite this growth, basic project governance remains uneven across the ecosystem. README files are common, but contributor guides and codes of conduct are still relatively rare even as first-time contributions increase.
This gap represents one of the highest-leverage improvements maintainers and open source communities can make. Most of the projects on this list clearly document what the project is and how to contribute, which underscores how much that guidance matters.
Here’s what this means in practice: if you maintain a project, adding a contributor guide and a code of conduct is one of the cheapest ways to turn first-time visitors into repeat contributors.
Taken together, these trends point to a shift in what developers value and how they choose tools.
AI is no longer a separate category of development. It’s shaping the languages teams use, which tools gain traction, and which projects attract contributors.
Typed languages like TypeScript are becoming the default for reliability at scale, while Python remains central to AI-driven systems as they move from prototypes into production.
Across the ecosystem, developers are rewarding tools that minimize friction with faster feedback loops, reproducible environments, and clearer contribution paths.
Developers and teams that optimize for speed, clarity, and reliability are shaping how software is being built.
As a reminder, you can check out the full 2025 Octoverse report for more information and make your own conclusions. There’s a lot of good data in there, and we’re just scratching the surface of what you can learn from it.
The post What the fastest-growing tools reveal about how software is being built appeared first on The GitHub Blog.
]]>Modern engineering work rarely lives in a single file. Real systems evolve across years of incrementally layered decisions—some good, some accidental. A single feature request (“Add tagging to notes,” “Refactor the validation layer,” “Support a new consumer on our API”) often touches controllers, domain models, repositories, migrations, tests, documentation, and deployment strategy.
Copilot’s agentic capabilities don’t replace your judgment in these situations—they amplify it. When used well, Copilot becomes a partner in system design, refactoring, modernization, and multi-file coordination.
This guide focuses on architecture-aware, multi-step workflows used every day by staff engineers, but written to be accessible for earlier-career engineers who want to understand how senior engineers think—and how Copilot can accelerate their own growth.
It draws on four GitHub Skills exercises (linked below), and builds toward a complete, real-world scenario: extending a small modular Notes Service with a tagging subsystem, refactoring a validation layer, designing a safe migration, and modernizing tests.
You’ll get the most out of this guide if you have some hands-on experience with Copilot and with codebases that span multiple modules.
If you’re earlier in your career, don’t worry. Each section explains why these patterns matter and how to practice them safely.
Senior engineers rarely begin by writing code. They begin by identifying boundaries: domain logic, data access, interfaces, and how modules should interact.
Copilot agent mode can help by revealing structural issues and proposing architectures.
Prompt:
Analyze this service and propose a modular decomposition with domain, infrastructure, and interface layers.
Identify anti-patterns, coupling issues, and potential failure points.
You’ll typically get back a proposed layer breakdown, a list of coupling issues and anti-patterns, and notes on potential failure points.
This transforms Copilot from an autocomplete tool into a design reviewer.
You can push further by asking it to compare architectures:
Compare hexagonal architecture vs. layered architecture for this codebase.
Recommend one based on the constraints here. Include tradeoffs.
Want to try it yourself? Use these proposals as starting points.
Once boundaries are defined, Copilot can coordinate changes across modules.
Prompt:
Implement the domain, controller, and repository layers as distinct modules.
Use dependency inversion to reduce coupling.
Document assumptions and contracts for each module.
Copilot will typically generate separate domain, controller, and repository modules, interfaces that invert the dependencies between them, and documentation of each module’s assumptions and contracts.
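For a sense of what that looks like, here’s a hedged TypeScript sketch of the layering with dependency inversion; the names are illustrative and simplified relative to what Copilot would produce for a real service.

```typescript
// Dependency inversion across layers: the domain owns the contract, the
// infrastructure layer implements it, and the controller depends only on
// the abstraction. Names here are illustrative.

// Domain layer: the contract other layers depend on.
export interface Note {
  id: string;
  title: string;
}

export interface NotesRepository {
  create(input: { title: string }): Promise<Note>;
  get(id: string): Promise<Note | null>;
}

// Infrastructure layer: one concrete implementation (in-memory for the sketch).
export class InMemoryNotesRepository implements NotesRepository {
  private notes = new Map<string, Note>();

  async create(input: { title: string }): Promise<Note> {
    const note = { id: String(this.notes.size + 1), title: input.title };
    this.notes.set(note.id, note);
    return note;
  }

  async get(id: string): Promise<Note | null> {
    return this.notes.get(id) ?? null;
  }
}

// Interface layer: the controller is handed the abstraction, not the class.
export class NotesController {
  constructor(private readonly repo: NotesRepository) {}

  async createNote(title: string): Promise<Note> {
    return this.repo.create({ title });
  }
}
```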
For earlier-career engineers, this provides exposure to real engineering patterns. For senior engineers, it provides leverage and reduces boilerplate overhead.
Adding a tagging subsystem is a deceptively simple request with meaningful architectural implications.
Even this single feature forces decisions across the system: how tags are modeled, how the schema evolves, how the API surface changes, and how caching and queries behave.
Before touching code, ask Copilot to map the impact.
Prompt:
Propose the architectural changes required to add a tagging subsystem.
Identify migration needs, cross-cutting concerns, caching or indexing implications, and potential regressions.
Copilot may identify required schema migrations, cross-cutting concerns such as caching and indexing, and places where existing behavior could regress.
This is the staff-level lens that Copilot can help junior developers adopt.
Then implement it:
Implement the tagging domain model, schema changes, repository updates, and controller logic.
Update tests and documentation. Show each change as a diff.
Example output (simplified)
Migration example:
```sql
ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]';
```
Domain model example:
```typescript
export interface Tag {
  id: string;
  label: string;
}

export interface Note {
  id: string;
  title: string;
  body: string;
  tags: Tag[];
}
```
Controller update (partial):
```typescript
await noteService.addTag(noteId, { label: req.body.label });
```
This is where agent mode shines: coordinating multiple files with consistent intent.
At senior levels, the hardest part isn’t writing SQL. It’s designing a change that is additive, backward compatible, and safe to roll back.
Ask Copilot to reason about this:
Prompt:
Generate an additive, backward-compatible schema migration to support the tagging subsystem.
Describe the rollback plan, compatibility window, and expected impact to existing clients.
This forces Copilot to consider the rollback plan, the compatibility window, and the impact on existing clients, not just the DDL.
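As one hedged illustration, here’s a small TypeScript sketch of what the compatibility window can look like on the read path; the row shape and helper are hypothetical, not part of the exercise.

```typescript
// Handling the compatibility window in application code: during rollout, some
// rows (or older writers) predate the additive migration, so the read path
// treats a missing tags value as an empty list. Row shape is hypothetical.

interface NoteRow {
  id: string;
  title: string;
  body: string;
  tags?: string | null; // JSON-encoded TEXT column added by the migration
}

interface Tag {
  id: string;
  label: string;
}

export function rowToNote(row: NoteRow) {
  return {
    id: row.id,
    title: row.title,
    body: row.body,
    // Tolerate rows written before the migration landed.
    tags: row.tags ? (JSON.parse(row.tags) as Tag[]) : [],
  };
}
```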
If you’re earlier in your career, this offers lessons on how safe migrations are designed. And if you’re more experienced, this gives you a repeatable workflow for multi-step schema evolution.
Let’s perform a real cross-module refactor: extracting validation out of controllers into a domain service.
Prompt:
Create a step-by-step refactor plan to extract validation logic into a domain service.
Identify affected modules and required test updates.
Copilot may output a numbered plan along these lines: (1) create a validationService in the domain layer, (2) move the validation rules out of the controllers and into that service, (3) update the affected unit tests, and then (4) rewrite the controller call sites to delegate to it.
Prompt:
Execute steps 1–3 only. Stop before controller rewrites.
Provide detailed diffs and call out risky areas.
This is a low-blast-radius refactor, modeled directly in the IDE.
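To picture the end state, here’s a hedged TypeScript sketch of the extracted service and a controller that delegates to it; the field names and rules are illustrative.

```typescript
// Validation moved out of the controller into a small domain service, so the
// rules live in one place and controllers only delegate. Rules are examples.

export interface NoteInput {
  title: string;
  body: string;
}

export class ValidationError extends Error {}

// Domain service: the single home for validation rules after the refactor.
export class NoteValidationService {
  validate(input: NoteInput): void {
    if (!input.title.trim()) {
      throw new ValidationError("title is required");
    }
    if (input.body.length > 10_000) {
      throw new ValidationError("body exceeds 10,000 characters");
    }
  }
}

// Controller after the refactor: no inline rules, just delegation.
export class NotesController {
  constructor(private readonly validator: NoteValidationService) {}

  create(input: NoteInput): void {
    this.validator.validate(input); // throws ValidationError on bad input
    // ...persist the note via the repository layer
  }
}
```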
Instead of asking Copilot “write tests,” ask it to assess the entire suite.
Prompt:
Analyze the current test suite and identify systemic gaps.
Recommend a modernization plan including contract, integration, and domain-layer tests.
Then implement contract tests:
```typescript
describe("NotesRepository contract", () => {
  test("create + fetch returns a fully hydrated note object", async () => {
    const note = await notesRepo.create({ title: "Test", body: "…" });
    const fetched = await notesRepo.get(note.id);
    expect(fetched).toMatchObject({ title: "Test" });
    expect(fetched.id).toBeDefined();
  });
});
```
This elevates testing into an architectural concern.
Bringing it all together, here’s a real sequence you might run with Copilot: propose the modular decomposition, map the impact of the tagging feature, implement it across modules, design the additive migration, extract the validation service, and modernize the tests.
This workflow is architecturally realistic—and a model for how Copilot becomes a system-level collaborator.
It’s important to clarify that agent mode isn’t the right tool for every task. Copilot should support your decision-making, not replace it.
Here’s where GitHub Skills comes in—not as “beginner content,” but as a set of guided, self-contained labs that reinforce the patterns above.
Even senior engineers will benefit: these exercises are structured so you can reliably recreate complex workflows and test Copilot’s behavior in controlled environments.
The post How to maximize GitHub Copilot’s agentic capabilities appeared first on The GitHub Blog.
]]>