tag:us.githubstatus.com,2005:/historyGitHub Enterprise Cloud US Status - Incident History2026-02-08T12:46:29ZGitHub Enterprise Cloud UStag:us.githubstatus.com,2005:Incident/283915662026-02-06T18:36:52Z2026-02-06T18:36:52ZUS - Incident with Pull Requests<p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Update</strong> - Some GitHub Mobile app users may be unable to add review comments on deleted lines in pull requests. We're working on a fix and expect to release it early next week.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:04</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:01</var> UTC</small><br><strong>Update</strong> - We're currently investigating an issue affecting the Mobile app that can prevent review comments from being posted on certain pull requests when commenting on deleted lines.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p>tag:us.githubstatus.com,2005:Incident/283394062026-02-03T10:56:28Z2026-02-04T16:41:52ZUS - Incident with Copilot<p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:56</var> UTC</small><br><strong>Resolved</strong> - On February 3, 2026, between 09:35 UTC and 10:15 UTC, GitHub Copilot experienced elevated error rates, with an average of 4% of requests failing.<br /><br />This was caused by a capacity imbalance that led to resource exhaustion on backend services. The incident was resolved by infrastructure rebalancing, and we subsequently deployed additional capacity.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:55</var> UTC</small><br><strong>Update</strong> - We are now seeing recovery.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:21</var> UTC</small><br><strong>Update</strong> - We are investigating elevated 500s across Copilot services.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:16</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/283308552026-02-03T00:56:06Z2026-02-03T20:36:44ZUS - Incident with Actions<p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:56</var> UTC</small><br><strong>Resolved</strong> - On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. 
All regions and runner types were impacted. Self-hosted runners on other providers were not impacted. <br /><br />This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change; the rollback started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out. <br /><br />We are working with our compute provider to improve our incident response and engagement time, improve early detection of such issues before they impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for our users who rely on GitHub’s workloads, and we apologize for the impact this had.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:54</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:50</var> UTC</small><br><strong>Update</strong> - Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.<br />We are monitoring closely to confirm complete recovery.<br />Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:43</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:41</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:31</var> UTC</small><br><strong>Update</strong> - Pages is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Update</strong> - Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.<br />Telemetry shows improvement, and we are monitoring closely for full recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:10</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:30</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance.
We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:13</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We have identified the root cause and are working with our upstream provider to mitigate.<br />This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>20:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Actions and Pages</p>tag:us.githubstatus.com,2005:Incident/281272592026-01-21T12:38:58Z2026-01-26T13:03:50ZUS - Copilot Chat - Grok Code Fast 1 Outage<p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:38</var> UTC</small><br><strong>Resolved</strong> - On Jan 21st, 2026, between 11:15 UTC and 13:00 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, more than 90% of the requests to this model failed due to an issue with an upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:09</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>11:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/280192552026-01-14T10:52:11Z2026-01-15T22:03:55ZUS - Copilot's GPT-5.1 model has degraded performance<p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:52</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:32</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model. We are also seeing an increase in failures for Copilot Code Reviews.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:53</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model with our model provider. Use of other models is not impacted.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:26</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance when using the GPT-5.1 model.
We are investigating the issue.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:24</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/279868792026-01-12T10:17:25Z2026-01-22T22:49:34ZUS - Disruption with some GitHub services<p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:17</var> UTC</small><br><strong>Resolved</strong> - From January 9 13:11 UTC to January 12 10:17 UTC, new Linux Custom Images generated for Larger Hosted Runners were broken and not able to run jobs. Customers who did not generate new Custom Images during this period were not impacted. This issue was caused by a change to improve the reliability of the image creation process. Due to a bug, the change triggered an unrelated protection mechanism that determines whether setup has already been attempted on the VM, causing the VM to be marked unhealthy. Only Linux images that were generated while the change was enabled were impacted. The issue was mitigated by rolling back the change.<br /><br />We are improving our testing around Custom Image generation as part of our GA readiness process for the public preview feature. This includes expanding our canary suite to detect this and similar interactions as part of a controlled rollout in staging prior to any customer impact.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:09</var> UTC</small><br><strong>Update</strong> - Actions jobs that use custom Linux images are failing to start. We've identified the underlying issue and are working on mitigation.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:05</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>Jan <var data-var='date'>12</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:us.githubstatus.com,2005:Incident/279516102026-01-10T02:33:18Z2026-01-12T20:26:26ZUS - Disruption with some GitHub services<p><small>Jan <var data-var='date'>10</var>, <var data-var='time'>02:33</var> UTC</small><br><strong>Resolved</strong> - From January 5, 2026, 00:00 UTC to January 10, 2026, 02:30 UTC, customers using the AI Controls public preview feature experienced delays in viewing Copilot agent session data. Newly created sessions took progressively longer to appear, initially hours and eventually more than 24 hours. Since the page displays only the most recent 24 hours of activity, once processing delays exceeded this threshold, no recent data was visible. Session data remained available in audit logs throughout the incident.<br /><br />Inefficient database queries in the data processing pipeline caused significant processing latency, creating a multi-day backlog. As the backlog grew, the delay between when sessions occurred and when they appeared on the page increased, eventually exceeding the 24-hour display window.<br /><br />The issue was resolved on January 10, 2026, 02:30 UTC, after query optimizations and a database index were deployed. We are implementing enhanced monitoring and automated testing to detect inefficient queries before deployment to prevent recurrence.</p><p><small>Jan <var data-var='date'>10</var>, <var data-var='time'>02:33</var> UTC</small><br><strong>Update</strong> - Our queue has cleared.
The last 24 hours of agent session history should now be visible on the AI Controls UI. No data was lost due to this incident.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>23:56</var> UTC</small><br><strong>Update</strong> - We estimate the backlogged queue will take 3 hours to process. We will post another update once it is completed, or if anything changes with the recovery process.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>23:44</var> UTC</small><br><strong>Update</strong> - We have deployed an additional fix and are beginning to see recovery of the queue that was preventing AI Sessions from showing in the AI Controls UI. We are working on an estimate for when the queue will be fully processed, and will post another update once we have that information.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>22:41</var> UTC</small><br><strong>Update</strong> - We are seeing delays processing the AI Session event queue, which is causing sessions to not be displayed on the AI Controls UI. We have deployed a fix to improve the queue processing and are monitoring for effectiveness. We continue to investigate other mitigation paths.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>21:36</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>21:08</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>20:08</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>19:35</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>19:02</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Update</strong> - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>18:08</var> UTC</small><br><strong>Update</strong> - Agent Session activity is still observable in audit logs, and this only impacts the AI Controls UI.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:57</var> UTC</small><br><strong>Update</strong> - We are investigating missing Agent Session data on the AI Settings page of the Agent Control Plane.</p><p><small>Jan <var data-var='date'> 9</var>, <var data-var='time'>17:54</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:us.githubstatus.com,2005:Incident/279203852026-01-07T21:07:09Z2026-01-13T18:22:49ZUS - Some models missing in Copilot<p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>21:07</var> UTC</small><br><strong>Resolved</strong> - On January 7th, 2026, between 17:16 and 19:33 UTC, Copilot Pro and Copilot Business users were unable to use certain
premium models, including Claude Opus 4.5 and GPT-5.2. This was due to a misconfiguration of Copilot models that inadvertently marked these premium models as inaccessible for users with Copilot Pro and Copilot Business licenses.<br /><br />We mitigated the incident by reverting the erroneous config change. We are improving our testing processes to reduce the risk of similar incidents in the future, and refining our model availability alerting to improve detection time.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:43</var> UTC</small><br><strong>Update</strong> - We have implemented a mitigation and confirmed that Copilot Pro and Business accounts now have access to the previously missing models. We will continue monitoring to ensure complete resolution.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Update</strong> - We continue to investigate. We'll post another update by 19:50 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:10</var> UTC</small><br><strong>Update</strong> - Correction - Copilot Pro and Business users are impacted. Copilot Pro+ and Enterprise users are not impacted.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>19:06</var> UTC</small><br><strong>Update</strong> - We continue to investigate this problem and have confirmed only Copilot Business users are impacted. We'll post another update by 19:30 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:44</var> UTC</small><br><strong>Update</strong> - We are currently investigating reports of some Copilot Pro premium models, including Opus and GPT 5.2, being unavailable in Copilot products. We'll post another update by 19:08 UTC.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:33</var> UTC</small><br><strong>Update</strong> - We have received reports that some expected models are missing from VSCode and other products using Copilot. We are investigating the cause of this to restore access.</p><p><small>Jan <var data-var='date'> 7</var>, <var data-var='time'>18:32</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/278974852026-01-06T10:08:06Z2026-01-09T10:31:13ZUS - Incident with Copilot<p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:08</var> UTC</small><br><strong>Resolved</strong> - On January 6th, 2026, between approximately 8:41 and 10:07 UTC, the Copilot service experienced a degradation of the GPT-5.1-Codex-Max model due to an issue with our upstream provider. During this time, up to 14.17% of requests to GPT-5.1-Codex-Max failed. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>10:07</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and GPT-5.1-Codex-Max is once again available.<br />We will continue monitoring to ensure stability.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>09:03</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the GPT-5.1-Codex-Max model in Copilot Chat, VS Code and other Copilot products.
This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'> 6</var>, <var data-var='time'>08:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/275994442025-12-15T15:45:53Z2025-12-19T14:26:11ZUS - Incident with Copilot Grok Code Fast 1<p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:45</var> UTC</small><br><strong>Resolved</strong> - On Dec 15th, 2025, between 14:00 UTC and 15:45 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, 4% of the requests to this model failed due to an issue with our upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1. Users can expect some requests to intermittently fail until all issues are resolved.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:13</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Dec <var data-var='date'>15</var>, <var data-var='time'>14:12</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/275642522025-12-12T20:55:16Z2025-12-17T23:46:40ZUS - Incident with Git Operations<p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:55</var> UTC</small><br><strong>Resolved</strong> - On December 10, 2025, between 18:10 UTC and 20:10 UTC, Git Operations for GitHub Data Residency environments experienced periods of failed or delayed git requests to repository, raw, and archive data. On average, the error rate was 4% and peaked at 23% of total requests. This was due to an infrastructure configuration change. <br /><br />We mitigated the incident by updating our configuration and adding additional capacity to serve the traffic spikes.<br /><br />We are working to improve our change management in order to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:55</var> UTC</small><br><strong>Update</strong> - The GHEC-DR Sweden region has also seen full recovery. At this time all services are expected to be operating normally.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:43</var> UTC</small><br><strong>Update</strong> - We have applied the mitigation to all GHEC-DR environments, and are seeing recovery for all regions except Sweden.
We're investigating the remaining impact for this region.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:40</var> UTC</small><br><strong>Update</strong> - Git Operations and Pull Requests are operating normally.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:28</var> UTC</small><br><strong>Update</strong> - We have identified the issue and are working to mitigate it.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:24</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Update</strong> - We are currently investigating elevated error rates with Git operations in GHEC-DR environments.</p><p><small>Dec <var data-var='date'>12</var>, <var data-var='time'>20:17</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations</p>tag:us.githubstatus.com,2005:Incident/275071332025-12-08T21:06:12Z2025-12-12T19:56:44ZUS - Potential disruption with our Agent Control Plane UI Settings<p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>21:06</var> UTC</small><br><strong>Resolved</strong> - Between approximately 02:24 UTC on November 26th, 2025, and 20:26 UTC on December 8th, 2025, enterprise administrators experienced a disruption when viewing agent session activities on the Enterprise AI Controls page. During this period, users were unable to list agent session activity in the AI Controls view. This did not impact viewing agent session activity in audit logs, navigating directly to individual agent session logs, or otherwise managing AI Agents.<br /><br />The issue was caused by a misconfiguration in a change deployed on November 25th that unintentionally prevented data from being published to an internal Kafka topic responsible for feeding the AI Controls page with agent session activity information.<br /><br />The problem was identified and mitigated on December 8th by correcting the configuration issue.
GitHub is improving monitoring for data pipeline dependencies and enhancing pre-deployment validation to catch configuration issues before they reach production.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>19:52</var> UTC</small><br><strong>Update</strong> - We are investigating missing Agent Session data on the AI Settings page of the Agent Control Plane.</p><p><small>Dec <var data-var='date'> 8</var>, <var data-var='time'>19:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:us.githubstatus.com,2005:Incident/272026532025-11-17T19:08:43Z2025-11-20T18:54:38ZUS - Disruption with some GitHub services<p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>19:08</var> UTC</small><br><strong>Resolved</strong> - From Nov 17, 2025 00:00 UTC to Nov 17, 2025 15:00 UTC, Dependabot was hitting a rate limit in GitHub Container Registry (GHCR) and was unable to complete about 57% of jobs.<br /><br />To mitigate the issue, we lowered the rate at which Dependabot started jobs and increased the GHCR rate limit.<br /><br />We’re adding new monitors and alerts and looking into more ways to decrease load on GHCR to help prevent this in the future.</p><p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>18:54</var> UTC</small><br><strong>Update</strong> - We continue to see recovery, and Dependabot jobs are currently processing as expected.</p><p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>18:18</var> UTC</small><br><strong>Update</strong> - We are applying a configuration change and will monitor for recovery.</p><p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>17:50</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate Dependabot failures and are preparing a configuration change to mitigate them.</p><p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>17:15</var> UTC</small><br><strong>Update</strong> - We are investigating Dependabot job failures affecting approximately 50% of version updates and 25% of security updates.</p><p><small>Nov <var data-var='date'>17</var>, <var data-var='time'>16:52</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/271359932025-11-12T23:04:25Z2025-11-19T01:52:28ZUS - Disruption with some GitHub services<p><small>Nov <var data-var='date'>12</var>, <var data-var='time'>23:04</var> UTC</small><br><strong>Resolved</strong> - On November 12, 2025, between 22:10 UTC and 23:04 UTC, Codespaces used internally at GitHub were impacted. There was no impact to external customers. The scope of impact was not clear in the initial steps of incident response, so it was considered public until confirmed otherwise. One improvement from this will be clearer distinction between internal and public impact for similar failures, to better inform our status decisions going forward.</p><p><small>Nov <var data-var='date'>12</var>, <var data-var='time'>22:51</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate connectivity issues with Codespaces.</p><p><small>Nov <var data-var='date'>12</var>, <var data-var='time'>22:26</var> UTC</small><br><strong>Update</strong> - We are investigating reports of Codespaces no longer appearing in the UI or API.
Users may experience connectivity issues with the impacted Codespaces.</p><p><small>Nov <var data-var='date'>12</var>, <var data-var='time'>22:26</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/269293712025-10-29T23:15:10Z2025-10-31T19:55:56ZUS - Experiencing connection issues across Actions, Codespaces, and possibly other services<p><small>Oct <var data-var='date'>29</var>, <var data-var='time'>23:15</var> UTC</small><br><strong>Resolved</strong> - On October 29th, 2025, between 14:07 UTC and 23:15 UTC, multiple GitHub services were degraded due to a broad outage in one of our service providers:<br /><br />- Users of Codespaces experienced failures connecting to new and existing Codespaces through VSCode Desktop or Web. On average, the Codespace connection error rate was 90% and peaked at 100% across all regions throughout the incident period.<br />- GitHub Actions larger hosted runners experienced degraded performance, with 0.5% of overall workflow runs and 9.8% of larger hosted runner jobs failing or not starting within 5 minutes. These recovered by 20:40 UTC.<br />- The GitHub Enterprise Importer service was degraded, with some users experiencing migration failures during git push operations and most users experiencing delayed migration processing.<br />- Initiation of new trials for GitHub Enterprise Cloud with Data Residency was also delayed during this time.<br />- Copilot Metrics API requests could not access the downloadable link during this time. Approximately 100 requests during the incident would have failed the download. Recovery began around 20:25 UTC.<br /><br />We were able to apply a number of mitigations to reduce impact over the course of the incident, but we did not achieve 100% recovery until our service provider’s incident was resolved.<br /><br />We are working to reduce critical path dependencies on the service provider and gracefully degrade experiences where possible so that we are more resilient to future dependency outages.</p><p><small>Oct <var data-var='date'>29</var>, <var data-var='time'>16:17</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>tag:us.githubstatus.com,2005:Incident/268720852025-10-24T14:20:55Z2025-10-24T14:20:55Zus.githubstatus.com was unavailable UTC 2025 Oct 24 02:55 to 03:13<p><small>Oct <var data-var='date'>24</var>, <var data-var='time'>14:20</var> UTC</small><br><strong>Resolved</strong> - From 02:55 to 03:15 UTC on Oct 24, us.githubstatus.com was unreachable due to a service interruption with our status page provider. <br />During this time, GitHub systems were not experiencing any outages or disruptions.<br />We are working with our vendor to understand how to improve the availability of us.githubstatus.com.</p>tag:us.githubstatus.com,2005:Incident/268481162025-10-22T17:35:48Z2025-10-24T18:10:32ZUS - Disruption with GHEC With Data Residency Signup<p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>17:35</var> UTC</small><br><strong>Resolved</strong> - From 00:00 UTC on October 20, 2025 to 17:11 UTC on October 22, 2025, a subset of customers who initiated a trial of GHEC with Data Residency experienced a delay in the provisioning of their GHEC-DR instance. This was caused by timeouts from an inefficient request between internal GitHub services used to create and host SSL certificates for GHEC-DR customer domains.
This request has been replaced with a more performant one, and all 77 instances that were impacted have been provisioned successfully. Additionally, we are enhancing the monitoring of GHEC-DR instance provisioning for faster detection.</p><p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>17:35</var> UTC</small><br><strong>Update</strong> - We have now recovered and will resolve this incident.</p><p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>16:45</var> UTC</small><br><strong>Update</strong> - We have applied a mitigation and are monitoring for recovery.</p><p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>14:33</var> UTC</small><br><strong>Update</strong> - We have identified the problem and are working on a mitigation. We will provide further updates as we have them.</p><p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>14:03</var> UTC</small><br><strong>Update</strong> - We are currently experiencing disruptions with the GHEC with Data Residency signup process. Some requests to provision new GHEC with Data Residency enterprises will not complete at this time. We are investigating and will provide further updates as we have more information.</p><p><small>Oct <var data-var='date'>22</var>, <var data-var='time'>13:56</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/267884102025-10-17T14:12:45Z2025-10-20T12:56:41ZUS - Disruption with push notifications<p><small>Oct <var data-var='date'>17</var>, <var data-var='time'>14:12</var> UTC</small><br><strong>Resolved</strong> - On October 17th, 2025, between 12:51 UTC and 14:01 UTC, mobile push notifications failed to be delivered for a total duration of 70 minutes. This affected github.com and GitHub Enterprise Cloud in all regions. The disruption was related to an erroneous configuration change to cloud resources used for mobile push notification delivery.<br /><br />We are reviewing our procedures and management of these cloud resources to prevent such an incident in the future.</p><p><small>Oct <var data-var='date'>17</var>, <var data-var='time'>14:01</var> UTC</small><br><strong>Update</strong> - We're investigating an issue with mobile push notifications. All notification types are affected, but notifications remain accessible in the app's inbox. For 2FA authentication, please open the GitHub mobile app directly to complete login.</p><p><small>Oct <var data-var='date'>17</var>, <var data-var='time'>13:12</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/266333722025-10-03T03:47:29Z2025-10-10T15:45:09ZUS - Incident with Copilot<p><small>Oct <var data-var='date'> 3</var>, <var data-var='time'>03:47</var> UTC</small><br><strong>Resolved</strong> - On October 3rd, between approximately 10:00 PM and 11:30 PM Eastern, the Copilot service experienced degradation due to an issue with our upstream provider. Users encountered elevated error rates when using the following Claude models: Claude Sonnet 3.7, Claude Opus 4, Claude Opus 4.1, Claude Sonnet 4, and Claude Sonnet 4.5. No other models were impacted.<br /><br />The issue was mitigated by temporarily disabling affected endpoints while our provider resolved the upstream issue.
GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Oct <var data-var='date'> 3</var>, <var data-var='time'>03:47</var> UTC</small><br><strong>Update</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Oct <var data-var='date'> 3</var>, <var data-var='time'>03:04</var> UTC</small><br><strong>Update</strong> - The upstream provider is implementing a fix. Services are recovering. We are monitoring the situation.</p><p><small>Oct <var data-var='date'> 3</var>, <var data-var='time'>02:42</var> UTC</small><br><strong>Update</strong> - We’re seeing degraded experience across Anthropic models. We’re working with our partners to restore service.</p><p><small>Oct <var data-var='date'> 3</var>, <var data-var='time'>02:41</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/266153192025-10-02T22:33:22Z2025-10-06T22:39:22ZUS - Degraded Gemini 2.5 Pro experience in Copilot<p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>22:33</var> UTC</small><br><strong>Resolved</strong> - Between October 1st, 2025 at 01:00 UTC and October 2nd, 2025 at 22:33 UTC, the Copilot service experienced a degradation of the Gemini 2.5 Pro model due to an issue with our upstream provider. Before 15:53 UTC on October 1st, users experienced higher error rates with large context requests while using Gemini 2.5 Pro. After 15:53 UTC and until 22:33 UTC on October 2nd, requests were restricted to smaller context windows when using Gemini 2.5 Pro. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider. GitHub is collaborating with our provider to enhance communication and improve the ability to reproduce issues with the aim of reducing resolution time.</p><p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>22:26</var> UTC</small><br><strong>Update</strong> - We have confirmed that the fix for the lower token input limit for Gemini 2.5 Pro is in place and are currently testing our previous higher limit to verify that customers will experience no further impact.</p><p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>17:13</var> UTC</small><br><strong>Update</strong> - The underlying issue for the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.</p><p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>02:52</var> UTC</small><br><strong>Update</strong> - We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>18:16</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause.
We will update again when we have identified a mitigation.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>17:37</var> UTC</small><br><strong>Update</strong> - We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors for this sudden decrease in input limits.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:49</var> UTC</small><br><strong>Update</strong> - We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. The maximum input length of Gemini 2.5 prompts has been decreased. Long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:43</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/265923422025-09-29T19:12:42Z2025-10-06T23:22:00ZUS - Disruption with Gemini 2.5 Pro and Gemini 2.0 Flash in Copilot<p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>19:12</var> UTC</small><br><strong>Resolved</strong> - On September 29, 2025, between 17:53 and 18:42 UTC, the Copilot service experienced a degradation of the Gemini 2.5 model due to an issue with our upstream provider. Approximately 24% of requests failed, affecting 56% of users during this period. No other models were impacted.<br /><br />GitHub notified the upstream provider of the problem as soon as it was detected. The issue was resolved after the upstream provider rolled back a recent change that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>19:12</var> UTC</small><br><strong>Update</strong> - The upstream model provider has resolved the issue, and we are seeing full availability for Gemini 2.5 Pro and Gemini 2.0 Flash.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>18:40</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Gemini 2.5 Pro & Gemini 2.0 Flash models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/265912822025-09-29T17:33:52Z2025-10-06T19:58:46ZUS - Disruption with some GitHub services<p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>17:33</var> UTC</small><br><strong>Resolved</strong> - On September 29, 2025, between 16:26 UTC and 17:33 UTC, the Copilot API experienced a partial degradation causing intermittent erroneous 404 responses for an average of 0.2% of GitHub MCP server requests, peaking at times around 2% of requests.
The issue stemmed from an upgrade of an internal dependency that exposed a misconfiguration in the service.<br /><br />We resolved the incident by rolling back the upgrade to address the misconfiguration. We fixed the configuration issue and will improve our documentation and rollout process to prevent similar issues.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>17:28</var> UTC</small><br><strong>Update</strong> - Customers are getting 404 responses when connecting to the GitHub MCP server. We have reverted a change we believe is contributing to the impact, and are seeing resolution in deployed environments.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>16:45</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/265389932025-09-24T09:18:32Z2025-09-29T15:40:36ZUS - Claude Opus 4 is experiencing degraded performance<p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:18</var> UTC</small><br><strong>Resolved</strong> - Anthropic reported degraded performance for Claude Opus 4 and 4.1 through their status page. We saw this affecting our models' performance, so we activated the relevant warning messages directing users to try other models. This was resolved on Anthropic's end after about an hour, and shortly afterward we saw recovery, returning to our baseline success rate.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:16</var> UTC</small><br><strong>Update</strong> - Between around 8:16 UTC and 8:51 UTC, we saw elevated errors on Claude Opus 4 and Opus 4.1, with up to 49% of requests failing. This has recovered to around 4% of requests failing, and we are monitoring recovery.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:08</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>tag:us.githubstatus.com,2005:Incident/265346062025-09-24T00:26:29Z2025-10-01T21:21:17ZUS - Incident with Copilot<p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Resolved</strong> - Between 20:06 UTC September 23 and 04:58 UTC September 24, 2025, the Copilot service experienced degraded availability for Claude Sonnet 4 and 3.7 model requests.<br /><br />During this period, 0.46% of Claude 4 requests and 7.83% of Claude 3.7 requests failed.<br /><br />The reduced availability resulted from Copilot disabling routing to an upstream provider that was experiencing issues and reallocating capacity to other providers to manage requests for Claude Sonnet 3.7 and 4.<br />We are continuing to investigate the source of the issues with this provider and will provide an update as more information becomes available.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 and Claude Sonnet 4 are once again available in Copilot Chat, VS Code and other Copilot products.<br /><br />We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>22:22</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Claude Sonnet 3.7 and Claude Sonnet 4 models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider.
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>22:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p>tag:us.githubstatus.com,2005:Incident/264503092025-09-15T18:28:35Z2025-09-18T23:23:31ZUS - Disruption with some GitHub services<p><small>Sep <var data-var='date'>15</var>, <var data-var='time'>18:28</var> UTC</small><br><strong>Resolved</strong> - On September 15th, between 17:55 and 18:20 UTC, Copilot experienced degraded availability for all features. This was due to a partial deployment of a feature flag to a global rate limiter. The flag triggered behavior that unintentionally rate limited all requests, resulting in 100% of them returning 403 errors. The issue was resolved by reverting the feature flag, which resulted in immediate recovery.<br /><br />The root cause of the incident was an undetected edge case in our rate limiting logic. The flag was meant to scale down rate limiting for a subset of users, but unintentionally put our rate limiting configuration into an invalid state.<br /><br />To prevent this from happening again, we have addressed the bug with our rate limiting. We are also adding additional monitors to detect anomalies in our traffic patterns, which will allow us to identify similar issues during future deployments. Furthermore, we are exploring ways to test our rate limit scaling in our internal environment to enhance our pre-production validation process.</p><p><small>Sep <var data-var='date'>15</var>, <var data-var='time'>18:21</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p>
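To make the rate-limiting edge case described in the final incident above concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not GitHub's implementation: the class, the flag semantics, the limits, and the zero scale factor are all hypothetical. It shows how a flag intended to scale rate limiting down for a subset of users can collapse the effective limit to zero, so every request is rejected and surfaces as a 403.

```python
# Hypothetical sketch only: GitHub has not published its rate limiter code.
# All names, limits, and the zero scale factor below are illustrative assumptions.

import time


class ScaledRateLimiter:
    """Fixed-window limiter whose effective limit is base_limit * scale_factor."""

    def __init__(self, base_limit: int, window_seconds: int, scale_factor: float = 1.0):
        self.base_limit = base_limit
        self.window_seconds = window_seconds
        self.scale_factor = scale_factor
        self._window_start = time.monotonic()
        self._count = 0

    @property
    def effective_limit(self) -> int:
        # Edge case: if a partially rolled-out flag resolves scale_factor to 0
        # (or any value that truncates to 0), every request is over the limit.
        return int(self.base_limit * self.scale_factor)

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self._window_start >= self.window_seconds:
            # Start a new window and reset the counter.
            self._window_start = now
            self._count = 0
        if self._count >= self.effective_limit:
            return False  # the calling service would map this to an HTTP 403
        self._count += 1
        return True


def validate_scale_factor(scale_factor: float) -> float:
    """A pre-deployment guard like this would reject the invalid state up front."""
    if not (0.0 < scale_factor <= 1.0):
        raise ValueError(f"rate limit scale factor out of range: {scale_factor}")
    return scale_factor


if __name__ == "__main__":
    # Intended rollout: halve the limit for a subset of users.
    ok = ScaledRateLimiter(base_limit=100, window_seconds=60, scale_factor=0.5)
    # Buggy rollout: the flag resolves to 0, so the effective limit becomes 0.
    broken = ScaledRateLimiter(base_limit=100, window_seconds=60, scale_factor=0.0)
    print(ok.allow())      # True
    print(broken.allow())  # False, i.e. 100% of requests rejected with 403
```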