gvfs-helper: prevent and/or give advice on repeated downloads to shared object cache #840
Merged
dscho merged 3 commits into microsoft:vfs-2.52.0 on Jan 9, 2026
Conversation
When we are installing a loose object, finalize_object_file() first checks whether the contents match what already exists in a loose object file of the target name. However, it doesn't check whether that target is valid; it assumes it is. In the case of a power outage or a similar event, the file may be corrupt (for example: all NUL bytes). That is a common occurrence exactly when we need to install a loose object _again_: we don't think we already have it, so any copy that exists is probably bogus.

Use the flagged version with FOF_SKIP_COLLISION_CHECK to avoid these types of errors, as seen in GitHub issue microsoft#837.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
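As a rough sketch of the shape of this change: the call below uses the upstream API named above (finalize_object_file_flags() and FOF_SKIP_COLLISION_CHECK), but tmp_path, loose_path, and the error message are hypothetical stand-ins for what gvfs-helper actually computes, not the fork's exact code.

```c
/*
 * Sketch only: install a freshly downloaded loose object under its
 * final name. tmp_path and loose_path are placeholder variables.
 * FOF_SKIP_COLLISION_CHECK tells finalize_object_file_flags() to skip
 * the byte-for-byte comparison against an already-existing file of
 * the same name; that comparison is what turned a corrupt leftover
 * (e.g. all NUL bytes after a power loss) into a hard
 * "differ in contents" failure.
 */
if (finalize_object_file_flags(tmp_path, loose_path,
			       FOF_SKIP_COLLISION_CHECK))
	return error(_("could not install loose object '%s'"),
		     loose_path);
```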
Users sometimes see transient network errors, but they are actually due to some other problem within the installation of a packfile. Observed resolutions include freeing up space on a full disk or deleting the shared object cache because something was broken by file corruption or a power outage. This change only provides advice to suggest those workarounds, to help users help themselves.

This is our first advice key custom to the microsoft/git fork, so I have partitioned the key away from the others to avoid adjacent-change conflicts (at least until upstream adds a new change at the end of the alphabetical list).

We could consider providing a tool that does a more robust check of the shared object cache, but since 'git fsck' isn't safe to run (it may download missing objects), we do not have that ability at the moment. The good news is that it is safe to delete and rebuild the shared object cache as long as all local branches are pushed. The branches must be pushed because the local .git/objects/ directory is moved to the shared object cache by the 'cache-local-objects' maintenance task.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
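To illustrate the "partitioned key" point, here is the usual shape of a new advice key in advice.h and advice.c plus its call site. The key name ADVICE_GVFS_HELPER_DOWNLOAD, the config name gvfsHelperDownload, and the message text are placeholders for illustration, not necessarily what this change uses.

```c
/* advice.h (sketch): upstream keys are kept alphabetical, so a
 * fork-only key is parked after that list to avoid merge conflicts
 * when upstream appends new keys. */
enum advice_type {
	/* ... upstream advice keys, alphabetical ... */

	/* microsoft/git fork-specific keys, partitioned from the above */
	ADVICE_GVFS_HELPER_DOWNLOAD,
};

/* advice.c (sketch): map the key to its advice.* config name. */
	[ADVICE_GVFS_HELPER_DOWNLOAD] = { "gvfsHelperDownload" },

/* Call site (sketch): emit the advice once the download has failed. */
advise_if_enabled(ADVICE_GVFS_HELPER_DOWNLOAD,
		  _("a full disk or a corrupt shared object cache can "
		    "masquerade as a network error; see above"));
```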
Similar to a recent change to avoid the collision check for loose objects, do the same for prefetch packfiles. This should be rarer, but the same prefetch packfile could be downloaded from the same cache server, so it isn't out of the range of possibility.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
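The packfile side would take the same one-line shape as the loose-object sketch above; here tmp_pack and final_pack are hypothetical variables standing in for the pack/tempPacks/*.temp path and the final pack name.

```c
/*
 * Sketch only: install a downloaded prefetch packfile. As with loose
 * objects, skipping the collision check keeps a stale or corrupt
 * same-named pack from turning the install into what looks like a
 * transient network failure.
 */
if (finalize_object_file_flags(tmp_pack, final_pack,
			       FOF_SKIP_COLLISION_CHECK))
	return error(_("could not install packfile '%s'"), final_pack);
```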
dscho added a commit that referenced this pull request on Jan 27, 2026
…ed object cache (#840)

There have been a number of customer-reported problems with errors of the form

```
error: inflate: data stream error (unknown compression method)
error: unable to unpack a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3 header
error: files 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/pack/tempPacks/t-20260106-014520-049919-0001.temp' and 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3' differ in contents
error: gvfs-helper error: 'could not install loose object 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3': from GET a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3'
```

or

```
Receiving packfile 1/1 with 1 objects (bytes received): 17367934, done.
Receiving packfile 1/1 with 1 objects [retry 1/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (8/8), done.
Receiving packfile 1/1 with 1 objects [retry 2/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (16/16), done.
Receiving packfile 1/1 with 1 objects [retry 3/6] (bytes received): 17367934, done.
```

These are not actually network issues, but they look like it because they surface in the stack that does the retries. The real failures happen when installing the loose object or packfile into the shared object cache.

The loose-object installs fail when the target loose object already exists but is corrupt in some way, such as all NUL bytes because the disk wasn't flushed when the machine shut down. The error results because we do a collision check without confirming that the existing contents are valid.

The packfiles may hit similar comparison cases, but it is less likely. We update these packfile installations to also skip the collision check.

In both cases, if we hit a transient network error, we add a new advice message that suggests the two most common workarounds:

1. Your disk may be full. Make room.
2. Your shared object cache may be corrupt. Push all branches, delete it, and fetch to refill it.

I make special note of the case where the shared object cache doesn't exist and point out that it probably should, since the whole repo is suspect at that point.

* [x] This change only applies to interactions with Azure DevOps and the GVFS Protocol.

Resolves #837.
dscho added a commit that referenced this pull request on Jan 28, 2026
dscho added a commit that referenced this pull request on Jan 31, 2026
dscho added a commit that referenced this pull request on Feb 3, 2026