From: Jonathan Tan <>
Subject: Re: [PATCH 0/7] Submodules and partial clones
Date: Wed, 30 Sep 2020 13:41:59 -0700

> Yes, grep just calls `add_to_alternates_memory` and will be broken.
> When handling nested submodules `config_from_gitmodules` does the same
> thing, so that will also be broken if some of the .gitmodules files
> need fetching.
> Fixing these probably does require supporting fetching of objects from
> submodules.


> > > The patches rework various cases where objects from a submodule are
> > > added to the object store of the main repository.  There are some
> > > remaining cases where add_to_alternates_memory is used to do this,
> > > but add_submodule_odb has been removed.
> > > 
> > > I expect there will be some remaining issues, but these changes
> > > seem to be enough to get the basics working.  
> > 
> > What are the basics that work?
> I've tried at least the following, in a repo with several submodules and
> large objects (but no nested submodules):
> - git clone --recursive --filter=blob:limit=1M ...
> - git pull --rebase --recurse-submodules=on-demand
> - git show --submodule=diff <commit-with-big-submodule-object>
> - git push --recurse-submodules=check
> - git push --recurse-submodules=on-demand
> I used the partial clone for a while and didn't hit any problems, but I
> can't say what (relevant) commands I might have used.
> An important thing that I've not tried is a merge that needs to fetch
> objects.  I should probably write a testcase for that.
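
For reference, the recursive partial clone case above can be reproduced
locally with something like the following sketch (repository names, paths,
and the 1M limit are all arbitrary; it assumes a git with this series or a
later release that propagates --filter to submodules, and it sets the
uploadpack knobs that local file:// serving needs):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Submodule repo containing a blob larger than the filter limit.
git init -q "$tmp/sub"
dd if=/dev/zero of="$tmp/sub/big.bin" bs=1024 count=2048 2>/dev/null  # 2 MiB
git -C "$tmp/sub" add big.bin
git -C "$tmp/sub" -c user.email=a@b -c user.name=a commit -qm 'big blob'
git -C "$tmp/sub" config uploadpack.allowFilter true
git -C "$tmp/sub" config uploadpack.allowAnySHA1InWant true

# Superproject referencing it.
git init -q "$tmp/super"
git -C "$tmp/super" -c protocol.file.allow=always \
    submodule add -q "file://$tmp/sub" sub
git -C "$tmp/super" -c user.email=a@b -c user.name=a commit -qm 'add sub'
git -C "$tmp/super" config uploadpack.allowFilter true
git -C "$tmp/super" config uploadpack.allowAnySHA1InWant true

# With the series applied, --filter propagates into the submodule clone;
# the big blob is then fetched lazily at checkout time.
git -c protocol.file.allow=always clone -q --recursive \
    --filter=blob:limit=1M "file://$tmp/super" "$tmp/clone"
```

Without the series the clone still completes, just without the filter
applied to the submodule, so the script doubles as a before/after check.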

Thanks - so it looks like what we need are:

 (1) propagate --filter when cloning (done here)
 (2) handle how Git lazy-fetches when accessing submodule objects
  (2a) access through add_submodule_odb (seems to have been done here -
       the patches here convert these accesses into (2c))
  (2b) access through add_to_alternates_memory (reading missing objects will
       trigger a likely-to-fail fetch)
  (2c) access through repo_.* and similar functions (reading missing objects
       will fail outright)
  (2d) access through invoking a subprocess (will work)
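
As a concrete illustration of why (2d) works: a git subprocess started in
the other repository's own directory discovers that repository's object
store and promisor remote by itself, so its lazy fetch goes to the right
place. A minimal sketch (a standalone partial clone rather than a real
submodule; all paths are throwaway):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/server"
echo hello >"$tmp/server/file.txt"
git -C "$tmp/server" add file.txt
git -C "$tmp/server" -c user.email=a@b -c user.name=a commit -qm init
git -C "$tmp/server" config uploadpack.allowFilter true
git -C "$tmp/server" config uploadpack.allowAnySHA1InWant true

# Blobless partial clone; --no-checkout so the blob stays missing locally.
git clone -q --filter=blob:none --no-checkout "file://$tmp/server" "$tmp/pc"

# Reading the missing blob via a subprocess run inside the partial clone
# triggers a lazy fetch from that repo's promisor remote and succeeds.
oid=$(git -C "$tmp/pc" rev-parse HEAD:file.txt)
git -C "$tmp/pc" cat-file blob "$oid"   # prints "hello"
```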

Having (1) and (2a) means that (as you described above) we can have
generally working partial-clone submodules (in that commands like
"clone", "pull", and "push" work), except that some commands like "grep"
may behave strangely (lazy-fetching from the wrong repo and failing).
Currently, partial-clone submodules do not work at all. My initial
inclination was to say that we should resolve (2c) first, but it is true
that this change (even without (2b) and (2c)) would bring user-facing
benefit, albeit at the cost of possible negative surprises (even if we
warn in the documentation that this feature is experimental and might
fail). I'm not sure what the rest of the Git developer community thinks
about this.

> > When I looked into this, my main difficulty lay in getting the
> > lazy fetch to work in another repository. Now that lazy fetches are
> > done using a separate process, the problem has shifted to being able
> > to invoke run_command() in a separate Git repository. I haven't
> > figured out the best way to ensure that run_command() is run with a
> > clean set of environment variables (so no inheriting of GIT_DIR
> > etc.), but that doesn't seem insurmountable.
> Yes, I think that to fix promisor_remote_get_direct we need to:
> - store the promisor configuration per-repository
> - run the fetch process in the correct repository
> AFAICT we just need to set cp.dir and call prepare_submodule_repo_env
> to get the right environment for the fetch process. The per-repository
> configuration looks more fiddly to do.  I'm happy to try and make these
> additional changes (but it won't be quick as I'm busy with the day job).
> In any case we need to pass the right repository around.

Ah, good point about prepare_submodule_repo_env() - that does take care
of the environment variables.
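
To illustrate what clearing those variables buys (a throwaway-repo sketch;
`env -u GIT_DIR` stands in for what prepare_submodule_repo_env() does in C,
which also clears GIT_WORK_TREE, GIT_INDEX_FILE and friends):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/a"
git init -q "$tmp/b"

# An inherited GIT_DIR makes a child process run in repo b operate on repo a:
inherited=$(cd "$tmp/b" && GIT_DIR="$tmp/a/.git" git rev-parse --git-dir)

# Clearing the repository-locating variables before spawning the child
# lets it discover repo b on its own:
cleared=$(cd "$tmp/b" && GIT_DIR="$tmp/a/.git" env -u GIT_DIR \
          git rev-parse --git-dir)

echo "$inherited"   # $tmp/a/.git
echo "$cleared"     # .git
```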

I'll look at per-repository configuration and see what I can do too.


Thread overview: 11+ messages
2020-09-29 15:53 [PATCH 0/7] Submodules and partial clones Andrew Oakley
2020-09-29 15:53 ` [PATCH 1/7] refs: store owning repository for object lookup Andrew Oakley
2020-09-29 15:53 ` [PATCH 2/7] submodule: use separate submodule repositories Andrew Oakley
2020-09-29 15:53 ` [PATCH 3/7] Add failing test for partial clones with submodules Andrew Oakley
2020-09-29 15:53 ` [PATCH 4/7] refs: use correct repo in refs_peel_ref Andrew Oakley
2020-09-29 15:53 ` [PATCH 5/7] merge-recursive: use separate submodule repository Andrew Oakley
2020-09-29 15:53 ` [PATCH 6/7] submodule: remove add_submodule_odb Andrew Oakley
2020-09-29 15:53 ` [PATCH 7/7] submodule: use partial clone filter Andrew Oakley
2020-09-29 18:05 ` [PATCH 0/7] Submodules and partial clones Jonathan Tan
2020-09-30 13:28   ` Andrew Oakley
2020-09-30 20:41     ` Jonathan Tan [this message]
