From: Junio C Hamano <gitster@pobox.com>
To: Thomas Gummerer <t.gummerer@gmail.com>
Cc: git@vger.kernel.org,
	Johannes Schindelin <johannes.schindelin@gmx.de>,
	Taylor Blau <me@ttaylorr.com>
Subject: Re: [PATCH] fetch --prune: exit with error if pruning fails
Date: Thu, 27 Jan 2022 11:57:32 -0800
Message-ID: <xmqqmtjh0x5f.fsf@gitster.g>
In-Reply-To: <20220127153714.1190894-1-t.gummerer@gmail.com> (Thomas Gummerer's message of "Thu, 27 Jan 2022 15:37:14 +0000")

Thomas Gummerer <t.gummerer@gmail.com> writes:

> When pruning refs fails, we currently print an error to stderr, but
> still exit 0 from 'git fetch'.  Since this is a genuine error fetch
> should be exiting with some non-zero exit code.  Make it so.
>
> The --prune option was introduced in f360d844de ("builtin-fetch: add
> --prune option", 2009-11-10).  Unfortunately it's unclear from that
> commit whether ignoring the exit code was an oversight or
> intentional, but it feels like an oversight.

It is a good idea to signal, with a non-zero status, that the user
needs to inspect the situation and possibly take corrective action.
I however do not know if it is sufficient to simply relay whatever
exit status prune_refs() returned, which may happen to clash with
other possible non-zero exit statuses and become indistinguishable
from other kinds of errors.
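
A minimal sketch of what a dedicated code could look like (the
constant name and its value below are made up by me; git does not
define such a code today, and prune_status is my own local):

    /* hypothetical: one fixed, documented code for prune failures */
    #define FETCH_EXIT_PRUNE_FAILED 2

    int prune_status;

    if (rs->nr)
            prune_status = prune_refs(rs, ref_map, transport->url);
    else
            prune_status = prune_refs(&transport->remote->fetch,
                                      ref_map, transport->url);
    if (prune_status)
            retcode = FETCH_EXIT_PRUNE_FAILED;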

>  		if (rs->nr) {
> -			prune_refs(rs, ref_map, transport->url);
> +			retcode = prune_refs(rs, ref_map, transport->url);
>  		} else {
> -			prune_refs(&transport->remote->fetch,
> -				   ref_map,
> -				   transport->url);
> +			retcode = prune_refs(&transport->remote->fetch,
> +					     ref_map,
> +					     transport->url);
> +		}

It seems that it is possible for do_fetch() to return a negative
value, even without this patch.  The return value from prune_refs()
comes from refs.c::delete_refs(), which can come from error_errno()
and hence be -1, so this patch merely adds more instances of an
existing problem.

And the value propagates up to cmd_fetch() via fetch_one().  It is
then relayed to exit() unchanged via this callchain:

    git.c::cmd_main()
      git.c::run_argv()
        git.c::handle_builtin()
          git.c::run_builtin()

This is not a new problem, and it does not need to be corrected as
part of this patch, but let's leave a mental note as #leftoverbits
for now.
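
For illustration only (a stand-alone toy program, not git code): a
negative value handed to exit() is truncated to an unsigned byte by
the operating system, so the -1 coming from error_errno() would show
up as 255 in "$?":

    #include <stdlib.h>

    int main(void)
    {
            /* the calling shell sees $? == 255, not -1 */
            exit(-1);
    }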

> +		if (retcode) {
> +			free_refs(ref_map);
> +			goto cleanup;
>  		}

This part is iffy.  We tried to prune refs; we may have removed some
of the refs that have disappeared from the other side, but the
failure we noticed may have left others behind.

Is it sensible to abort the fetching?  I am undecided, but without
further input my gut reaction is that it would be safe, and perhaps
even better, to treat this as a soft error: keep fetching from the
other side to get the user as close as possible to where they wanted
to go, given that we have already paid the cost of discovering the
refs from the other side.
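
To make the "soft error" idea more concrete, something along these
lines (a sketch only; the prune_failed variable is my own, and I
have not thought through how it interacts with the rest of
do_fetch()):

    int prune_failed = 0;

    if (rs->nr)
            prune_failed = !!prune_refs(rs, ref_map, transport->url);
    else
            prune_failed = !!prune_refs(&transport->remote->fetch,
                                        ref_map, transport->url);
    /*
     * Do not jump to cleanup here; let the rest of the fetch
     * proceed, and only reflect the prune failure in the final
     * exit status.
     */

    if (prune_failed && !retcode)
            retcode = 1;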

> diff --git a/t/t5510-fetch.sh b/t/t5510-fetch.sh
> index 20f7110ec1..df824cc3d0 100755
> --- a/t/t5510-fetch.sh
> +++ b/t/t5510-fetch.sh
> @@ -164,6 +164,16 @@ test_expect_success 'fetch --prune --tags with refspec prunes based on refspec'
>  	git rev-parse sometag
>  '
>  
> +test_expect_success REFFILES 'fetch --prune fails to delete branches' '
> +	cd "$D" &&
> +	git clone . prune-fail &&
> +	cd prune-fail &&
> +	git update-ref refs/remotes/origin/extrabranch main &&
> +	>.git/packed-refs.new &&
> +	test_must_fail git fetch --prune origin

Is it because the ".new" lockfile on packed-refs prevents deletion
of refs, but does not block creation of new refs or updates to
existing ones, that this is an effective test for the "--prune"
issue?  If we somehow "locked" all ref updates, the fetch would fail
even without "--prune", so the test may be "effective", but it
smells like knowing too much implementation detail.  Yuck, but I do
not offhand think of any better way (it is easier to think of worse
ways), so without further input I would say that this is the best
(or "least bad") we can do.

Another #leftoverbits tangent is that many tests in this script are
written in such an old style that they chdir around without
isolating themselves in subshells, while some others do.  We may
want to clean them up when the file is quiescent.

Thanks.
