git.vger.kernel.org archive mirror
From: Taylor Blau <me@ttaylorr.com>
To: Junio C Hamano <gitster@pobox.com>
Cc: git@vger.kernel.org
Subject: Re: What's cooking in git.git (Sep 2019, #02; Wed, 18)
Date: Fri, 20 Sep 2019 13:08:11 -0400
Message-ID: <20190920170811.GA62895@syl.local>
In-Reply-To: <xmqqy2yl44lw.fsf@gitster-ct.c.googlers.com>

On Wed, Sep 18, 2019 at 03:33:15PM -0700, Junio C Hamano wrote:
> * tb/commit-graph-harden (2019-09-09) 3 commits
>  - commit-graph.c: handle corrupt/missing trees
>  - commit-graph.c: handle commit parsing errors
>  - t/t5318: introduce failing 'git commit-graph write' tests
>
>  The code to parse and use the commit-graph file has been made more
>  robust against corrupted input.
>
>  Will merge to 'next'.

Thanks for moving my topic along. The bug was found while generating
commit-graph files for all repositories hosted at GitHub, where a
handful of corrupt repositories triggered the crash.

We've been running this patch without issue since a few days before I
submitted it to the mailing list, and it certainly does squash the bug
I originally found.

> * jk/disable-commit-graph-during-upload-pack (2019-09-12) 2 commits
>  - upload-pack: disable commit graph more gently for shallow traversal
>  - commit-graph: bump DIE_ON_LOAD check to actual load-time
>
>  The "upload-pack" (the counterpart of "git fetch") needs to disable
>  commit-graph when responding to a shallow clone/fetch request, but
>  the way this was done made Git panic, which has been corrected.
>
>  Will merge to 'next'.

This one has a similar origin story, and has also been running at GitHub
for a few weeks. Happily, it does as advertised: upload-pack now
disables the commit-graph gracefully for shallow traversals instead of
dying.

> * jk/partial-clone-sparse-blob (2019-09-16) 4 commits
>  - list-objects-filter: use empty string instead of NULL for sparse "base"
>  - list-objects-filter: give a more specific error sparse parsing error
>  - list-objects-filter: delay parsing of sparse oid
>  - t5616: test cloning/fetching with sparse:oid=<oid> filter
>
>  The name of the blob object that stores the filter specification
>  for sparse cloning/fetching was interpreted in a wrong place in the
>  code, causing Git to abort.
>
>  Will merge to 'next'.

A previous version of this series is running at GitHub as well, also
without issue.

Thanks,
Taylor
