* [PATCH 00/23] pack-bitmap: bitmap generation improvements
@ 2020-11-11 19:41 Taylor Blau
  2020-11-11 19:41 ` [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
                   ` (25 more replies)
  0 siblings, 26 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:41 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

This series contains some patches that GitHub has been using in its fork
for the past few months to improve generating reachability bitmaps,
particularly in pathological cases, such as the repo containing all
forks of chromium/chromium.

The patches that follow are organized into five parts:

  - The first nine patches do some basic clean-up and fix a bug that we
    were able to exercise in tests while writing these patches.

  - The next two patches reimplement bitmap writing to avoid making
    multiple passes over the object graph. This approach regresses both
    the time and memory needed to generate bitmaps on the kernel's
    fork-network, but is a useful stepping stone for further
    improvements.

  - The six patches that follow culminate in a patch to build fewer
    intermediate bitmaps during the walk, reducing both memory and time
    for reasonably-sized repositories. (Which intermediate bitmaps are
    considered "important" is discussed in detail in the seventeenth
    patch.)

  - The next several patches make reusing previously generated reachability
    bitmaps purely an optimization for generating new bitmaps. Importantly, that
    allows the bitmap selection process to pick better commits to bitmap going
    forward, rather than blindly reusing previously selected ones. They also
    include some light refactoring, and a patch to avoid tree walks when
    existing bitmaps suffice.

  - The final two patches address a trade-off in the prior patches between
    walking a wide history only once with high memory cost, and walking the same
    history multiple times with lower memory cost. Here, the walk is reduced to
    only cover the first-parent history. The final patch treats existing bitmaps
    as maximal in order to make it more difficult for a different set of
    selected commits to "walk around" the previously selected commits and force
    a large number of new bitmaps to be computed.

In the end, no blockbuster performance improvements are attained on
normal- to large-sized repositories, but the new bitmap generation routine
helps substantially on enormous repositories, like the chromium/chromium
fork-network.

Individual performance numbers are available in the patches throughout.

This series is a prerequisite for a number of other bitmap-related patches
in GitHub's fork, including multi-pack bitmaps.

Derrick Stolee (9):
  pack-bitmap-write: fill bitmap with commit history
  bitmap: add bitmap_diff_nonzero()
  commit: implement commit_list_contains()
  t5310: add branch-based checks
  pack-bitmap-write: rename children to reverse_edges
  pack-bitmap-write: build fewer intermediate bitmaps
  pack-bitmap-write: use existing bitmaps
  pack-bitmap-write: relax unique rewalk condition
  pack-bitmap-write: better reuse bitmaps

Jeff King (11):
  pack-bitmap: fix header size check
  pack-bitmap: bounds-check size of cache extension
  t5310: drop size of truncated ewah bitmap
  rev-list: die when --test-bitmap detects a mismatch
  ewah: factor out bitmap growth
  ewah: make bitmap growth less aggressive
  ewah: implement bitmap_or()
  ewah: add bitmap_dup() function
  pack-bitmap-write: reimplement bitmap writing
  pack-bitmap-write: pass ownership of intermediate bitmaps
  pack-bitmap-write: ignore BITMAP_FLAG_REUSE

Taylor Blau (3):
  ewah/ewah_bitmap.c: grow buffer past 1
  pack-bitmap: factor out 'bitmap_for_commit()'
  pack-bitmap: factor out 'add_commit_to_bitmap()'

 builtin/pack-objects.c  |   1 -
 commit.c                |  11 +
 commit.h                |   2 +
 ewah/bitmap.c           |  54 ++++-
 ewah/ewah_bitmap.c      |   2 +-
 ewah/ewok.h             |   3 +-
 pack-bitmap-write.c     | 452 +++++++++++++++++++++++++---------------
 pack-bitmap.c           | 130 +++++-------
 pack-bitmap.h           |   8 +-
 t/t5310-pack-bitmaps.sh | 164 ++++++++++++---
 10 files changed, 548 insertions(+), 279 deletions(-)

--
2.29.2.156.gc03786897f

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
@ 2020-11-11 19:41 ` Taylor Blau
  2020-11-22 19:36   ` Junio C Hamano
  2020-11-11 19:41 ` [PATCH 02/23] pack-bitmap: fix header size check Taylor Blau
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:41 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

When the buffer size is exactly 1, we fail to grow it properly, since
the integer truncation means that 1 * 3 / 2 = 1. This can cause a bad
write on the line below.

Bandaid this by first padding the buffer by 16, and then growing it.
This still allows old blocks to fit into new ones, but fixes the case
where the block size equals 1.
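
To spell out the integer arithmetic (an illustration, not part of the
patch): with a buffer size of 1, the old request computes

    1 * 3 / 2 = 3 / 2 = 1

so the requested size equals the current size and nothing grows, while
padding first always leaves headroom:

    (1 + 16) * 3 / 2 = 51 / 2 = 25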

Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/ewah_bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
index d59b1afe3d..3fae04ad00 100644
--- a/ewah/ewah_bitmap.c
+++ b/ewah/ewah_bitmap.c
@@ -45,7 +45,7 @@ static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
 static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
 {
 	if (self->buffer_size + 1 >= self->alloc_size)
-		buffer_grow(self, self->buffer_size * 3 / 2);
+		buffer_grow(self, (self->buffer_size + 16) * 3 / 2);
 
 	self->buffer[self->buffer_size++] = value;
 }
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 02/23] pack-bitmap: fix header size check
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
  2020-11-11 19:41 ` [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
@ 2020-11-11 19:41 ` Taylor Blau
  2020-11-12 17:39   ` Martin Ågren
  2020-11-11 19:42 ` [PATCH 03/23] pack-bitmap: bounds-check size of cache extension Taylor Blau
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:41 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

When we parse a .bitmap header, we first check that we have enough bytes
to make a valid header. We do that based on sizeof(struct
bitmap_disk_header). However, as of 0f4d6cada8 (pack-bitmap: make bitmap
header handling hash agnostic, 2019-02-19), that struct oversizes its
checksum member to GIT_MAX_RAWSZ. That means we need to adjust for the
difference between that constant and the size of the actual hash we're
using. That commit adjusted the code which moves our pointer forward,
but forgot to update the size check.

This meant we were overly strict about the header size (requiring room
for a 32-byte worst-case hash, when sha1 is only 20 bytes). But in
practice it didn't matter because bitmap files tend to have at least 12
bytes of actual data anyway, so it was unlikely for a valid file to be
caught by this.

Let's fix it by pulling the header size into a separate variable and
using it in both spots. That fixes the bug and simplifies the code,
making it harder to reintroduce a mismatch like this in the future. It
will also come
in handy in the next patch for more bounds checking.
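
For concreteness, a sketch of the arithmetic with SHA-1, assuming the
struct layout from pack-bitmap.h (a 4-byte magic, two 16-bit fields, a
32-bit entry count, and a GIT_MAX_RAWSZ-sized checksum):

    sizeof(struct bitmap_disk_header) = 4 + 2 + 2 + 4 + 32 = 44 bytes
    header_size = 44 - GIT_MAX_RAWSZ + the_hash_algo->rawsz
                = 44 - 32 + 20
                = 32 bytes

so both the bounds check and the map_pos advance now agree on the
32-byte on-disk size.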

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4077e731e8..cea3bb88bf 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -138,8 +138,9 @@ static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
 static int load_bitmap_header(struct bitmap_index *index)
 {
 	struct bitmap_disk_header *header = (void *)index->map;
+	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
 
-	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
+	if (index->map_size < header_size)
 		return error("Corrupted bitmap index (missing header data)");
 
 	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
@@ -164,7 +165,7 @@ static int load_bitmap_header(struct bitmap_index *index)
 	}
 
 	index->entry_count = ntohl(header->entry_count);
-	index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
+	index->map_pos += header_size;
 	return 0;
 }
 
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
  2020-11-11 19:41 ` [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
  2020-11-11 19:41 ` [PATCH 02/23] pack-bitmap: fix header size check Taylor Blau
@ 2020-11-11 19:42 ` Taylor Blau
  2020-11-12 17:47   ` Martin Ågren
  2020-11-11 19:42 ` [PATCH 04/23] t5310: drop size of truncated ewah bitmap Taylor Blau
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:42 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

A .bitmap file may have a "name hash cache" extension, which puts a
sequence of uint32_t values (one per object) at the end of the file. When
we see a flag indicating this extension, we blindly subtract the
appropriate number of bytes from our available length. However, if the
.bitmap file is too short, we'll underflow our length variable and wrap
around, thinking we have a very large length. This can lead to reading
out-of-bounds bytes while loading individual ewah bitmaps.

We can fix this by checking the number of available bytes when we parse
the header. The existing "truncated bitmap" test is now split into two
tests: one where we don't have this extension at all (and hence actually
do try to read a truncated ewah bitmap) and one where we realize
up-front that we can't even fit in the cache structure. We'll check
stderr in each case to make sure we hit the error we're expecting.
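
A rough illustration of the underflow being guarded against (the sizes
and variable names here are invented for the sketch, not taken from the
existing code):

    size_t avail = 512 - header_size;   /* e.g. a 512-byte truncated .bitmap */
    size_t cache_size = st_mult(num_objects, sizeof(uint32_t));
    avail -= cache_size;   /* e.g. 480 - 400000 wraps around to a huge value */

With the new check, such a file is rejected up front instead of letting
the bogus length drive later ewah reads out of bounds.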

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c           |  8 ++++++--
 t/t5310-pack-bitmaps.sh | 17 +++++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index cea3bb88bf..42d4824c76 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -153,14 +153,18 @@ static int load_bitmap_header(struct bitmap_index *index)
 	/* Parse known bitmap format options */
 	{
 		uint32_t flags = ntohs(header->options);
+		uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
+		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;
 
 		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
 			return error("Unsupported options for bitmap index file "
 				"(Git requires BITMAP_OPT_FULL_DAG)");
 
 		if (flags & BITMAP_OPT_HASH_CACHE) {
-			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
-			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
+			if (index->map + header_size + cache_size > index_end)
+				return error("corrupted bitmap index file (too short to fit hash cache)");
+			index->hashes = (void *)(index_end - cache_size);
+			index_end -= cache_size;
 		}
 	}
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 8318781d2b..e2c3907a68 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -343,7 +343,8 @@ test_expect_success 'pack reuse respects --incremental' '
 	test_must_be_empty actual
 '
 
-test_expect_success 'truncated bitmap fails gracefully' '
+test_expect_success 'truncated bitmap fails gracefully (ewah)' '
+	test_config pack.writebitmaphashcache false &&
 	git repack -ad &&
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
@@ -352,7 +353,19 @@ test_expect_success 'truncated bitmap fails gracefully' '
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-	test_i18ngrep corrupt stderr
+	test_i18ngrep corrupt.ewah.bitmap stderr
+'
+
+test_expect_success 'truncated bitmap fails gracefully (cache)' '
+	git repack -ad &&
+	git rev-list --use-bitmap-index --count --all >expect &&
+	bitmap=$(ls .git/objects/pack/*.bitmap) &&
+	test_when_finished "rm -f $bitmap" &&
+	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	mv -f $bitmap.tmp $bitmap &&
+	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
+	test_cmp expect actual &&
+	test_i18ngrep corrupted.bitmap.index stderr
 '
 
 # have_delta <obj> <expected_base>
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 04/23] t5310: drop size of truncated ewah bitmap
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (2 preceding siblings ...)
  2020-11-11 19:42 ` [PATCH 03/23] pack-bitmap: bounds-check size of cache extension Taylor Blau
@ 2020-11-11 19:42 ` Taylor Blau
  2020-11-11 19:42 ` [PATCH 05/23] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
                   ` (21 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:42 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

We truncate the .bitmap file to 512 bytes and expect to run into
problems reading an individual ewah bitmap within it. But this length
is somewhat
arbitrary, and just happened to work when the test was added in
9d2e330b17 (ewah_read_mmap: bounds-check mmap reads, 2018-06-14).

An upcoming commit will change the size of the history we create in the
test repo, which will cause this test to fail. We can future-proof it a
bit more by reducing the size of the truncated bitmap file.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index e2c3907a68..70a4fc4843 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -349,7 +349,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 05/23] rev-list: die when --test-bitmap detects a mismatch
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (3 preceding siblings ...)
  2020-11-11 19:42 ` [PATCH 04/23] t5310: drop size of truncated ewah bitmap Taylor Blau
@ 2020-11-11 19:42 ` Taylor Blau
  2020-11-11 19:42 ` [PATCH 06/23] ewah: factor out bitmap growth Taylor Blau
                   ` (20 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:42 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

You can use "git rev-list --test-bitmap HEAD" to check that bitmaps
produce the same answer we'd get from a regular traversal. But if we
detect an error, we only print "Mismatch!" and still exit with a
successful (zero) exit code.

That makes the uses of --test-bitmap in the test suite (e.g., in t5310)
mostly pointless: even if we saw an error, the tests wouldn't notice.
Let's instead call die(), which will let these tests work as designed,
and alert us if the bitmaps are bogus.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 42d4824c76..82c6bf2843 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1328,7 +1328,7 @@ void test_bitmap_walk(struct rev_info *revs)
 	if (bitmap_equals(result, tdata.base))
 		fprintf(stderr, "OK!\n");
 	else
-		fprintf(stderr, "Mismatch!\n");
+		die("mismatch in bitmap results");
 
 	free_bitmap_index(bitmap_git);
 }
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 06/23] ewah: factor out bitmap growth
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (4 preceding siblings ...)
  2020-11-11 19:42 ` [PATCH 05/23] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
@ 2020-11-11 19:42 ` Taylor Blau
  2020-11-11 19:42 ` [PATCH 07/23] ewah: make bitmap growth less aggressive Taylor Blau
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:42 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

We auto-grow bitmaps when somebody asks to set a bit whose position is
outside of our currently allocated range. Other operations besides
single bit-setting might need to do this, too, so let's pull it into its
own function.

Note that we change the semantics a little: you now ask for the number
of words you'd like to have, not the id of the block you'd like to write
to.
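
For example (matching the hunk below), a caller that wants to touch
word index 'block' now asks for 'block + 1' words rather than passing
the block id itself:

    bitmap_grow(self, block + 1);   /* make sure self->words[block] exists */
    self->words[block] |= EWAH_MASK(pos);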

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index d8cec585af..7c1ecfa6fd 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,18 +35,22 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
-void bitmap_set(struct bitmap *self, size_t pos)
+static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	size_t block = EWAH_BLOCK(pos);
-
-	if (block >= self->word_alloc) {
+	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = block ? block * 2 : 1;
+		self->word_alloc = word_alloc * 2;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
 	}
+}
 
+void bitmap_set(struct bitmap *self, size_t pos)
+{
+	size_t block = EWAH_BLOCK(pos);
+
+	bitmap_grow(self, block + 1);
 	self->words[block] |= EWAH_MASK(pos);
 }
 
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 07/23] ewah: make bitmap growth less aggressive
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (5 preceding siblings ...)
  2020-11-11 19:42 ` [PATCH 06/23] ewah: factor out bitmap growth Taylor Blau
@ 2020-11-11 19:42 ` Taylor Blau
  2020-11-22 20:32   ` Junio C Hamano
  2020-11-11 19:43 ` [PATCH 08/23] ewah: implement bitmap_or() Taylor Blau
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:42 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

If you ask to set a bit in the Nth word and we haven't yet allocated
that many slots in our array, we'll increase the bitmap size to 2*N.
This means we might frequently end up with bitmaps that are twice the
necessary size (as soon as you ask for the biggest bit, we'll size up to
twice that).

But if we just allocate as many words as were asked for, we may not grow
fast enough. The worst case there is setting bit 0, then 1, etc. Each
time we grow we'd just extend by one more word, giving us linear
reallocations (and quadratic memory copies).

Let's combine those by allocating the maximum of:

 - what the caller asked for

 - a geometric increase over the existing size; we'll switch to 3/2
   instead of 2 here. That's less aggressive and may help avoid
   fragmenting memory: the two previous allocations add up to more than
   the next one (N + 3N/2 = 5N/2 > 9N/4 = (3/2)^2 * N), so old chunks
   can be reused as we scale up.

Our worst case still allocates 3/2 of what we need (you set bit N-1,
then setting bit N causes us to grow to 3N/2 words), but our average
should be much better.
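
Put together, the growth rule after this patch is, in sketch form
(friendlier names than the hunk below, which has the exact code):

    new_alloc = old_alloc * 3 / 2;    /* geometric step */
    if (requested > new_alloc)        /* caller needs even more than that */
            new_alloc = requested;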

This isn't usually that big a deal, but it will matter as we shift the
reachability bitmap generation code to store more bitmaps in memory.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 7c1ecfa6fd..43a59d7fed 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -39,7 +39,9 @@ static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = word_alloc * 2;
+		self->word_alloc = old_size * 3 / 2;
+		if (word_alloc > self->word_alloc)
+			self->word_alloc = word_alloc;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 08/23] ewah: implement bitmap_or()
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (6 preceding siblings ...)
  2020-11-11 19:42 ` [PATCH 07/23] ewah: make bitmap growth less aggressive Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-22 20:34   ` Junio C Hamano
  2020-11-11 19:43 ` [PATCH 09/23] ewah: add bitmap_dup() function Taylor Blau
                   ` (17 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

We have a function to bitwise-OR an ewah into an uncompressed bitmap,
but not to OR two uncompressed bitmaps. Let's add it.

Interestingly, we have a public header declaration going back to
e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
function was never implemented.
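
A minimal usage sketch (hypothetical variable names):

    bitmap_or(dst, src);   /* dst |= src; dst grows as needed, src is untouched */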

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 43a59d7fed..c3f8e7242b 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -127,6 +127,15 @@ void bitmap_and_not(struct bitmap *self, struct bitmap *other)
 		self->words[i] &= ~other->words[i];
 }
 
+void bitmap_or(struct bitmap *self, const struct bitmap *other)
+{
+	size_t i;
+
+	bitmap_grow(self, other->word_alloc);
+	for (i = 0; i < other->word_alloc; i++)
+		self->words[i] |= other->words[i];
+}
+
 void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other)
 {
 	size_t original_size = self->word_alloc;
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 09/23] ewah: add bitmap_dup() function
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (7 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 08/23] ewah: implement bitmap_or() Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 10/23] pack-bitmap-write: reimplement bitmap writing Taylor Blau
                   ` (16 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

There's no easy way to make a copy of a bitmap. Obviously a caller can
iterate over the bits and set them one by one in a new bitmap, but we
can go much faster by copying whole words with memcpy().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 7 +++++++
 ewah/ewok.h   | 1 +
 2 files changed, 8 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index c3f8e7242b..eb7e2539be 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,6 +35,13 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
+struct bitmap *bitmap_dup(const struct bitmap *src)
+{
+	struct bitmap *dst = bitmap_word_alloc(src->word_alloc);
+	COPY_ARRAY(dst->words, src->words, src->word_alloc);
+	return dst;
+}
+
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	if (word_alloc > self->word_alloc) {
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 011852bef1..1fc555e672 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -173,6 +173,7 @@ struct bitmap {
 
 struct bitmap *bitmap_new(void);
 struct bitmap *bitmap_word_alloc(size_t word_alloc);
+struct bitmap *bitmap_dup(const struct bitmap *src);
 void bitmap_set(struct bitmap *self, size_t pos);
 void bitmap_unset(struct bitmap *self, size_t pos);
 int bitmap_get(struct bitmap *self, size_t pos);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 10/23] pack-bitmap-write: reimplement bitmap writing
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (8 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 09/23] ewah: add bitmap_dup() function Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 11/23] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

The bitmap generation code works by iterating over the set of commits
for which we plan to write bitmaps, and then for each one performing a
traditional traversal over the reachable commits and trees, filling in
the bitmap. Between two traversals, we can often reuse the previous
bitmap result as long as the first commit is an ancestor of the second.
However, our worst case is that we may end up doing "n" complete
traversals to the root in order to create "n" bitmaps.

In a real-world case (the shared-storage repo consisting of all GitHub
forks of chromium/chromium), we perform very poorly: generating bitmaps
takes ~3 hours, whereas we can walk the whole object graph in ~3
minutes.

This commit completely rewrites the algorithm, with the goal of
accessing each object only once. It works roughly like this:

  - generate a list of commits in topo-order using a single traversal

  - invert the edges of the graph (so have parents point at their
    children)

  - make one pass in reverse topo-order, generating a bitmap for each
    commit and passing the result along to child nodes

We generate correct results because each node we visit has already had
all of its ancestors added to the bitmap. And we make only two linear
passes over the commits.
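
As a condensed sketch of the resulting bitmap_writer_build() loop
(declarations, progress output, and the dup-vs-OR detail of the child
propagation are omitted; the diff below has the real code):

    bitmap_builder_init(&bb, &writer);      /* topo-order commits + reversed edges */
    for (i = bb.commits_nr; i > 0; i--) {   /* i.e. reverse topo-order */
            fill_bitmap_commit(ent, commit);        /* this commit plus its trees */
            if (ent->selected)
                    store_selected(ent, commit);    /* convert to ewah and keep */
            while ((child = pop_commit(&ent->children)))
                    bitmap_or(child_ent->bitmap, ent->bitmap);  /* pass result down */
            bitmap_free(ent->bitmap);
    }
    bitmap_builder_clear(&bb);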

We also visit each tree usually only once. When filling in a bitmap, we
don't bother to recurse into trees whose bit is already set in the
bitmap (since we know we've already done so when setting their bit).
That means that if commit A references tree T, none of its descendants
will need to open T again. I say "usually", though, because it is
possible for a given tree to be mentioned in unrelated parts of history
(e.g., cherry-picking to a parallel branch).

So we've accomplished our goal, and the resulting algorithm is pretty
simple to understand. But there are some downsides, at least with this
initial implementation:

  - we no longer reuse the results of any on-disk bitmaps when
    generating. So we'd expect to sometimes be slower than the original
    when bitmaps already exist. However, this is something we'll be able
    to add back in later.

  - we use much more memory. Instead of keeping one bitmap in memory at
    a time, we're passing them up through the graph. So our memory use
    should scale with the graph width (times the size of a bitmap).

So how does it perform?

For a clone of linux.git, generating bitmaps from scratch with the old
algorithm took 63s. Using this algorithm it takes 205s. Which is much
worse, but _might_ be acceptable if it behaved linearly as the size
grew. It also increases peak heap usage by ~1G. That's not impossibly
large, but not encouraging.

On the complete fork-network of torvalds/linux, it increases the peak
RAM usage by 40GB. Yikes. (I forgot to record the time it took, but the
memory usage was too much to consider this reasonable anyway).

On the complete fork-network of chromium/chromium, I ran out of memory
before succeeding. Some back-of-the-envelope calculations indicate it
would need 80+GB to complete.

So at this stage, we've managed to make things much worse. But because
of the way this new algorithm is structured, there are a lot of
opportunities for optimization on top. We'll start implementing those in
the follow-on patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 303 ++++++++++++++++++++++++--------------------
 1 file changed, 169 insertions(+), 134 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 5e998bdaa7..f2f0b6b2c2 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -110,8 +110,6 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
 /**
  * Compute the actual bitmaps
  */
-static struct object **seen_objects;
-static unsigned int seen_objects_nr, seen_objects_alloc;
 
 static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
 {
@@ -127,21 +125,6 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	writer.selected_nr++;
 }
 
-static inline void mark_as_seen(struct object *object)
-{
-	ALLOC_GROW(seen_objects, seen_objects_nr + 1, seen_objects_alloc);
-	seen_objects[seen_objects_nr++] = object;
-}
-
-static inline void reset_all_seen(void)
-{
-	unsigned int i;
-	for (i = 0; i < seen_objects_nr; ++i) {
-		seen_objects[i]->flags &= ~(SEEN | ADDED | SHOWN);
-	}
-	seen_objects_nr = 0;
-}
-
 static uint32_t find_object_pos(const struct object_id *oid)
 {
 	struct object_entry *entry = packlist_find(writer.to_pack, oid);
@@ -154,60 +137,6 @@ static uint32_t find_object_pos(const struct object_id *oid)
 	return oe_in_pack_pos(writer.to_pack, entry);
 }
 
-static void show_object(struct object *object, const char *name, void *data)
-{
-	struct bitmap *base = data;
-	bitmap_set(base, find_object_pos(&object->oid));
-	mark_as_seen(object);
-}
-
-static void show_commit(struct commit *commit, void *data)
-{
-	mark_as_seen((struct object *)commit);
-}
-
-static int
-add_to_include_set(struct bitmap *base, struct commit *commit)
-{
-	khiter_t hash_pos;
-	uint32_t bitmap_pos = find_object_pos(&commit->object.oid);
-
-	if (bitmap_get(base, bitmap_pos))
-		return 0;
-
-	hash_pos = kh_get_oid_map(writer.bitmaps, commit->object.oid);
-	if (hash_pos < kh_end(writer.bitmaps)) {
-		struct bitmapped_commit *bc = kh_value(writer.bitmaps, hash_pos);
-		bitmap_or_ewah(base, bc->bitmap);
-		return 0;
-	}
-
-	bitmap_set(base, bitmap_pos);
-	return 1;
-}
-
-static int
-should_include(struct commit *commit, void *_data)
-{
-	struct bitmap *base = _data;
-
-	if (!add_to_include_set(base, commit)) {
-		struct commit_list *parent = commit->parents;
-
-		mark_as_seen((struct object *)commit);
-
-		while (parent) {
-			parent->item->object.flags |= SEEN;
-			mark_as_seen((struct object *)parent->item);
-			parent = parent->next;
-		}
-
-		return 0;
-	}
-
-	return 1;
-}
-
 static void compute_xor_offsets(void)
 {
 	static const int MAX_XOR_OFFSET_SEARCH = 10;
@@ -248,79 +177,185 @@ static void compute_xor_offsets(void)
 	}
 }
 
-void bitmap_writer_build(struct packing_data *to_pack)
+struct bb_commit {
+	struct commit_list *children;
+	struct bitmap *bitmap;
+	unsigned selected:1;
+	unsigned idx; /* within selected array */
+};
+
+define_commit_slab(bb_data, struct bb_commit);
+
+struct bitmap_builder {
+	struct bb_data data;
+	struct commit **commits;
+	size_t commits_nr, commits_alloc;
+};
+
+static void bitmap_builder_init(struct bitmap_builder *bb,
+				struct bitmap_writer *writer)
 {
-	static const double REUSE_BITMAP_THRESHOLD = 0.2;
-
-	int i, reuse_after, need_reset;
-	struct bitmap *base = bitmap_new();
 	struct rev_info revs;
+	struct commit *commit;
+	unsigned int i;
+
+	memset(bb, 0, sizeof(*bb));
+	init_bb_data(&bb->data);
+
+	reset_revision_walk();
+	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
+	revs.topo_order = 1;
+
+	for (i = 0; i < writer->selected_nr; i++) {
+		struct commit *c = writer->selected[i].commit;
+		struct bb_commit *ent = bb_data_at(&bb->data, c);
+		ent->selected = 1;
+		ent->idx = i;
+		add_pending_object(&revs, &c->object, "");
+	}
+
+	if (prepare_revision_walk(&revs))
+		die("revision walk setup failed");
+
+	while ((commit = get_revision(&revs))) {
+		struct commit_list *p;
+
+		parse_commit_or_die(commit);
+
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = commit;
+
+		for (p = commit->parents; p; p = p->next) {
+			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
+			commit_list_insert(commit, &ent->children);
+		}
+	}
+}
+
+static void bitmap_builder_clear(struct bitmap_builder *bb)
+{
+	clear_bb_data(&bb->data);
+	free(bb->commits);
+	bb->commits_nr = bb->commits_alloc = 0;
+}
+
+static void fill_bitmap_tree(struct bitmap *bitmap,
+			     struct tree *tree)
+{
+	uint32_t pos;
+	struct tree_desc desc;
+	struct name_entry entry;
+
+	/*
+	 * If our bit is already set, then there is nothing to do. Both this
+	 * tree and all of its children will be set.
+	 */
+	pos = find_object_pos(&tree->object.oid);
+	if (bitmap_get(bitmap, pos))
+		return;
+	bitmap_set(bitmap, pos);
+
+	if (parse_tree(tree) < 0)
+		die("unable to load tree object %s",
+		    oid_to_hex(&tree->object.oid));
+	init_tree_desc(&desc, tree->buffer, tree->size);
+
+	while (tree_entry(&desc, &entry)) {
+		switch (object_type(entry.mode)) {
+		case OBJ_TREE:
+			fill_bitmap_tree(bitmap,
+					 lookup_tree(the_repository, &entry.oid));
+			break;
+		case OBJ_BLOB:
+			bitmap_set(bitmap, find_object_pos(&entry.oid));
+			break;
+		default:
+			/* Gitlink, etc; not reachable */
+			break;
+		}
+	}
+
+	free_tree_buffer(tree);
+}
+
+static void fill_bitmap_commit(struct bb_commit *ent,
+			       struct commit *commit)
+{
+	if (!ent->bitmap)
+		ent->bitmap = bitmap_new();
+
+	/*
+	 * mark ourselves, but do not bother with parents; their values
+	 * will already have been propagated to us
+	 */
+	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
+	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+}
+
+static void store_selected(struct bb_commit *ent, struct commit *commit)
+{
+	struct bitmapped_commit *stored = &writer.selected[ent->idx];
+	khiter_t hash_pos;
+	int hash_ret;
+
+	/*
+	 * the "reuse bitmaps" phase may have stored something here, but
+	 * our new algorithm doesn't use it. Drop it.
+	 */
+	if (stored->bitmap)
+		ewah_free(stored->bitmap);
+
+	stored->bitmap = bitmap_to_ewah(ent->bitmap);
+
+	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
+	if (hash_ret == 0)
+		die("Duplicate entry when writing index: %s",
+		    oid_to_hex(&commit->object.oid));
+	kh_value(writer.bitmaps, hash_pos) = stored;
+}
+
+void bitmap_writer_build(struct packing_data *to_pack)
+{
+	struct bitmap_builder bb;
+	size_t i;
+	int nr_stored = 0; /* for progress */
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
 
 	if (writer.show_progress)
 		writer.progress = start_progress("Building bitmaps", writer.selected_nr);
-
-	repo_init_revisions(to_pack->repo, &revs, NULL);
-	revs.tag_objects = 1;
-	revs.tree_objects = 1;
-	revs.blob_objects = 1;
-	revs.no_walk = 0;
-
-	revs.include_check = should_include;
-	reset_revision_walk();
-
-	reuse_after = writer.selected_nr * REUSE_BITMAP_THRESHOLD;
-	need_reset = 0;
-
-	for (i = writer.selected_nr - 1; i >= 0; --i) {
-		struct bitmapped_commit *stored;
-		struct object *object;
-
-		khiter_t hash_pos;
-		int hash_ret;
-
-		stored = &writer.selected[i];
-		object = (struct object *)stored->commit;
-
-		if (stored->bitmap == NULL) {
-			if (i < writer.selected_nr - 1 &&
-			    (need_reset ||
-			     !in_merge_bases(writer.selected[i + 1].commit,
-					     stored->commit))) {
-			    bitmap_reset(base);
-			    reset_all_seen();
-			}
-
-			add_pending_object(&revs, object, "");
-			revs.include_check_data = base;
-
-			if (prepare_revision_walk(&revs))
-				die("revision walk setup failed");
-
-			traverse_commit_list(&revs, show_commit, show_object, base);
-
-			object_array_clear(&revs.pending);
-
-			stored->bitmap = bitmap_to_ewah(base);
-			need_reset = 0;
-		} else
-			need_reset = 1;
-
-		if (i >= reuse_after)
-			stored->flags |= BITMAP_FLAG_REUSE;
-
-		hash_pos = kh_put_oid_map(writer.bitmaps, object->oid, &hash_ret);
-		if (hash_ret == 0)
-			die("Duplicate entry when writing index: %s",
-			    oid_to_hex(&object->oid));
-
-		kh_value(writer.bitmaps, hash_pos) = stored;
-		display_progress(writer.progress, writer.selected_nr - i);
+	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
+		the_repository);
+
+	bitmap_builder_init(&bb, &writer);
+	for (i = bb.commits_nr; i > 0; i--) {
+		struct commit *commit = bb.commits[i-1];
+		struct bb_commit *ent = bb_data_at(&bb.data, commit);
+		struct commit *child;
+
+		fill_bitmap_commit(ent, commit);
+
+		if (ent->selected) {
+			store_selected(ent, commit);
+			nr_stored++;
+			display_progress(writer.progress, nr_stored);
+		}
+
+		while ((child = pop_commit(&ent->children))) {
+			struct bb_commit *child_ent =
+				bb_data_at(&bb.data, child);
+
+			if (child_ent->bitmap)
+				bitmap_or(child_ent->bitmap, ent->bitmap);
+			else
+				child_ent->bitmap = bitmap_dup(ent->bitmap);
+		}
+		bitmap_free(ent->bitmap);
+		ent->bitmap = NULL;
 	}
+	bitmap_builder_clear(&bb);
 
-	bitmap_free(base);
 	stop_progress(&writer.progress);
 
 	compute_xor_offsets();
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 11/23] pack-bitmap-write: pass ownership of intermediate bitmaps
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (9 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 10/23] pack-bitmap-write: reimplement bitmap writing Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 12/23] pack-bitmap-write: fill bitmap with commit history Taylor Blau
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

Our algorithm to generate reachability bitmaps walks through the commit
graph from the bottom up, passing bitmap data from each commit to its
descendants. For a linear stretch of history like:

  A -- B -- C

our sequence of steps is:

  - compute the bitmap for A by walking its trees, etc

  - duplicate A's bitmap as a starting point for B; we can now free A's
    bitmap, since we only needed it as an intermediate result

  - OR in any extra objects that B can reach into its bitmap

  - duplicate B's bitmap as a starting point for C; likewise, free B's
    bitmap

  - OR in objects for C, and so on...

Rather than duplicating bitmaps and immediately freeing the original, we
can just pass ownership from commit to commit. Note that this doesn't
always work:

  - the recipient may be a merge which already has an intermediate
    bitmap from its other ancestor. In that case we have to OR our
    result into it. Note that the first ancestor to reach the merge does
    get to pass ownership, though.

  - we may have multiple children; we can only pass ownership to one of
    them

However, it happens often enough and copying bitmaps is expensive enough
that this provides a noticeable speedup. On a clone of linux.git, this
reduces the time to generate bitmaps from 205s to 70s. This is about the
same amount of time it took to generate bitmaps using our old "many
traversals" algorithm (the previous commit measures the identical
scenario as taking 63s). It unfortunately provides only a very modest
reduction in the peak memory usage, though.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index f2f0b6b2c2..d2d46ff5f4 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -333,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
 		struct commit *child;
+		int reused = 0;
 
 		fill_bitmap_commit(ent, commit);
 
@@ -348,10 +349,15 @@ void bitmap_writer_build(struct packing_data *to_pack)
 
 			if (child_ent->bitmap)
 				bitmap_or(child_ent->bitmap, ent->bitmap);
-			else
+			else if (reused)
 				child_ent->bitmap = bitmap_dup(ent->bitmap);
+			else {
+				child_ent->bitmap = ent->bitmap;
+				reused = 1;
+			}
 		}
-		bitmap_free(ent->bitmap);
+		if (!reused)
+			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
 	bitmap_builder_clear(&bb);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 12/23] pack-bitmap-write: fill bitmap with commit history
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (10 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 11/23] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 13/23] bitmap: add bitmap_diff_nonzero() Taylor Blau
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The fill_bitmap_commit() method assumes that every parent of the given
commit is already part of the current bitmap. Instead of making that
assumption, let's walk parents until we reach commits already part of
the bitmap. Set the value for that parent immediately after querying to
save time doing double calls to find_object_pos() and to avoid inserting
the parent into the queue multiple times.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index d2d46ff5f4..361f3305a2 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -12,6 +12,7 @@
 #include "sha1-lookup.h"
 #include "pack-objects.h"
 #include "commit-reach.h"
+#include "prio-queue.h"
 
 struct bitmapped_commit {
 	struct commit *commit;
@@ -279,17 +280,30 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 }
 
 static void fill_bitmap_commit(struct bb_commit *ent,
-			       struct commit *commit)
+			       struct commit *commit,
+			       struct prio_queue *queue)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	/*
-	 * mark ourselves, but do not bother with parents; their values
-	 * will already have been propagated to us
-	 */
 	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
-	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+	prio_queue_put(queue, commit);
+
+	while (queue->nr) {
+		struct commit_list *p;
+		struct commit *c = prio_queue_get(queue);
+
+		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
+		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+
+		for (p = c->parents; p; p = p->next) {
+			int pos = find_object_pos(&p->item->object.oid);
+			if (!bitmap_get(ent->bitmap, pos)) {
+				bitmap_set(ent->bitmap, pos);
+				prio_queue_put(queue, p->item);
+			}
+		}
+	}
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -319,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	struct bitmap_builder bb;
 	size_t i;
 	int nr_stored = 0; /* for progress */
+	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -335,7 +350,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit);
+		fill_bitmap_commit(ent, commit, &queue);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -360,6 +375,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
+	clear_prio_queue(&queue);
 	bitmap_builder_clear(&bb);
 
 	stop_progress(&writer.progress);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 13/23] bitmap: add bitmap_diff_nonzero()
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (11 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 12/23] pack-bitmap-write: fill bitmap with commit history Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 14/23] commit: implement commit_list_contains() Taylor Blau
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_diff_nonzero() function checks whether the 'self' bitmap
contains any bits that are not set in the 'other' bitmap.

Also, delete the declaration of bitmap_is_subset() as it is not used or
implemented.
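
A minimal usage sketch (hypothetical variables):

    if (bitmap_diff_nonzero(self, other)) {
            /* 'self' has at least one bit set that 'other' does not */
    }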

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 24 ++++++++++++++++++++++++
 ewah/ewok.h   |  2 +-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index eb7e2539be..e2ebeac0e5 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -200,6 +200,30 @@ int bitmap_equals(struct bitmap *self, struct bitmap *other)
 	return 1;
 }
 
+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other)
+{
+	struct bitmap *small;
+	size_t i;
+
+	if (self->word_alloc < other->word_alloc) {
+		small = self;
+	} else {
+		small = other;
+
+		for (i = other->word_alloc; i < self->word_alloc; i++) {
+			if (self->words[i] != 0)
+				return 1;
+		}
+	}
+
+	for (i = 0; i < small->word_alloc; i++) {
+		if ((self->words[i] & ~other->words[i]))
+			return 1;
+	}
+
+	return 0;
+}
+
 void bitmap_reset(struct bitmap *bitmap)
 {
 	memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 1fc555e672..156c71d06d 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -180,7 +180,7 @@ int bitmap_get(struct bitmap *self, size_t pos);
 void bitmap_reset(struct bitmap *self);
 void bitmap_free(struct bitmap *self);
 int bitmap_equals(struct bitmap *self, struct bitmap *other);
-int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other);
 
 struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
 struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 14/23] commit: implement commit_list_contains()
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (12 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 13/23] bitmap: add bitmap_diff_nonzero() Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 15/23] t5310: add branch-based checks Taylor Blau
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

It can be helpful to check whether a commit_list contains a commit. The
check uses pointer equality, which is safe as long as the commits were
obtained via lookup_commit().

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 commit.c | 11 +++++++++++
 commit.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/commit.c b/commit.c
index fe1fa3dc41..9a785bf906 100644
--- a/commit.c
+++ b/commit.c
@@ -544,6 +544,17 @@ struct commit_list *commit_list_insert(struct commit *item, struct commit_list *
 	return new_list;
 }
 
+int commit_list_contains(struct commit *item, struct commit_list *list)
+{
+	while (list) {
+		if (list->item == item)
+			return 1;
+		list = list->next;
+	}
+
+	return 0;
+}
+
 unsigned commit_list_count(const struct commit_list *l)
 {
 	unsigned c = 0;
diff --git a/commit.h b/commit.h
index 5467786c7b..742a6de460 100644
--- a/commit.h
+++ b/commit.h
@@ -167,6 +167,8 @@ int find_commit_subject(const char *commit_buffer, const char **subject);
 
 struct commit_list *commit_list_insert(struct commit *item,
 					struct commit_list **list);
+int commit_list_contains(struct commit *item,
+			 struct commit_list *list);
 struct commit_list **commit_list_append(struct commit *commit,
 					struct commit_list **next);
 unsigned commit_list_count(const struct commit_list *l);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 15/23] t5310: add branch-based checks
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (13 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 14/23] commit: implement commit_list_contains() Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 20:58   ` Derrick Stolee
  2020-11-11 19:43 ` [PATCH 16/23] pack-bitmap-write: rename children to reverse_edges Taylor Blau
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The current rev-list tests that check the bitmap data only work on HEAD
instead of multiple branches. Expand the test cases to handle both
'master' and 'other' branches.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 61 +++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 70a4fc4843..6bf68fee85 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -41,63 +41,70 @@ test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
 	git rev-list --test-bitmap HEAD
 '
 
-rev_list_tests() {
-	state=$1
-
-	test_expect_success "counting commits via bitmap ($state)" '
-		git rev-list --count HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD >actual &&
+rev_list_tests_head () {
+	test_expect_success "counting commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch >expect &&
+		git rev-list --use-bitmap-index --count $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting partial commits via bitmap ($state)" '
-		git rev-list --count HEAD~5..HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD~5..HEAD >actual &&
+	test_expect_success "counting partial commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch~5..$branch >expect &&
+		git rev-list --use-bitmap-index --count $branch~5..$branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limit ($state)" '
-		git rev-list --count -n 1 HEAD >expect &&
-		git rev-list --use-bitmap-index --count -n 1 HEAD >actual &&
+	test_expect_success "counting commits with limit ($state, $branch)" '
+		git rev-list --count -n 1 $branch >expect &&
+		git rev-list --use-bitmap-index --count -n 1 $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting non-linear history ($state)" '
+	test_expect_success "counting non-linear history ($state, $branch)" '
 		git rev-list --count other...master >expect &&
 		git rev-list --use-bitmap-index --count other...master >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limiting ($state)" '
-		git rev-list --count HEAD -- 1.t >expect &&
-		git rev-list --use-bitmap-index --count HEAD -- 1.t >actual &&
+	test_expect_success "counting commits with limiting ($state, $branch)" '
+		git rev-list --count $branch -- 1.t >expect &&
+		git rev-list --use-bitmap-index --count $branch -- 1.t >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting objects via bitmap ($state)" '
-		git rev-list --count --objects HEAD >expect &&
-		git rev-list --use-bitmap-index --count --objects HEAD >actual &&
+	test_expect_success "counting objects via bitmap ($state, $branch)" '
+		git rev-list --count --objects $branch >expect &&
+		git rev-list --use-bitmap-index --count --objects $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "enumerate commits ($state)" '
-		git rev-list --use-bitmap-index HEAD >actual &&
-		git rev-list HEAD >expect &&
+	test_expect_success "enumerate commits ($state, $branch)" '
+		git rev-list --use-bitmap-index $branch >actual &&
+		git rev-list $branch >expect &&
 		test_bitmap_traversal --no-confirm-bitmaps expect actual
 	'
 
-	test_expect_success "enumerate --objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD >actual &&
-		git rev-list --objects HEAD >expect &&
+	test_expect_success "enumerate --objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch >actual &&
+		git rev-list --objects $branch >expect &&
 		test_bitmap_traversal expect actual
 	'
 
-	test_expect_success "bitmap --objects handles non-commit objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD tagged-blob >actual &&
+	test_expect_success "bitmap --objects handles non-commit objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch tagged-blob >actual &&
 		grep $blob actual
 	'
 }
 
+rev_list_tests () {
+	state=$1
+
+	for branch in "master" "other"
+	do
+		rev_list_tests_head
+	done
+}
+
 rev_list_tests 'full bitmap'
 
 test_expect_success 'clone from bitmapped repository' '
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 16/23] pack-bitmap-write: rename children to reverse_edges
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (14 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 15/23] t5310: add branch-based checks Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:43 ` [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_builder_init() method walks the reachable commits in
topological order and constructs a "reverse graph" along the way. At the
moment, this reverse graph contains an edge from commit A to commit B if
and only if A is a parent of B. Thus, the name "children" is appropriate
for this reverse graph.

In the next change, we will repurpose the reverse graph so that its
edges no longer connect directly-adjacent commits in the commit-graph,
but instead represent a more abstract relationship. The previous
changes have already incorporated the necessary updates to
fill_bitmap_commit() that allow these edges to not be immediate
children.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 361f3305a2..369c76a87c 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -179,7 +179,7 @@ static void compute_xor_offsets(void)
 }
 
 struct bb_commit {
-	struct commit_list *children;
+	struct commit_list *reverse_edges;
 	struct bitmap *bitmap;
 	unsigned selected:1;
 	unsigned idx; /* within selected array */
@@ -228,7 +228,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		for (p = commit->parents; p; p = p->next) {
 			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->children);
+			commit_list_insert(commit, &ent->reverse_edges);
 		}
 	}
 }
@@ -358,7 +358,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			display_progress(writer.progress, nr_stored);
 		}
 
-		while ((child = pop_commit(&ent->children))) {
+		while ((child = pop_commit(&ent->reverse_edges))) {
 			struct bb_commit *child_ent =
 				bb_data_at(&bb.data, child);
 
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (15 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 16/23] pack-bitmap-write: rename children to reverse_edges Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-13 22:23   ` SZEDER Gábor
  2020-11-11 19:43 ` [PATCH 18/23] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
instead focus on selecting each commit that is maximal among commits
with the same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is not one of the
originally-selected commits, then assign a bit position to C (by
incrementing num_maximal) and set that bit on in commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C.

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.

The final thing we need to do is to update the reverse edges for P.
These reverse edges represent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and this list is exactly one element). Add all of these items
to P's reverse_edges list. Be careful to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.
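
A minimal sketch of that push step, using a word-sized mask per commit
and an index-based reverse-edge list instead of the patch's data
structures (names are illustrative):

  #include <stdint.h>

  struct toy_commit {
      uint64_t bits;        /* reachability bitmap computed so far */
      int reverse_edges[4]; /* indices of the commits D that reach this one */
      int nr_edges;
  };

  /* After computing C's bits, seed every D on C's reverse-edge list. */
  void push_to_reverse_edges(struct toy_commit *all, int c)
  {
      int i;
      for (i = 0; i < all[c].nr_edges; i++)
          all[all[c].reverse_edges[i]].bits |= all[c].bits;
  }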

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is due to
the existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     \/
      Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N is assigned
its own bit in the bitmask and hence each becomes uniquely maximal for
its bitmask. Finally, Q also becomes maximal and thus we do not need
to walk its commits multiple times. The final bitmasks for these commits
are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:1111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any skew in the numbers from reusing existing bitmaps.

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we've not only recovered any time lost to switching to the
reverse-edge algorithm, but we come out ahead of "multi" in all
cases. Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 85 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 9 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 369c76a87c..7b4fc0f304 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)
 
 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };
 
@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
 
-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}
 
 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 6bf68fee85..33ef9a098d 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,11 +20,87 @@ has_any () {
 	grep -Ff "$1" "$2"
 }
 
+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \____________...____________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge master~2 -m "merge-left" &&
+
+	git checkout -b merge-right master~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-master master &&
+	git merge merge-left merge-right -m "octopus-master" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout master &&
+	git merge octo-master -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-master &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout master &&
+
 	bitmaptip=$(git rev-parse master) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -32,9 +108,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '
 
 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 18/23] pack-bitmap-write: ignore BITMAP_FLAG_REUSE
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (16 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-11-11 19:43 ` Taylor Blau
  2020-11-11 19:44 ` [PATCH 19/23] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:43 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Jeff King <peff@peff.net>

The on-disk bitmap format has a flag to mark a bitmap to be "reused".
This is a rather curious feature, and works like this:

  - a run of pack-objects would decide to mark the last 80% of the
    bitmaps it generates with the reuse flag

  - the next time we generate bitmaps, we'd see those reuse flags from
    the last run, and mark those commits as special:

      - we'd be more likely to select those commits to get bitmaps in
        the new output

      - when generating the bitmap for a selected commit, we'd reuse the
        old bitmap as-is (rearranging the bits to match the new pack, of
        course)

However, neither of these behaviors particularly makes sense.

Just because a commit happened to be bitmapped last time does not make
it a good candidate for having a bitmap this time. In particular, we may
choose bitmaps based on how recent they are in history, or whether a ref
tip points to them, and those things will change. We're better off
re-considering fresh which commits are good candidates.

Reusing the existing bitmap _is_ a reasonable thing to do to save
computation. But only reusing exact bitmaps is a weak form of this. If
we have an old bitmap for A and now want a new bitmap for its child, we
should be able to compute that only by looking at trees and that are new
to the child. But this code would consider only exact reuse (which is
perhaps why it was eager to select those commits in the first place).

Furthermore, the recent switch to the reverse-edge algorithm for
generating bitmaps dropped this optimization entirely (and yet still
performs better).

So let's do a few cleanups:

 - drop the whole "reusing bitmaps" phase of generating bitmaps. It's
   not helping anything, and is mostly unused code (or worse, code that
   is using CPU but not doing anything useful)

 - drop the use of the on-disk reuse flag to select commits to bitmap

 - stop setting the on-disk reuse flag in bitmaps we generate (since
   nothing respects it anymore)

We will keep a few innards of the reuse code, which will help us
implement a more capable version of the "reuse" optimization:

 - simplify rebuild_existing_bitmaps() into a function that only builds
   the mapping of bits between the old and new orders, but doesn't
   actually convert any bitmaps

 - make rebuild_bitmap() public; we'll call it lazily to convert bitmaps
   as we traverse (using the mapping created above)

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 builtin/pack-objects.c |  1 -
 pack-bitmap-write.c    | 50 +++++-------------------------------------
 pack-bitmap.c          | 46 +++++---------------------------------
 pack-bitmap.h          |  6 ++++-
 4 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 5617c01b5a..2a00358f34 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1104,7 +1104,6 @@ static void write_pack_file(void)
 				stop_progress(&progress_state);
 
 				bitmap_writer_show_progress(progress);
-				bitmap_writer_reuse_bitmaps(&to_pack);
 				bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
 				bitmap_writer_build(&to_pack);
 				bitmap_writer_finish(written_list, nr_written,
diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 7b4fc0f304..1995f75818 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -30,7 +30,6 @@ struct bitmap_writer {
 	struct ewah_bitmap *tags;
 
 	kh_oid_map_t *bitmaps;
-	kh_oid_map_t *reused;
 	struct packing_data *to_pack;
 
 	struct bitmapped_commit *selected;
@@ -112,7 +111,7 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
  * Compute the actual bitmaps
  */
 
-static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
+static inline void push_bitmapped_commit(struct commit *commit)
 {
 	if (writer.selected_nr >= writer.selected_alloc) {
 		writer.selected_alloc = (writer.selected_alloc + 32) * 2;
@@ -120,7 +119,7 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	}
 
 	writer.selected[writer.selected_nr].commit = commit;
-	writer.selected[writer.selected_nr].bitmap = reused;
+	writer.selected[writer.selected_nr].bitmap = NULL;
 	writer.selected[writer.selected_nr].flags = 0;
 
 	writer.selected_nr++;
@@ -372,13 +371,6 @@ static void store_selected(struct bb_commit *ent, struct commit *commit)
 	khiter_t hash_pos;
 	int hash_ret;
 
-	/*
-	 * the "reuse bitmaps" phase may have stored something here, but
-	 * our new algorithm doesn't use it. Drop it.
-	 */
-	if (stored->bitmap)
-		ewah_free(stored->bitmap);
-
 	stored->bitmap = bitmap_to_ewah(ent->bitmap);
 
 	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
@@ -477,35 +469,6 @@ static int date_compare(const void *_a, const void *_b)
 	return (long)b->date - (long)a->date;
 }
 
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack)
-{
-	struct bitmap_index *bitmap_git;
-	if (!(bitmap_git = prepare_bitmap_git(to_pack->repo)))
-		return;
-
-	writer.reused = kh_init_oid_map();
-	rebuild_existing_bitmaps(bitmap_git, to_pack, writer.reused,
-				 writer.show_progress);
-	/*
-	 * NEEDSWORK: rebuild_existing_bitmaps() makes writer.reused reference
-	 * some bitmaps in bitmap_git, so we can't free the latter.
-	 */
-}
-
-static struct ewah_bitmap *find_reused_bitmap(const struct object_id *oid)
-{
-	khiter_t hash_pos;
-
-	if (!writer.reused)
-		return NULL;
-
-	hash_pos = kh_get_oid_map(writer.reused, *oid);
-	if (hash_pos >= kh_end(writer.reused))
-		return NULL;
-
-	return kh_value(writer.reused, hash_pos);
-}
-
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 				  unsigned int indexed_commits_nr,
 				  int max_bitmaps)
@@ -519,12 +482,11 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 	if (indexed_commits_nr < 100) {
 		for (i = 0; i < indexed_commits_nr; ++i)
-			push_bitmapped_commit(indexed_commits[i], NULL);
+			push_bitmapped_commit(indexed_commits[i]);
 		return;
 	}
 
 	for (;;) {
-		struct ewah_bitmap *reused_bitmap = NULL;
 		struct commit *chosen = NULL;
 
 		next = next_commit_index(i);
@@ -539,15 +501,13 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 		if (next == 0) {
 			chosen = indexed_commits[i];
-			reused_bitmap = find_reused_bitmap(&chosen->object.oid);
 		} else {
 			chosen = indexed_commits[i + next];
 
 			for (j = 0; j <= next; ++j) {
 				struct commit *cm = indexed_commits[i + j];
 
-				reused_bitmap = find_reused_bitmap(&cm->object.oid);
-				if (reused_bitmap || (cm->object.flags & NEEDS_BITMAP) != 0) {
+				if ((cm->object.flags & NEEDS_BITMAP) != 0) {
 					chosen = cm;
 					break;
 				}
@@ -557,7 +517,7 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 			}
 		}
 
-		push_bitmapped_commit(chosen, reused_bitmap);
+		push_bitmapped_commit(chosen);
 
 		i += next + 1;
 		display_progress(writer.progress, i);
diff --git a/pack-bitmap.c b/pack-bitmap.c
index 82c6bf2843..682f4d19dd 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1333,9 +1333,9 @@ void test_bitmap_walk(struct rev_info *revs)
 	free_bitmap_index(bitmap_git);
 }
 
-static int rebuild_bitmap(uint32_t *reposition,
-			  struct ewah_bitmap *source,
-			  struct bitmap *dest)
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest)
 {
 	uint32_t pos = 0;
 	struct ewah_iterator it;
@@ -1364,19 +1364,11 @@ static int rebuild_bitmap(uint32_t *reposition,
 	return 0;
 }
 
-int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
-			     struct packing_data *mapping,
-			     kh_oid_map_t *reused_bitmaps,
-			     int show_progress)
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping)
 {
 	uint32_t i, num_objects;
 	uint32_t *reposition;
-	struct bitmap *rebuild;
-	struct stored_bitmap *stored;
-	struct progress *progress = NULL;
-
-	khiter_t hash_pos;
-	int hash_ret;
 
 	num_objects = bitmap_git->pack->num_objects;
 	reposition = xcalloc(num_objects, sizeof(uint32_t));
@@ -1394,33 +1386,7 @@ int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
 			reposition[i] = oe_in_pack_pos(mapping, oe) + 1;
 	}
 
-	rebuild = bitmap_new();
-	i = 0;
-
-	if (show_progress)
-		progress = start_progress("Reusing bitmaps", 0);
-
-	kh_foreach_value(bitmap_git->bitmaps, stored, {
-		if (stored->flags & BITMAP_FLAG_REUSE) {
-			if (!rebuild_bitmap(reposition,
-					    lookup_stored_bitmap(stored),
-					    rebuild)) {
-				hash_pos = kh_put_oid_map(reused_bitmaps,
-							  stored->oid,
-							  &hash_ret);
-				kh_value(reused_bitmaps, hash_pos) =
-					bitmap_to_ewah(rebuild);
-			}
-			bitmap_reset(rebuild);
-			display_progress(progress, ++i);
-		}
-	});
-
-	stop_progress(&progress);
-
-	free(reposition);
-	bitmap_free(rebuild);
-	return 0;
+	return reposition;
 }
 
 void free_bitmap_index(struct bitmap_index *b)
diff --git a/pack-bitmap.h b/pack-bitmap.h
index 1203120c43..afa4115136 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -73,7 +73,11 @@ void bitmap_writer_set_checksum(unsigned char *sha1);
 void bitmap_writer_build_type_index(struct packing_data *to_pack,
 				    struct pack_idx_entry **index,
 				    uint32_t index_nr);
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack);
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping);
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 19/23] pack-bitmap: factor out 'bitmap_for_commit()'
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (17 preceding siblings ...)
  2020-11-11 19:43 ` [PATCH 18/23] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
@ 2020-11-11 19:44 ` Taylor Blau
  2020-11-11 19:44 ` [PATCH 20/23] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:44 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

A couple of callers within pack-bitmap.c duplicate logic to look up a
given object id in the bitmaps khash. Factor this out into a new
function, 'bitmap_for_commit()' to reduce some code duplication.

Make this new function non-static, since it will be used in later
commits from outside of pack-bitmap.c.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 33 +++++++++++++++++++--------------
 pack-bitmap.h |  2 ++
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 682f4d19dd..99a0683f49 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -375,6 +375,16 @@ struct include_data {
 	struct bitmap *seen;
 };
 
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit)
+{
+	khiter_t hash_pos = kh_get_oid_map(bitmap_git->bitmaps,
+					   commit->object.oid);
+	if (hash_pos >= kh_end(bitmap_git->bitmaps))
+		return NULL;
+	return lookup_stored_bitmap(kh_value(bitmap_git->bitmaps, hash_pos));
+}
+
 static inline int bitmap_position_extended(struct bitmap_index *bitmap_git,
 					   const struct object_id *oid)
 {
@@ -460,10 +470,10 @@ static void show_commit(struct commit *commit, void *data)
 
 static int add_to_include_set(struct bitmap_index *bitmap_git,
 			      struct include_data *data,
-			      const struct object_id *oid,
+			      struct commit *commit,
 			      int bitmap_pos)
 {
-	khiter_t hash_pos;
+	struct ewah_bitmap *partial;
 
 	if (data->seen && bitmap_get(data->seen, bitmap_pos))
 		return 0;
@@ -471,10 +481,9 @@ static int add_to_include_set(struct bitmap_index *bitmap_git,
 	if (bitmap_get(data->base, bitmap_pos))
 		return 0;
 
-	hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
-	if (hash_pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
-		bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
+	partial = bitmap_for_commit(bitmap_git, commit);
+	if (partial) {
+		bitmap_or_ewah(data->base, partial);
 		return 0;
 	}
 
@@ -493,8 +502,7 @@ static int should_include(struct commit *commit, void *_data)
 						  (struct object *)commit,
 						  NULL);
 
-	if (!add_to_include_set(data->bitmap_git, data, &commit->object.oid,
-				bitmap_pos)) {
+	if (!add_to_include_set(data->bitmap_git, data, commit, bitmap_pos)) {
 		struct commit_list *parent = commit->parents;
 
 		while (parent) {
@@ -1277,10 +1285,10 @@ void test_bitmap_walk(struct rev_info *revs)
 {
 	struct object *root;
 	struct bitmap *result = NULL;
-	khiter_t pos;
 	size_t result_popcnt;
 	struct bitmap_test_data tdata;
 	struct bitmap_index *bitmap_git;
+	struct ewah_bitmap *bm;
 
 	if (!(bitmap_git = prepare_bitmap_git(revs->repo)))
 		die("failed to load bitmap indexes");
@@ -1292,12 +1300,9 @@ void test_bitmap_walk(struct rev_info *revs)
 		bitmap_git->version, bitmap_git->entry_count);
 
 	root = revs->pending.objects[0].item;
-	pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
-
-	if (pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-		struct ewah_bitmap *bm = lookup_stored_bitmap(st);
+	bm = bitmap_for_commit(bitmap_git, (struct commit *)root);
 
+	if (bm) {
 		fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
 			oid_to_hex(&root->oid), (int)bm->bit_size, ewah_checksum(bm));
 
diff --git a/pack-bitmap.h b/pack-bitmap.h
index afa4115136..25dfcf5615 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -78,6 +78,8 @@ uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
 int rebuild_bitmap(const uint32_t *reposition,
 		   struct ewah_bitmap *source,
 		   struct bitmap *dest);
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 20/23] pack-bitmap: factor out 'add_commit_to_bitmap()'
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (18 preceding siblings ...)
  2020-11-11 19:44 ` [PATCH 19/23] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
@ 2020-11-11 19:44 ` Taylor Blau
  2020-11-11 19:44 ` [PATCH 21/23] pack-bitmap-write: use existing bitmaps Taylor Blau
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:44 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

'find_objects()' currently needs to interact with the bitmaps khash
pretty closely. To make 'find_objects()' read a little more
straightforwardly, move some of the khash-level details into a new
function that describes what it does: 'add_commit_to_bitmap()'.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 99a0683f49..dc811ebae8 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -516,6 +516,23 @@ static int should_include(struct commit *commit, void *_data)
 	return 1;
 }
 
+static int add_commit_to_bitmap(struct bitmap_index *bitmap_git,
+				struct bitmap **base,
+				struct commit *commit)
+{
+	struct ewah_bitmap *or_with = bitmap_for_commit(bitmap_git, commit);
+
+	if (!or_with)
+		return 0;
+
+	if (*base == NULL)
+		*base = ewah_to_bitmap(or_with);
+	else
+		bitmap_or_ewah(*base, or_with);
+
+	return 1;
+}
+
 static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 				   struct rev_info *revs,
 				   struct object_list *roots,
@@ -539,21 +556,10 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 		struct object *object = roots->item;
 		roots = roots->next;
 
-		if (object->type == OBJ_COMMIT) {
-			khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
-
-			if (pos < kh_end(bitmap_git->bitmaps)) {
-				struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-				struct ewah_bitmap *or_with = lookup_stored_bitmap(st);
-
-				if (base == NULL)
-					base = ewah_to_bitmap(or_with);
-				else
-					bitmap_or_ewah(base, or_with);
-
-				object->flags |= SEEN;
-				continue;
-			}
+		if (object->type == OBJ_COMMIT &&
+		    add_commit_to_bitmap(bitmap_git, &base, (struct commit *)object)) {
+			object->flags |= SEEN;
+			continue;
 		}
 
 		object_list_insert(object, &not_mapped);
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 21/23] pack-bitmap-write: use existing bitmaps
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (19 preceding siblings ...)
  2020-11-11 19:44 ` [PATCH 20/23] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
@ 2020-11-11 19:44 ` Taylor Blau
  2020-11-11 19:44 ` [PATCH 22/23] pack-bitmap-write: relax unique rewalk condition Taylor Blau
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:44 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

When constructing new bitmaps, we perform a commit and tree walk in
fill_bitmap_commit() and fill_bitmap_tree(). This walk would benefit
from using existing bitmaps when available. We must track the existing
bitmaps and translate them into the new object order, but this is
generally faster than parsing trees.

In fill_bitmap_commit(), we must reorder things somewhat. The priority
queue walks commits from newest-to-oldest, which means we correctly stop
walking when reaching a commit with a bitmap. However, if we walk trees
from top to bottom, then we might be parsing trees that are actually
part of a re-used bitmap. To avoid over-walking trees, add them to a
LIFO queue and walk them from bottom-to-top after exploring commits
completely.

On git.git, this reduces a second immediate bitmap computation from 2.0s
to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
network, we go from 227s to 198s.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 1995f75818..37204b691c 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -340,20 +340,39 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 
 static void fill_bitmap_commit(struct bb_commit *ent,
 			       struct commit *commit,
-			       struct prio_queue *queue)
+			       struct prio_queue *queue,
+			       struct prio_queue *tree_queue,
+			       struct bitmap_index *old_bitmap,
+			       const uint32_t *mapping)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
 	prio_queue_put(queue, commit);
 
 	while (queue->nr) {
 		struct commit_list *p;
 		struct commit *c = prio_queue_get(queue);
 
+		/*
+		 * If this commit has an old bitmap, then translate that
+		 * bitmap and add its bits to this one. No need to walk
+		 * parents or the tree for this commit.
+		 */
+		if (old_bitmap && mapping) {
+			struct ewah_bitmap *old;
+
+			old = bitmap_for_commit(old_bitmap, c);
+			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
+				continue;
+		}
+
+		/*
+		 * Mark ourselves and queue our tree. The commit
+		 * walk ensures we cover all parents.
+		 */
 		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
-		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+		prio_queue_put(tree_queue, get_commit_tree(c));
 
 		for (p = c->parents; p; p = p->next) {
 			int pos = find_object_pos(&p->item->object.oid);
@@ -363,6 +382,9 @@ static void fill_bitmap_commit(struct bb_commit *ent,
 			}
 		}
 	}
+
+	while (tree_queue->nr)
+		fill_bitmap_tree(ent->bitmap, prio_queue_get(tree_queue));
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -386,6 +408,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	size_t i;
 	int nr_stored = 0; /* for progress */
 	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
+	struct prio_queue tree_queue = { NULL };
+	struct bitmap_index *old_bitmap;
+	uint32_t *mapping;
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -395,6 +420,12 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
 		the_repository);
 
+	old_bitmap = prepare_bitmap_git(to_pack->repo);
+	if (old_bitmap)
+		mapping = create_bitmap_mapping(old_bitmap, to_pack);
+	else
+		mapping = NULL;
+
 	bitmap_builder_init(&bb, &writer);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
@@ -402,7 +433,8 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit, &queue);
+		fill_bitmap_commit(ent, commit, &queue, &tree_queue,
+				   old_bitmap, mapping);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -428,7 +460,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		ent->bitmap = NULL;
 	}
 	clear_prio_queue(&queue);
+	clear_prio_queue(&tree_queue);
 	bitmap_builder_clear(&bb);
+	free(mapping);
 
 	stop_progress(&writer.progress);
 
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 22/23] pack-bitmap-write: relax unique rewalk condition
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (20 preceding siblings ...)
  2020-11-11 19:44 ` [PATCH 21/23] pack-bitmap-write: use existing bitmaps Taylor Blau
@ 2020-11-11 19:44 ` Taylor Blau
  2020-11-11 19:44 ` [PATCH 23/23] pack-bitmap-write: better reuse bitmaps Taylor Blau
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:44 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

The previous commits improved the bitmap computation process for very
long, linear histories with many refs by removing quadratic growth in
how many objects were walked. The strategy of computing "intermediate
commits" using bitmasks for which refs can reach those commits
partitioned the poset of reachable objects so each part could be walked
exactly once. This was effective for linear histories.

However, there was a (significant) drawback: wide histories with many
refs had an explosion of memory costs to compute the commit bitmasks
during the exploration that discovers these intermediate commits. Since
these wide histories are unlikely to repeat walking objects, the cost
of walking objects multiple times was not significant before. But now, the
commit walk *before computing bitmaps* is incredibly expensive.

In an effort to discover a happy medium, this change reduces the walk
for intermediate commits to only the first-parent history. This focuses
the walk on how the histories converge, which still has significant
reduction in repeat object walks. It is still possible to create
quadratic behavior in this version, but it is probably less likely in
realistic data shapes.

Here is some data taken on a fresh clone of the kernel:

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
    original |  64.044 |   83.241 |   2.088 |    2.194 |
  last patch |  44.811 |   27.828 |   2.289 |    2.358 |
  this patch | 100.641 |   35.560 |   2.152 |    2.224 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 14 +++++---------
 t/t5310-pack-bitmaps.sh | 27 ++++++++++++++-------------
 2 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 37204b691c..b0493d971d 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -199,7 +199,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i, num_maximal;
+	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -207,6 +207,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	reset_revision_walk();
 	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
 	revs.topo_order = 1;
+	revs.first_parent_only = 1;
 
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
@@ -221,13 +222,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		add_pending_object(&revs, &c->object, "");
 	}
-	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
-		struct commit_list *p;
+		struct commit_list *p = commit->parents;
 		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
@@ -235,16 +235,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 		c_ent = bb_data_at(&bb->data, commit);
 
 		if (c_ent->maximal) {
-			if (!c_ent->selected) {
-				bitmap_set(c_ent->commit_mask, num_maximal);
-				num_maximal++;
-			}
-
+			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
-		for (p = commit->parents; p; p = p->next) {
+		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 33ef9a098d..68badd63cb 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -43,23 +43,24 @@ has_any () {
 #                                   \|
 #                                    * (base)
 #
+# We only push bits down the first-parent history, which
+# makes some of these commits unimportant!
+#
 # The important part for the maximal commit algorithm is how
 # the bitmasks are extended. Assuming starting bit positions
-# for master (bit 0) and other (bit 1), and some flexibility
-# in the order that merge bases are visited, the bitmasks at
-# the end should be:
+# for master (bit 0) and other (bit 1), the bitmasks at the
+# end should be:
 #
 #      master: 1       (maximal, selected)
 #       other: 01      (maximal, selected)
-# octo-master: 1
-#  octo-other: 01
-# merge-right: 111     (maximal)
-#        (l1): 111
-#        (r1): 111
-#  merge-left: 1101    (maximal)
-#        (l2): 11111   (maximal)
-#        (r2): 111101  (maximal)
-#      (base): 1111111 (maximal)
+#      (base): 11 (maximal)
+#
+# This complicated history was important for a previous
+# version of the walk that guarantees never walking a
+# commit multiple times. That goal might be important
+# again, so preserve this complicated case. For now, this
+# test will guarantee that the bitmaps are computed
+# correctly, even with the repeat calculations.
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 10 &&
@@ -113,7 +114,7 @@ test_expect_success 'full repack creates bitmaps' '
 	ls .git/objects/pack/ | grep bitmap >output &&
 	test_line_count = 1 output &&
 	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
-	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"107\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.156.gc03786897f


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH 23/23] pack-bitmap-write: better reuse bitmaps
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (21 preceding siblings ...)
  2020-11-11 19:44 ` [PATCH 22/23] pack-bitmap-write: relax unique rewalk condition Taylor Blau
@ 2020-11-11 19:44 ` Taylor Blau
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-11 19:44 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff

From: Derrick Stolee <dstolee@microsoft.com>

If the old bitmap file contains a bitmap for a given commit, then that
commit does not need help from intermediate commits in its history to
compute its final bitmap. Eject that commit from the walk and insert it
as a maximal commit in the list of commits for computing bitmaps.

This helps the repeat bitmap computation task, even if the selected
commits shift drastically. In particular, it helps when a
previously-bitmapped commit
exists in the first-parent history of a newly-selected commit. Since we
stop the walk at these commits and we use a first-parent walk, it is
harder to walk "around" these bitmapped commits. It's not impossible,
but we can greatly reduce the computation time for many selected
commits.

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
  last patch | 100.641 |   35.560 |   2.152 |    2.224 |
  this patch |  99.720 |   11.696 |   2.152 |    2.217 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index b0493d971d..3ac90ae410 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -195,7 +195,8 @@ struct bitmap_builder {
 };
 
 static void bitmap_builder_init(struct bitmap_builder *bb,
-				struct bitmap_writer *writer)
+				struct bitmap_writer *writer,
+				struct bitmap_index *old_bitmap)
 {
 	struct rev_info revs;
 	struct commit *commit;
@@ -234,12 +235,26 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		c_ent = bb_data_at(&bb->data, commit);
 
+		if (old_bitmap && bitmap_for_commit(old_bitmap, commit)) {
+			/*
+			 * This commit has an existing bitmap, so we can
+			 * get its bits immediately without an object
+			 * walk. There is no need to continue walking
+			 * beyond this commit.
+			 */
+			c_ent->maximal = 1;
+			p = NULL;
+		}
+
 		if (c_ent->maximal) {
 			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
+		if (!c_ent->commit_mask)
+			continue;
+
 		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
@@ -422,7 +437,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	else
 		mapping = NULL;
 
-	bitmap_builder_init(&bb, &writer);
+	bitmap_builder_init(&bb, &writer, old_bitmap);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
-- 
2.29.2.156.gc03786897f

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH 15/23] t5310: add branch-based checks
  2020-11-11 19:43 ` [PATCH 15/23] t5310: add branch-based checks Taylor Blau
@ 2020-11-11 20:58   ` Derrick Stolee
  2020-11-11 21:04     ` Junio C Hamano
  0 siblings, 1 reply; 174+ messages in thread
From: Derrick Stolee @ 2020-11-11 20:58 UTC (permalink / raw)
  To: Taylor Blau, git; +Cc: dstolee, gitster, peff, Johannes Schindelin

On 11/11/2020 2:43 PM, Taylor Blau wrote:
> From: Derrick Stolee <dstolee@microsoft.com>
> 
> The current rev-list tests that check the bitmap data only work on HEAD
> instead of multiple branches. Expand the test cases to handle both
> 'master' and 'other' branches.

Adding Johannes to CC since this likely will start colliding with his
default branch rename efforts.

> +rev_list_tests () {
> +	state=$1
> +
> +	for branch in "master" "other"
> +	do
> +		rev_list_tests_head
> +	done
> +}

Specifically, this is a _new_ instance of "master", but all the
other instances of "master" are likely being converted to "main"
in parallel. It would certainly be easier to convert this test
_after_ these changes are applied, but that's unlikely to happen
with the current schedule of things.

Thanks,
-Stolee



^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 15/23] t5310: add branch-based checks
  2020-11-11 20:58   ` Derrick Stolee
@ 2020-11-11 21:04     ` Junio C Hamano
  2020-11-15 23:26       ` Johannes Schindelin
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-11 21:04 UTC (permalink / raw)
  To: Derrick Stolee; +Cc: Taylor Blau, git, dstolee, peff, Johannes Schindelin

Derrick Stolee <stolee@gmail.com> writes:

> On 11/11/2020 2:43 PM, Taylor Blau wrote:
>> From: Derrick Stolee <dstolee@microsoft.com>
>> 
>> The current rev-list tests that check the bitmap data only work on HEAD
>> instead of multiple branches. Expand the test cases to handle both
>> 'master' and 'other' branches.
>
> Adding Johannes to CC since this likely will start colliding with his
> default branch rename efforts.
>
>> +rev_list_tests () {
>> +	state=$1
>> +
>> +	for branch in "master" "other"
>> +	do
>> +		rev_list_tests_head
>> +	done
>> +}
>
> Specifically, this is a _new_ instance of "master", but all the
> other instances of "master" are likely being converted to "main"
> in parallel. It would certainly be easier to convert this test
> _after_ these changes are applied, but that's unlikely to happen
> with the current schedule of things.

In some tests, it may make sense to configure init.defaultbranchname
in $HOME/.gitconfig upfront and either (1) leave instances of
'master' as they are (we may want to avoid 'slave', but 'master' is
not all that wrong), or (2) rewrite instances of 'master' to 'main'
(or 'primary' or whatever init.defaultbranchname gets configured).




^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 02/23] pack-bitmap: fix header size check
  2020-11-11 19:41 ` [PATCH 02/23] pack-bitmap: fix header size check Taylor Blau
@ 2020-11-12 17:39   ` Martin Ågren
  0 siblings, 0 replies; 174+ messages in thread
From: Martin Ågren @ 2020-11-12 17:39 UTC (permalink / raw)
  To: Taylor Blau, Jeff King; +Cc: Git Mailing List, Derrick Stolee, Junio C Hamano

Hi Taylor/Peff,

On Wed, 11 Nov 2020 at 20:43, Taylor Blau <me@ttaylorr.com> wrote:
>
> This meant we were overly strict about the header size (requiring room
> for a 32-byte worst-case hash, when sha1 is only 20 bytes). But in
> practice it didn't matter because bitmap files tend to have at least 12
> bytes of actual data anyway, so it was unlikely for a valid file to be
> caught by this.

Good catch.

> +       size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
>
> -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> +       if (index->map_size < header_size)
>                 return error("Corrupted bitmap index (missing header data)");

I wondered if the "12" in the commit message shouldn't be "32". We used
to count the hash bytes twice: first 32 that are included in the
`sizeof()` and then another 20 or 32 on top of that. So we'd always
count 32 too many.

Except, what the addition of `the_hash_algo->rawsz` tries to account for
is the hash aaaaall the way at the end of the file -- not the one at the
end of the header. That's my reading of the state before 0f4d6cada8
("pack-bitmap: make bitmap header handling hash agnostic", 2019-02-19),
anyway. So with that in mind, "12" makes sense.

I think we should actually check that we have room for the footer
hash. I'll comment more on the next patch.

> -       index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> +       index->map_pos += header_size;

Makes sense.

Martin

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-11 19:42 ` [PATCH 03/23] pack-bitmap: bounds-check size of cache extension Taylor Blau
@ 2020-11-12 17:47   ` Martin Ågren
  2020-11-13  4:57     ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Martin Ågren @ 2020-11-12 17:47 UTC (permalink / raw)
  To: Taylor Blau, Jeff King; +Cc: Git Mailing List, Derrick Stolee, Junio C Hamano

On Wed, 11 Nov 2020 at 20:43, Taylor Blau <me@ttaylorr.com> wrote:
>
> A .bitmap file may have a "name hash cache" extension, which puts a
> sequence of uint32_t bytes (one per object) at the end of the file. When

s/bytes/values/, perhaps?

> we see a flag indicating this extension, we blindly subtract the
> appropriate number of bytes from our available length. However, if the
> .bitmap file is too short, we'll underflow our length variable and wrap
> around, thinking we have a very large length. This can lead to reading
> out-of-bounds bytes while loading individual ewah bitmaps.

> +               uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));

Hmm. If `sizeof(size_t)` is 8, then this multiplication can't possibly
overflow. A huge value of `num_objects` (say, 0xffffffff) would give a
huge return value (0xffffffff<<2), which would then be truncated to
0xfffffffc when stored in the uint32_t `cache_size`. I think?

Do we want a `u32_mult()`?
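
For concreteness, a minimal sketch of that truncation (the values are
hypothetical and assume a 64-bit size_t):

  uint32_t num_objects = 0xffffffff;                       /* claimed object count */
  size_t product = st_mult(num_objects, sizeof(uint32_t)); /* 0x3fffffffc; no overflow */
  uint32_t cache_size = product;                           /* silently truncated to 0xfffffffc */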

> +               unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;

The addition should be ok or mmap has failed on us. Do we know that we
have room for the final hash there so that the subtraction is ok? Yes,
from the previous commit, we know we have room for the header, which is
even larger. But that's cheating a bit -- see below.

> +                       if (index->map + header_size + cache_size > index_end)
> +                               return error("corrupted bitmap index file (too short to fit hash cache)");
> +                       index->hashes = (void *)(index_end - cache_size);
> +                       index_end -= cache_size;

If the header size we're adding is indeed too large, the addition in the
check would be undefined behavior, if I'm not mistaken. In practical
terms, with 32-bit pointers and a huge size, we'd wrap around, decide
that everything is ok and go on to do the same erroneous subtraction as
before.

Maybe shuffle a few things over from the left to the right to only make
subtractions that we know are ok:

  if (cache_size > index_end - index->map - header_size)

One could substitute for `index_end - index->map` and end up with

  if (cache_size > index->map_size - header_size - the_hash_algo->rawsz)

Maybe that's clearer in a way, or maybe then it's not so obvious that
the subtraction that follows matches this check.

But I don't think we can fully trust those subtractions. We're
subtracting the size of two hashes (one in the header, one in the
footer), but after the previous patch, we only know that there's room
for one. So probably the previous patch could go

  +       /*
  +        * Verify that we have room for the header and the
  +        * trailing checksum hash, so we can safely subtract
  +        * their sizes from map_size. We can afford to be
  +        * a bit imprecise with the error message.
  +        */
  -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
  +       if (index->map_size < header_size + the_hash_algo->rawsz)

I *think* I've got most of my comments here somewhat right, but I could
easily have missed something.

Martin

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-12 17:47   ` Martin Ågren
@ 2020-11-13  4:57     ` Jeff King
  2020-11-13  5:26       ` Martin Ågren
  2020-11-13 21:29       ` Taylor Blau
  0 siblings, 2 replies; 174+ messages in thread
From: Jeff King @ 2020-11-13  4:57 UTC (permalink / raw)
  To: Martin Ågren
  Cc: Taylor Blau, Git Mailing List, Derrick Stolee, Junio C Hamano

On Thu, Nov 12, 2020 at 06:47:09PM +0100, Martin Ågren wrote:

> > A .bitmap file may have a "name hash cache" extension, which puts a
> > sequence of uint32_t bytes (one per object) at the end of the file. When
> 
> s/bytes/values/, perhaps?

Yeah, definitely.

> > we see a flag indicating this extension, we blindly subtract the
> > appropriate number of bytes from our available length. However, if the
> > .bitmap file is too short, we'll underflow our length variable and wrap
> > around, thinking we have a very large length. This can lead to reading
> > out-of-bounds bytes while loading individual ewah bitmaps.
> 
> > +               uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
> 
> Hmm. If `sizeof(size_t)` is 8, then this multiplication can't possibly
> overflow. A huge value of `num_objects` (say, 0xffffffff) would give a
> huge return value (0xffffffff<<2) which would be truncated (0xfffffffc).
> I think?

Yeah, `cache_size` should absolutely be a `size_t`. If you have more
than a billion objects, obviously your cache is going to be bigger than
that. But most importantly, somebody can _claim_ to have a huge number
of objects and foil the size checks by wrapping around.

> Do we want a `u32_mult()`?

Nah, we should be doing this as a size_t in the first place. There are
similar problems with the .idx format, btw. I have a series to deal with
that which I've been meaning to post.

> > +               unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;
> 
> The addition should be ok or mmap has failed on us. Do we know that we
> have room for the final hash there so that the subtraction is ok? Yes,
> from the previous commit, we know we have room for the header, which is
> even larger. But that's cheating a bit -- see below.

Yeah, I agree this ought to be checking the minimum size against the
header _plus_ the trailer.

I think the previous patch is actually where it goes wrong. The original
was checking for a minimum of:

  if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)

which is the header plus the trailer. We want to readjust for the
MAX_RAWSZ part of the header, so it should be:

  size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
  if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)

> > +                       if (index->map + header_size + cache_size > index_end)
> > +                               return error("corrupted bitmap index file (too short to fit hash cache)");
> > +                       index->hashes = (void *)(index_end - cache_size);
> > +                       index_end -= cache_size;
> 
> If the header size we're adding is indeed too large, the addition in the
> check would be undefined behavior, if I'm not mistaken. In practical
> terms, with 32-bit pointers and a huge size, we'd wrap around, decide
> that everything is ok and go on to do the same erroneous subtraction as
> before.
> 
> Maybe shuffle a few things over from the left to the right to only make
> subtractions that we know are ok:
> 
>   if (cache_size > index_end - index->map - header_size)

Yes, I agree this should be done as a subtraction as you showed to avoid
integer overflow.

> But I don't think we can fully trust those subtractions. We're
> subtracting the size of two hashes (one in the header, one in the
> footer), but after the previous patch, we only know that there's room
> for one. So probably the previous patch could go
> 
>   +       /*
>   +        * Verify that we have room for the header and the
>   +        * trailing checksum hash, so we can safely subtract
>   +        * their sizes from map_size. We can afford to be
>   +        * a bit imprecise with the error message.
>   +        */
>   -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
>   +       if (index->map_size < header_size + the_hash_algo->rawsz)
> 
> I *think* I've got most of my comments here somewhat right, but I could
> easily have missed something.

Right. I think that's right, and the previous patch is just buggy.

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-13  4:57     ` Jeff King
@ 2020-11-13  5:26       ` Martin Ågren
  2020-11-13 21:29       ` Taylor Blau
  1 sibling, 0 replies; 174+ messages in thread
From: Martin Ågren @ 2020-11-13  5:26 UTC (permalink / raw)
  To: Jeff King; +Cc: Taylor Blau, Git Mailing List, Derrick Stolee, Junio C Hamano

On Fri, 13 Nov 2020 at 05:57, Jeff King <peff@peff.net> wrote:
>
> On Thu, Nov 12, 2020 at 06:47:09PM +0100, Martin Ågren wrote:
>
> > > +               uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
> >
> > Hmm. If `sizeof(size_t)` is 8, then this multiplication can't possibly
> > overflow. A huge value of `num_objects` (say, 0xffffffff) would give a
> > huge return value (0xffffffff<<2) which would be truncated (0xfffffffc).
> > I think?
>
> Yeah, `cache_size` should absolutely be a `size_t`. If you have more
> than a billion objects, obviously your cache is going to be bigger than
> that. But most importantly, somebody can _claim_ to have a huge number
> of objects and foil the size checks by wrapping around.
>
> > Do we want a `u32_mult()`?
>
> Nah, we should be doing this as a size_t in the first place. There are
> similar problems with the .idx format, btw. I have a series to deal with
> that which I've been meaning to post.

Yes, that makes sense!

> >   if (cache_size > index_end - index->map - header_size)
>
> Yes, I agree this should be done as a subtraction as you showed to avoid
> integer overflow.

> >   -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> >   +       if (index->map_size < header_size + the_hash_algo->rawsz)

> Right. I think that's right, and the previous patch is just buggy.

Martin

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-13  4:57     ` Jeff King
  2020-11-13  5:26       ` Martin Ågren
@ 2020-11-13 21:29       ` Taylor Blau
  2020-11-13 21:39         ` Jeff King
  1 sibling, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-13 21:29 UTC (permalink / raw)
  To: Jeff King
  Cc: Martin Ågren, Git Mailing List, Derrick Stolee, Junio C Hamano

On Thu, Nov 12, 2020 at 11:57:00PM -0500, Jeff King wrote:
> On Thu, Nov 12, 2020 at 06:47:09PM +0100, Martin Ågren wrote:
>
> > The addition should be ok or mmap has failed on us. Do we know that we
> > have room for the final hash there so that the subtraction is ok? Yes,
> > from the previous commit, we know we have room for the header, which is
> > even larger. But that's cheating a bit -- see below.
>
> Yeah, I agree this ought to be checking the minimum size against the
> header _plus_ the trailer.
>
> I think the previous patch is actually where it goes wrong. The original
> was checking for a minimum of:
>
>   if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
>
> which is the header plus the trailer. We want to readjust for the
> MAX_RAWSZ part of the header, so it should be:
>
>   size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
>   if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)

I'm not sure that I follow. If you apply this to the second patch in
this series, the only thing that changes is that it factors out:

  index->map_pos += ...;

into

  size_t header_size = ...;
  // ...
  index->map_pos += header_size;

What am I missing here?

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-13 21:29       ` Taylor Blau
@ 2020-11-13 21:39         ` Jeff King
  2020-11-13 21:49           ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-13 21:39 UTC (permalink / raw)
  To: Taylor Blau
  Cc: Martin Ågren, Git Mailing List, Derrick Stolee, Junio C Hamano

On Fri, Nov 13, 2020 at 04:29:54PM -0500, Taylor Blau wrote:

> > which is the header plus the trailer. We want to readjust for the
> > MAX_RAWSZ part of the header, so it should be:
> >
> >   size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> >   if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> 
> I'm not sure that I follow. If you apply this to the second patch in
> this series, the only thing that changes is that it factors out:
> 
>   index->map_pos += ...;
> 
> into
> 
>   size_t header_size = ...;
>   // ...
>   index->map_pos += header_size;
> 
> What am I missing here?

The problem is this hunk from patch 2:

> +       size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> 
> -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> +       if (index->map_size < header_size)
>                 return error("Corrupted bitmap index (missing header data)");

The header struct contains a field for the hash of the pack. So the
original code was taking that full header, and adding in another
current-algo rawsz to account for the trailer.

Afterwards, we adjust header_size to swap out the MAX_RAWSZ for the
current-algo rawsz. So header_size is a correct substitution for
sizeof(*header) now. But we still have to add back in
the_hash_algo->rawsz to account for the trailer. The second "+" line is
wrong to have removed it.

The later line we adjust:

> -       index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> +       index->map_pos += header_size;

is correct. It's just skipping past the header, and doesn't care about
the trailer at all (and confusing the two is probably what led me to
write the bug in the first place).
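
To put the pieces in one place, a rough sketch of the layout being
discussed; this is reconstructed from the thread rather than from the
format documentation, so take the details as approximate:

  /*
   * .bitmap file, roughly:
   *
   *   header       sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz
   *                bytes on disk (the struct's checksum member is
   *                oversized to GIT_MAX_RAWSZ)
   *   ewah data    type bitmaps and one entry per selected commit
   *   hash cache   optional; num_objects * sizeof(uint32_t) bytes,
   *                sitting just before the trailer
   *   trailer      the_hash_algo->rawsz bytes of checksum
   *
   * so the smallest plausible map_size is header_size + the_hash_algo->rawsz.
   */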

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-13 21:39         ` Jeff King
@ 2020-11-13 21:49           ` Taylor Blau
  2020-11-13 22:11             ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-13 21:49 UTC (permalink / raw)
  To: Jeff King
  Cc: Taylor Blau, Martin Ågren, Git Mailing List, Derrick Stolee,
	Junio C Hamano

On Fri, Nov 13, 2020 at 04:39:42PM -0500, Jeff King wrote:
> The problem is this hunk from patch 2:
>
> > +       size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> >
> > -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> > +       if (index->map_size < header_size)
> >                 return error("Corrupted bitmap index (missing header data)");
>
> The header struct contains a field for the hash of the pack. So the
> original code as taking that full header, and adding in another
> current-algo rawsz to account for the trailer.
>
> Afterwards, we adjust header_size to swap out the MAX_RAWSZ for the
> current-algo rawsz. So header_size is a correct substitution for
> sizeof(*header) now. But we still have to add back in
> the_hash_algo->rawsz to account for the trailer. The second "+" line is
> wrong to have removed it.

Thanks for your patient explanation. This hunk should instead read:

+       size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;

-       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
+       if (index->map_size < header_size + the_hash_algo->rawsz)
                return error("Corrupted bitmap index (missing header data)");

That error might not necessarily be right (it could say "missing header
or trailer data"), though. I'm open to if you think it should be
changed or not.

Since we didn't realize this bug at the time, the rest of the patch
message is worded correctly, I believe.

> The later line we adjust:
>
> > -       index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> > +       index->map_pos += header_size;
>
> is correct. It's just skipping past the header, and doesn't care about
> the trailer at all (and confusing the two is probably what led me to
> write the bug in the first place).

Right, makes sense.

> -Peff

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 03/23] pack-bitmap: bounds-check size of cache extension
  2020-11-13 21:49           ` Taylor Blau
@ 2020-11-13 22:11             ` Jeff King
  0 siblings, 0 replies; 174+ messages in thread
From: Jeff King @ 2020-11-13 22:11 UTC (permalink / raw)
  To: Taylor Blau
  Cc: Martin Ågren, Git Mailing List, Derrick Stolee, Junio C Hamano

On Fri, Nov 13, 2020 at 04:49:28PM -0500, Taylor Blau wrote:

> Thanks for your patient explanation. This hunk should instead read:
> 
> +       size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
> 
> -       if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
> +       if (index->map_size < header_size + the_hash_algo->rawsz)
>                 return error("Corrupted bitmap index (missing header data)");
> 
> That error might not necessarily be right (it could say "missing header
> or trailer data"), though. I'm open to if you think it should be
> changed or not.

Yeah, I agree it's misleading. In the idx code path we just say "%s is
too small", which is more accurate.

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-11 19:43 ` [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-11-13 22:23   ` SZEDER Gábor
  2020-11-13 23:03     ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: SZEDER Gábor @ 2020-11-13 22:23 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, gitster, peff

On Wed, Nov 11, 2020 at 02:43:51PM -0500, Taylor Blau wrote:
> From: Derrick Stolee <dstolee@microsoft.com>
> 
> The bitmap_writer_build() method calls bitmap_builder_init() to
> construct a list of commits reachable from the selected commits along
> with a "reverse graph". This reverse graph has edges pointing from a
> commit to other commits that can reach that commit. After computing a
> reachability bitmap for a commit, the values in that bitmap are then
> copied to the reachability bitmaps across the edges in the reverse
> graph.
> 
> We can now relax the role of the reverse graph to greatly reduce the
> number of intermediate reachability bitmaps we compute during this
> reverse walk. The end result is that we walk objects the same number of
> times as before when constructing the reachability bitmaps, but we also
> spend much less time copying bits between bitmaps and have much lower
> memory pressure in the process.
> 
> The core idea is to select a set of "important" commits based on
> interactions among the sets of commits reachable from each selected commit.

This patch breaks the test 'truncated bitmap fails gracefully (ewah)'
when run with GIT_TEST_DEFAULT_HASH=sha256:

  expecting success of 5310.66 'truncated bitmap fails gracefully (ewah)':
          test_config pack.writebitmaphashcache false &&
          git repack -ad &&
          git rev-list --use-bitmap-index --count --all >expect &&
          bitmap=$(ls .git/objects/pack/*.bitmap) &&
          test_when_finished "rm -f $bitmap" &&
          test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
          mv -f $bitmap.tmp $bitmap &&
          git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
          test_cmp expect actual &&
          test_i18ngrep corrupt.ewah.bitmap stderr
  
  + test_config pack.writebitmaphashcache false
  + git repack -ad
  + git rev-list --use-bitmap-index --count --all
  + ls .git/objects/pack/pack-23fe19963d67a1d4797a39622c15144bbf35ab76a2c0638ba9288cc688c24c16.bitmap
  + bitmap=.git/objects/pack/pack-23fe19963d67a1d4797a39622c15144bbf35ab76a2c0638ba9288cc688c24c16.bitmap
  + test_when_finished rm -f .git/objects/pack/pack-23fe19963d67a1d4797a39622c15144bbf35ab76a2c0638ba9288cc688c24c16.bitmap
  + test_copy_bytes 256
  + mv -f .git/objects/pack/pack-23fe19963d67a1d4797a39622c15144bbf35ab76a2c0638ba9288cc688c24c16.bitmap.tmp .git/objects/pack/pack-23fe19963d67a1d4797a39622c15144bbf35ab76a2c0638ba9288cc688c24c16.bitmap
  + git rev-list --use-bitmap-index --count --all
  + test_cmp expect actual
  --- expect      2020-11-13 22:20:39.246355100 +0000
  +++ actual      2020-11-13 22:20:39.254355294 +0000
  @@ -1 +1 @@
  -239
  +236
  error: last command exited with $?=1


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-13 22:23   ` SZEDER Gábor
@ 2020-11-13 23:03     ` Jeff King
  2020-11-14  6:23       ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-13 23:03 UTC (permalink / raw)
  To: SZEDER Gábor; +Cc: Taylor Blau, git, dstolee, gitster

On Fri, Nov 13, 2020 at 11:23:28PM +0100, SZEDER Gábor wrote:

> This patch breaks the test 'truncated bitmap fails gracefully (ewah)'
> when run with GIT_TEST_DEFAULT_HASH=sha256:

Thanks for reporting. It's mostly bad luck, and not really related to
this commit.

We're corrupting the bitmap by truncating it:

>           test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
>           mv -f $bitmap.tmp $bitmap &&

and then expecting to notice the problem. But it really depends on which
bitmaps we try to look at, and exactly where the truncation is. And this
commit just happens to rearrange the exact bytes we write to the bitmap
file.

If I do this:

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 68badd63cb..a83e7a93fb 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -436,7 +436,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&

then it passes with both sha1 and sha256.

But what's slightly disturbing is this output:

>   --- expect      2020-11-13 22:20:39.246355100 +0000
>   +++ actual      2020-11-13 22:20:39.254355294 +0000
>   @@ -1 +1 @@
>   -239
>   +236
>   error: last command exited with $?=1

We're actually producing the wrong answer here, which implies that
ewah_read_mmap() is not being careful enough. Or possibly we are feeding
it extra bytes (e.g., letting it run over into the name-hash cache or
into the trailer checksum).

I think we'll have to dig further into this, probably running the sha256
case in a debugger to see what offsets we actually end up reading.

-Peff

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-13 23:03     ` Jeff King
@ 2020-11-14  6:23       ` Jeff King
  0 siblings, 0 replies; 174+ messages in thread
From: Jeff King @ 2020-11-14  6:23 UTC (permalink / raw)
  To: SZEDER Gábor; +Cc: Taylor Blau, git, dstolee, gitster

On Fri, Nov 13, 2020 at 06:03:24PM -0500, Jeff King wrote:

> But what's slightly disturbing is this output:
> 
> >   --- expect      2020-11-13 22:20:39.246355100 +0000
> >   +++ actual      2020-11-13 22:20:39.254355294 +0000
> >   @@ -1 +1 @@
> >   -239
> >   +236
> >   error: last command exited with $?=1
> 
> We're actually producing the wrong answer here, which implies that
> ewah_read_mmap() is not being careful enough. Or possibly we are feeding
> it extra bytes (e.g., letting it run over into the name-hash cache or
> into the trailer checksum).
> 
> I think we'll have to dig further into this, probably running the sha256
> case in a debugger to see what offsets we actually end up reading.

Yep, the problem is in the caller, which is not careful about size
checks before reading the per-entry header that comes before the actual
ewah data.

The first hunk here fixes it (the second is just another possible
corruption I noticed, but not triggered by the test):

diff --git a/pack-bitmap.c b/pack-bitmap.c
index dc811ebae8..785009b04e 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -229,11 +229,16 @@ static int load_bitmap_entries_v1(struct bitmap_index *index)
 		uint32_t commit_idx_pos;
 		struct object_id oid;
 
+		if (index->map_size - index->map_pos < 6)
+			return error("corrupt ewah bitmap: truncated header for entry %d", i);
+
 		commit_idx_pos = read_be32(index->map, &index->map_pos);
 		xor_offset = read_u8(index->map, &index->map_pos);
 		flags = read_u8(index->map, &index->map_pos);
 
-		nth_packed_object_id(&oid, index->pack, commit_idx_pos);
+		if (nth_packed_object_id(&oid, index->pack, commit_idx_pos) < 0)
+			return error("corrupt ewah bitmap: commit index %u out of range",
+				     (unsigned)commit_idx_pos);
 
 		bitmap = read_bitmap_1(index);
 		if (!bitmap)

We should definitely do something like this, but there are some possible
further improvements:

  - I think that map_size includes the trailing hash, and almost
    certainly any post-index extensions. We could probably compute the
    correct boundary of the bitmaps themselves in the caller and make
    sure we don't read past it. I'm not sure if it's worth the effort,
    though. In a truncation situation, basically all bets are off (is
    the trailer still there and the bitmap entries malformed, or is the
    trailer truncated?). The best we can do is try to read what's there
    as if it's correct data (and protect ourselves when it's obviously
    bogus).

  - we could avoid the magic "6" if read_be32() and read_u8(), which are
    custom helpers for this function, checked sizes before advancing the
    pointers (a rough sketch of what that could look like follows this
    list).

  - I'm hesitant to add more tests in this area. As you can see from the
    commit which "broke" the test, truncating at byte N is going to be
    sensitive to small variations in the bitmap generation. So unless
    we're actually parsing the bitmaps and doing targeted corruptions,
    the tests will be somewhat brittle.
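
A rough sketch of what such checking helpers could look like -- the
names, signatures, and messages here are made up for illustration and
are not the actual pack-bitmap.c helpers:

  /* assumes map_pos <= map_size, as elsewhere in this file */
  static int read_be32_checked(struct bitmap_index *index, uint32_t *out)
  {
          if (index->map_size - index->map_pos < sizeof(*out))
                  return error("corrupt ewah bitmap: truncated entry");
          *out = get_be32(index->map + index->map_pos);
          index->map_pos += sizeof(*out);
          return 0;
  }

  static int read_u8_checked(struct bitmap_index *index, uint8_t *out)
  {
          if (index->map_size - index->map_pos < 1)
                  return error("corrupt ewah bitmap: truncated entry");
          *out = index->map[index->map_pos++];
          return 0;
  }

load_bitmap_entries_v1() could then check the return values instead of
relying on the magic "6" up front.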

-Peff

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH 15/23] t5310: add branch-based checks
  2020-11-11 21:04     ` Junio C Hamano
@ 2020-11-15 23:26       ` Johannes Schindelin
  0 siblings, 0 replies; 174+ messages in thread
From: Johannes Schindelin @ 2020-11-15 23:26 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Derrick Stolee, Taylor Blau, git, dstolee, peff

Hi Junio,

On Wed, 11 Nov 2020, Junio C Hamano wrote:

> Derrick Stolee <stolee@gmail.com> writes:
>
> > On 11/11/2020 2:43 PM, Taylor Blau wrote:
> >> From: Derrick Stolee <dstolee@microsoft.com>
> >>
> >> The current rev-list tests that check the bitmap data only work on HEAD
> >> instead of multiple branches. Expand the test cases to handle both
> >> 'master' and 'other' branches.
> >
> > Adding Johannes to CC since this likely will start colliding with his
> > default branch rename efforts.

It's okay. It's not like this is the only topic I have to navigate around.

> >> +rev_list_tests () {
> >> +	state=$1
> >> +
> >> +	for branch in "master" "other"
> >> +	do
> >> +		rev_list_tests_head
> >> +	done
> >> +}
> >
> > Specifically, this is a _new_ instance of "master", but all the
> > other instances of "master" are likely being converted to "main"
> > in parallel. It would certainly be easier to convert this test
> > _after_ these changes are applied, but that's unlikely to happen
> > with the current schedule of things.
>
> In some tests, it may make sense to configure init.defaultbranchname
> in $HOME/.gitconfig upfront and either (1) leave instances of
> 'master' as they are (we may want to avoid 'slave', but 'master' is
> not all that wrong), or (2) rewrite instances of 'master' to 'main'
> (or 'primary' or whatever init.defaultbranchname gets configured).

I explored this option very early on (so long ago that I failed to mention
it). The problem with that is that `$HOME` is set thusly in `test-lib.sh`:

	HOME="$TRASH_DIRECTORY"

In other words, the test repository's top-level directory is the home
directory. Which means that `git status`, when run directly after `.
test-lib.sh`, would already show `.gitconfig` as untracked, something
that would trip up a couple of test scripts.

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (22 preceding siblings ...)
  2020-11-11 19:44 ` [PATCH 23/23] pack-bitmap-write: better reuse bitmaps Taylor Blau
@ 2020-11-17 21:46 ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 01/24] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
                     ` (25 more replies)
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
  25 siblings, 26 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

Here is an updated version of this series, which improves the
performance of generating reachability bitmaps in large repositories.

Not very much has changed since last time, but a range-diff is below
nonetheless. The major changes are:

  - Avoid an overflow when bounds checking in the second and third
    patches (thanks, Martin, for noticing).
  - Incorporate a fix to avoid reading beyond an EWAH bitmap by double
    checking our read before actually doing it (thanks, Peff).
  - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
    and Peff).

Derrick Stolee (9):
  pack-bitmap-write: fill bitmap with commit history
  bitmap: add bitmap_diff_nonzero()
  commit: implement commit_list_contains()
  t5310: add branch-based checks
  pack-bitmap-write: rename children to reverse_edges
  pack-bitmap-write: build fewer intermediate bitmaps
  pack-bitmap-write: use existing bitmaps
  pack-bitmap-write: relax unique rewalk condition
  pack-bitmap-write: better reuse bitmaps

Jeff King (11):
  pack-bitmap: fix header size check
  pack-bitmap: bounds-check size of cache extension
  t5310: drop size of truncated ewah bitmap
  rev-list: die when --test-bitmap detects a mismatch
  ewah: factor out bitmap growth
  ewah: make bitmap growth less aggressive
  ewah: implement bitmap_or()
  ewah: add bitmap_dup() function
  pack-bitmap-write: reimplement bitmap writing
  pack-bitmap-write: pass ownership of intermediate bitmaps
  pack-bitmap-write: ignore BITMAP_FLAG_REUSE

Taylor Blau (4):
  ewah/ewah_bitmap.c: grow buffer past 1
  pack-bitmap.c: check reads more aggressively when loading
  pack-bitmap: factor out 'bitmap_for_commit()'
  pack-bitmap: factor out 'add_commit_to_bitmap()'

 builtin/pack-objects.c  |   1 -
 commit.c                |  11 +
 commit.h                |   2 +
 ewah/bitmap.c           |  54 ++++-
 ewah/ewah_bitmap.c      |   2 +-
 ewah/ewok.h             |   3 +-
 pack-bitmap-write.c     | 452 +++++++++++++++++++++++++---------------
 pack-bitmap.c           | 139 ++++++------
 pack-bitmap.h           |   8 +-
 t/t5310-pack-bitmaps.sh | 164 ++++++++++++---
 10 files changed, 555 insertions(+), 281 deletions(-)

Range-diff against v1:
 -:  ---------- >  1:  07054ff8ee ewah/ewah_bitmap.c: grow buffer past 1
 1:  1970a70207 !  2:  74a13b4a6e pack-bitmap: fix header size check
    @@ pack-bitmap.c: static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *ind
     +	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;

     -	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
    -+	if (index->map_size < header_size)
    - 		return error("Corrupted bitmap index (missing header data)");
    +-		return error("Corrupted bitmap index (missing header data)");
    ++	if (index->map_size < header_size + the_hash_algo->rawsz)
    ++		return error("Corrupted bitmap index (too small)");

      	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
    + 		return error("Corrupted bitmap index file (wrong header)");
     @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      	}

 2:  36b1815d03 !  3:  db11116dac pack-bitmap: bounds-check size of cache extension
    @@ Commit message
         pack-bitmap: bounds-check size of cache extension

         A .bitmap file may have a "name hash cache" extension, which puts a
    -    sequence of uint32_t bytes (one per object) at the end of the file. When
    -    we see a flag indicating this extension, we blindly subtract the
    +    sequence of uint32_t values (one per object) at the end of the file.
    +    When we see a flag indicating this extension, we blindly subtract the
         appropriate number of bytes from our available length. However, if the
         .bitmap file is too short, we'll underflow our length variable and wrap
         around, thinking we have a very large length. This can lead to reading
    @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      	/* Parse known bitmap format options */
      	{
      		uint32_t flags = ntohs(header->options);
    -+		uint32_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
    ++		size_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
     +		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;

      		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
    @@ pack-bitmap.c: static int load_bitmap_header(struct bitmap_index *index)
      		if (flags & BITMAP_OPT_HASH_CACHE) {
     -			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
     -			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
    -+			if (index->map + header_size + cache_size > index_end)
    ++			if (cache_size > index_end - index->map - header_size)
     +				return error("corrupted bitmap index file (too short to fit hash cache)");
     +			index->hashes = (void *)(index_end - cache_size);
     +			index_end -= cache_size;
 3:  edfec2ea62 =  4:  f779e76f82 t5310: drop size of truncated ewah bitmap
 4:  f3fec466f7 =  5:  1a9ac1c4ae rev-list: die when --test-bitmap detects a mismatch
 5:  b35012f44d =  6:  9bb1ea3b19 ewah: factor out bitmap growth
 6:  53b8bea98c =  7:  f8426c7e8b ewah: make bitmap growth less aggressive
 7:  98e3bfc1b2 =  8:  674e31f98e ewah: implement bitmap_or()
 8:  1bd115fc51 =  9:  a903c949d8 ewah: add bitmap_dup() function
 9:  adf16557c2 = 10:  c951206729 pack-bitmap-write: reimplement bitmap writing
10:  27992687c9 = 11:  466dd3036a pack-bitmap-write: pass ownership of intermediate bitmaps
11:  d92fb0e1e1 = 12:  8e5607929d pack-bitmap-write: fill bitmap with commit history
12:  bf86cb6196 = 13:  4840c64c51 bitmap: add bitmap_diff_nonzero()
13:  78cdf847aa = 14:  63e846f4e8 commit: implement commit_list_contains()
14:  778e9e9c44 = 15:  8b5d239333 t5310: add branch-based checks
15:  526d3509ef = 16:  60a46091bb pack-bitmap-write: rename children to reverse_edges
 -:  ---------- > 17:  8f7bb2dd2e pack-bitmap.c: check reads more aggressively when loading
16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
    @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
      '

      test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
    +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
    + 	git rev-list --use-bitmap-index --count --all >expect &&
    + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
    + 	test_when_finished "rm -f $bitmap" &&
    +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
    ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
    + 	mv -f $bitmap.tmp $bitmap &&
    + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
    + 	test_cmp expect actual &&
17:  e4f296100c = 19:  a206f48614 pack-bitmap-write: ignore BITMAP_FLAG_REUSE
18:  6e856bcf75 = 20:  9928b3c7da pack-bitmap: factor out 'bitmap_for_commit()'
19:  9b5f595f50 = 21:  f40a39a48a pack-bitmap: factor out 'add_commit_to_bitmap()'
20:  c458f98e11 = 22:  4bf5e78a54 pack-bitmap-write: use existing bitmaps
21:  3026876e7a = 23:  1da4fa0fb8 pack-bitmap-write: relax unique rewalk condition
22:  ce2716e291 = 24:  42399a1c2e pack-bitmap-write: better reuse bitmaps
--
2.29.2.312.gabc4d358d8

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v2 01/24] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 02/24] pack-bitmap: fix header size check Taylor Blau
                     ` (24 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

When the buffer size is exactly 1, we fail to grow it properly, since
the integer truncation means that 1 * 3 / 2 = 1. This can cause a bad
write on the line below.

Bandaid this by first padding the buffer by 16, and then growing it.
This still allows old blocks to fit into new ones, but fixes the case
where the block size equals 1.

Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/ewah_bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
index d59b1afe3d..3fae04ad00 100644
--- a/ewah/ewah_bitmap.c
+++ b/ewah/ewah_bitmap.c
@@ -45,7 +45,7 @@ static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
 static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
 {
 	if (self->buffer_size + 1 >= self->alloc_size)
-		buffer_grow(self, self->buffer_size * 3 / 2);
+		buffer_grow(self, (self->buffer_size + 16) * 3 / 2);
 
 	self->buffer[self->buffer_size++] = value;
 }
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 02/24] pack-bitmap: fix header size check
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 01/24] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
                     ` (23 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

When we parse a .bitmap header, we first check that we have enough bytes
to make a valid header. We do that based on sizeof(struct
bitmap_disk_header). However, as of 0f4d6cada8 (pack-bitmap: make bitmap
header handling hash agnostic, 2019-02-19), that struct oversizes its
checksum member to GIT_MAX_RAWSZ. That means we need to adjust for the
difference between that constant and the size of the actual hash we're
using. That commit adjusted the code which moves our pointer forward,
but forgot to update the size check.

This meant we were overly strict about the header size (requiring room
for a 32-byte worst-case hash, when sha1 is only 20 bytes). But in
practice it didn't matter because bitmap files tend to have at least 12
bytes of actual data anyway, so it was unlikely for a valid file to be
caught by this.

Let's fix it by pulling the header size into a separate variable and
using it in both spots. That fixes the bug and simplifies the code to make
it harder to have a mismatch like this in the future. It will also come
in handy in the next patch for more bounds checking.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4077e731e8..fe5647e72e 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -138,9 +138,10 @@ static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
 static int load_bitmap_header(struct bitmap_index *index)
 {
 	struct bitmap_disk_header *header = (void *)index->map;
+	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
 
-	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
-		return error("Corrupted bitmap index (missing header data)");
+	if (index->map_size < header_size + the_hash_algo->rawsz)
+		return error("Corrupted bitmap index (too small)");
 
 	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
 		return error("Corrupted bitmap index file (wrong header)");
@@ -164,7 +165,7 @@ static int load_bitmap_header(struct bitmap_index *index)
 	}
 
 	index->entry_count = ntohl(header->entry_count);
-	index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
+	index->map_pos += header_size;
 	return 0;
 }
 
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 03/24] pack-bitmap: bounds-check size of cache extension
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 01/24] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 02/24] pack-bitmap: fix header size check Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
                     ` (22 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

A .bitmap file may have a "name hash cache" extension, which puts a
sequence of uint32_t values (one per object) at the end of the file.
When we see a flag indicating this extension, we blindly subtract the
appropriate number of bytes from our available length. However, if the
.bitmap file is too short, we'll underflow our length variable and wrap
around, thinking we have a very large length. This can lead to reading
out-of-bounds bytes while loading individual ewah bitmaps.

We can fix this by checking the number of available bytes when we parse
the header. The existing "truncated bitmap" test is now split into two
tests: one where we don't have this extension at all (and hence actually
do try to read a truncated ewah bitmap) and one where we realize
up-front that we can't even fit in the cache structure. We'll check
stderr in each case to make sure we hit the error we're expecting.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c           |  8 ++++++--
 t/t5310-pack-bitmaps.sh | 17 +++++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index fe5647e72e..074d9ac8f2 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -153,14 +153,18 @@ static int load_bitmap_header(struct bitmap_index *index)
 	/* Parse known bitmap format options */
 	{
 		uint32_t flags = ntohs(header->options);
+		size_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
+		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;
 
 		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
 			return error("Unsupported options for bitmap index file "
 				"(Git requires BITMAP_OPT_FULL_DAG)");
 
 		if (flags & BITMAP_OPT_HASH_CACHE) {
-			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
-			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
+			if (cache_size > index_end - index->map - header_size)
+				return error("corrupted bitmap index file (too short to fit hash cache)");
+			index->hashes = (void *)(index_end - cache_size);
+			index_end -= cache_size;
 		}
 	}
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 8318781d2b..e2c3907a68 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -343,7 +343,8 @@ test_expect_success 'pack reuse respects --incremental' '
 	test_must_be_empty actual
 '
 
-test_expect_success 'truncated bitmap fails gracefully' '
+test_expect_success 'truncated bitmap fails gracefully (ewah)' '
+	test_config pack.writebitmaphashcache false &&
 	git repack -ad &&
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
@@ -352,7 +353,19 @@ test_expect_success 'truncated bitmap fails gracefully' '
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-	test_i18ngrep corrupt stderr
+	test_i18ngrep corrupt.ewah.bitmap stderr
+'
+
+test_expect_success 'truncated bitmap fails gracefully (cache)' '
+	git repack -ad &&
+	git rev-list --use-bitmap-index --count --all >expect &&
+	bitmap=$(ls .git/objects/pack/*.bitmap) &&
+	test_when_finished "rm -f $bitmap" &&
+	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	mv -f $bitmap.tmp $bitmap &&
+	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
+	test_cmp expect actual &&
+	test_i18ngrep corrupted.bitmap.index stderr
 '
 
 # have_delta <obj> <expected_base>
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 04/24] t5310: drop size of truncated ewah bitmap
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (2 preceding siblings ...)
  2020-11-17 21:46   ` [PATCH v2 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
                     ` (21 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

We truncate the .bitmap file to 512 bytes and expect to run into
problems reading an individual ewah file. But this length is somewhat
arbitrary, and just happened to work when the test was added in
9d2e330b17 (ewah_read_mmap: bounds-check mmap reads, 2018-06-14).

An upcoming commit will change the size of the history we create in the
test repo, which will cause this test to fail. We can future-proof it a
bit more by reducing the size of the truncated bitmap file.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index e2c3907a68..70a4fc4843 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -349,7 +349,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 05/24] rev-list: die when --test-bitmap detects a mismatch
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (3 preceding siblings ...)
  2020-11-17 21:46   ` [PATCH v2 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:46   ` [PATCH v2 06/24] ewah: factor out bitmap growth Taylor Blau
                     ` (20 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

You can use "git rev-list --test-bitmap HEAD" to check that bitmaps
produce the same answer we'd get from a regular traversal. But if we
detect an error, we only print "mismatch", and still exit with a
successful exit code.

That makes the uses of --test-bitmap in the test suite (e.g., in t5310)
mostly pointless: even if we saw an error, the tests wouldn't notice.
Let's instead call die(), which will let these tests work as designed,
and alert us if the bitmaps are bogus.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 074d9ac8f2..4431f9f120 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1328,7 +1328,7 @@ void test_bitmap_walk(struct rev_info *revs)
 	if (bitmap_equals(result, tdata.base))
 		fprintf(stderr, "OK!\n");
 	else
-		fprintf(stderr, "Mismatch!\n");
+		die("mismatch in bitmap results");
 
 	free_bitmap_index(bitmap_git);
 }
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 06/24] ewah: factor out bitmap growth
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (4 preceding siblings ...)
  2020-11-17 21:46   ` [PATCH v2 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
@ 2020-11-17 21:46   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 07/24] ewah: make bitmap growth less aggressive Taylor Blau
                     ` (19 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:46 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

We auto-grow bitmaps when somebody asks to set a bit whose position is
outside of our currently allocated range. Other operations besides
single bit-setting might need to do this, too, so let's pull it into its
own function.

Note that we change the semantics a little: you now ask for the number
of words you'd like to have, not the id of the block you'd like to write
to.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index d8cec585af..7c1ecfa6fd 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,18 +35,22 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
-void bitmap_set(struct bitmap *self, size_t pos)
+static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	size_t block = EWAH_BLOCK(pos);
-
-	if (block >= self->word_alloc) {
+	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = block ? block * 2 : 1;
+		self->word_alloc = word_alloc * 2;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
 	}
+}
 
+void bitmap_set(struct bitmap *self, size_t pos)
+{
+	size_t block = EWAH_BLOCK(pos);
+
+	bitmap_grow(self, block + 1);
 	self->words[block] |= EWAH_MASK(pos);
 }
 
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 07/24] ewah: make bitmap growth less aggressive
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (5 preceding siblings ...)
  2020-11-17 21:46   ` [PATCH v2 06/24] ewah: factor out bitmap growth Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 08/24] ewah: implement bitmap_or() Taylor Blau
                     ` (18 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

If you ask to set a bit in the Nth word and we haven't yet allocated
that many slots in our array, we'll increase the bitmap size to 2*N.
This means we might frequently end up with bitmaps that are twice the
necessary size (as soon as you ask for the biggest bit, we'll size up to
twice that).

But if we just allocate as many words as were asked for, we may not grow
fast enough. The worst case there is setting bit 0, then 1, etc. Each
time we grow we'd just extend by one more word, giving us linear
reallocations (and quadratic memory copies).

Let's combine those by allocating the maximum of:

 - what the caller asked for

 - a geometric increase in existing size; we'll switch to 3/2 instead of
   2 here. That's less aggressive and may help avoid fragmenting memory
   (N + 3N/2 > 9N/4, so old chunks can be reused as we scale up).

Our worst case is still allocating 3N/2 bits when only N+1 are in use
(you set bit N-1, then setting bit N causes us to grow by 3/2), but our
average should be much better.

This isn't usually that big a deal, but it will matter as we shift the
reachability bitmap generation code to store more bitmaps in memory.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 7c1ecfa6fd..43a59d7fed 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -39,7 +39,9 @@ static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = word_alloc * 2;
+		self->word_alloc = old_size * 3 / 2;
+		if (word_alloc > self->word_alloc)
+			self->word_alloc = word_alloc;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 08/24] ewah: implement bitmap_or()
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (6 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 07/24] ewah: make bitmap growth less aggressive Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 09/24] ewah: add bitmap_dup() function Taylor Blau
                     ` (17 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

We have a function to bitwise-OR an ewah into an uncompressed bitmap,
but not to OR two uncompressed bitmaps. Let's add it.

Interestingly, we have a public header declaration going back to
e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
function was never implemented.
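
As an illustration of the intended use (bitmap_new() and bitmap_set()
already exist; bitmap_or() is the function added below):

	struct bitmap *a = bitmap_new();
	struct bitmap *b = bitmap_new();

	bitmap_set(a, 1);
	bitmap_set(b, 1000);
	bitmap_or(a, b);	/* "a" now has bits 1 and 1000 set */

Note that bitmap_or() grows "self" as needed, so "a" does not have to be
pre-sized to hold "b"'s highest bit.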

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 43a59d7fed..c3f8e7242b 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -127,6 +127,15 @@ void bitmap_and_not(struct bitmap *self, struct bitmap *other)
 		self->words[i] &= ~other->words[i];
 }
 
+void bitmap_or(struct bitmap *self, const struct bitmap *other)
+{
+	size_t i;
+
+	bitmap_grow(self, other->word_alloc);
+	for (i = 0; i < other->word_alloc; i++)
+		self->words[i] |= other->words[i];
+}
+
 void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other)
 {
 	size_t original_size = self->word_alloc;
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 09/24] ewah: add bitmap_dup() function
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (7 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 08/24] ewah: implement bitmap_or() Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
                     ` (16 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

There's no easy way to make a copy of a bitmap. Obviously a caller can
iterate over the bits and set them one by one in a new bitmap, but we
can go much faster by copying whole words with memcpy().
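
For illustration (bitmap_new() and bitmap_set() already exist;
bitmap_dup() is added below):

	struct bitmap *base = bitmap_new();
	struct bitmap *copy;

	bitmap_set(base, 42);
	copy = bitmap_dup(base);	/* word-wise copy; bit 42 set in both */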

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 7 +++++++
 ewah/ewok.h   | 1 +
 2 files changed, 8 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index c3f8e7242b..eb7e2539be 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,6 +35,13 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
+struct bitmap *bitmap_dup(const struct bitmap *src)
+{
+	struct bitmap *dst = bitmap_word_alloc(src->word_alloc);
+	COPY_ARRAY(dst->words, src->words, src->word_alloc);
+	return dst;
+}
+
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	if (word_alloc > self->word_alloc) {
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 011852bef1..1fc555e672 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -173,6 +173,7 @@ struct bitmap {
 
 struct bitmap *bitmap_new(void);
 struct bitmap *bitmap_word_alloc(size_t word_alloc);
+struct bitmap *bitmap_dup(const struct bitmap *src);
 void bitmap_set(struct bitmap *self, size_t pos);
 void bitmap_unset(struct bitmap *self, size_t pos);
 int bitmap_get(struct bitmap *self, size_t pos);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (8 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 09/24] ewah: add bitmap_dup() function Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-25  0:53     ` Jonathan Tan
  2020-11-17 21:47   ` [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
                     ` (15 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

The bitmap generation code works by iterating over the set of commits
for which we plan to write bitmaps, and then for each one performing a
traditional traversal over the reachable commits and trees, filling in
the bitmap. Between two traversals, we can often reuse the previous
bitmap result as long as the first commit is an ancestor of the second.
However, our worst case is that we may end up doing "n" complete
traversals to the root in order to create "n" bitmaps.

In a real-world case (the shared-storage repo consisting of all GitHub
forks of chromium/chromium), we perform very poorly: generating bitmaps
takes ~3 hours, whereas we can walk the whole object graph in ~3
minutes.

This commit completely rewrites the algorithm, with the goal of
accessing each object only once. It works roughly like this:

  - generate a list of commits in topo-order using a single traversal

  - invert the edges of the graph (so have parents point at their
    children)

  - make one pass in reverse topo-order, generating a bitmap for each
    commit and passing the result along to child nodes

We generate correct results because each node we visit has already had
all of its ancestors added to the bitmap. And we make only two linear
passes over the commits.
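
The propagation idea can be sketched outside of Git as a self-contained
toy that uses plain 64-bit masks in place of real reachability bitmaps
and a hard-coded five-commit graph (the real implementation below works
on struct bitmap and the actual object graph, of course):

	#include <stdio.h>
	#include <stdint.h>

	/*
	 * Commits 0..4 in topo-order (children before parents);
	 * children[i] is a mask of the children of commit i. Commit 4
	 * is the root, commit 0 the tip, commit 1 a merge of 2 and 3.
	 */
	#define NR 5
	static const uint64_t children[NR] = {
		0,			/* 0: tip, no children */
		1u << 0,		/* 1 -> 0 */
		1u << 1,		/* 2 -> 1 */
		1u << 1,		/* 3 -> 1 */
		(1u << 2) | (1u << 3),	/* 4 -> 2, 3 */
	};

	int main(void)
	{
		uint64_t reach[NR] = { 0 };
		int i, c;

		/* one pass in reverse topo-order: oldest (4) to newest (0) */
		for (i = NR - 1; i >= 0; i--) {
			reach[i] |= (uint64_t)1 << i;	/* a commit reaches itself */
			for (c = 0; c < NR; c++)	/* push bits to children */
				if (children[i] & ((uint64_t)1 << c))
					reach[c] |= reach[i];
		}
		for (i = 0; i < NR; i++)
			printf("commit %d reaches %#llx\n",
			       i, (unsigned long long)reach[i]);
		return 0;
	}

Each commit's mask is complete by the time we visit it because all of its
ancestors were processed (and pushed their bits down) first; the tip
(commit 0) ends up with all five bits set, while the root (commit 4) has
only its own.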

We also visit each tree usually only once. When filling in a bitmap, we
don't bother to recurse into trees whose bit is already set in the
bitmap (since we know we've already done so when setting their bit).
That means that if commit A references tree T, none of its descendants
will need to open T again. I say "usually", though, because it is
possible for a given tree to be mentioned in unrelated parts of history
(e.g., cherry-picking to a parallel branch).

So we've accomplished our goal, and the resulting algorithm is pretty
simple to understand. But there are some downsides, at least with this
initial implementation:

  - we no longer reuse the results of any on-disk bitmaps when
    generating. So we'd expect to sometimes be slower than the original
    when bitmaps already exist. However, this is something we'll be able
    to add back in later.

  - we use much more memory. Instead of keeping one bitmap in memory at
    a time, we're passing them up through the graph. So our memory use
    should scale with the graph width (times the size of a bitmap).

So how does it perform?

For a clone of linux.git, generating bitmaps from scratch with the old
algorithm took 63s. Using this algorithm it takes 205s. Which is much
worse, but _might_ be acceptable if it behaved linearly as the size
grew. It also increases peak heap usage by ~1G. That's not impossibly
large, but not encouraging.

On the complete fork-network of torvalds/linux, it increases the peak
RAM usage by 40GB. Yikes. (I forgot to record the time it took, but the
memory usage was too much to consider this reasonable anyway).

On the complete fork-network of chromium/chromium, I ran out of memory
before succeeding. Some back-of-the-envelope calculations indicate it
would need 80+GB to complete.

So at this stage, we've managed to make things much worse. But because
of the way this new algorithm is structured, there are a lot of
opportunities for optimization on top. We'll start implementing those in
the follow-on patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 303 ++++++++++++++++++++++++--------------------
 1 file changed, 169 insertions(+), 134 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 5e998bdaa7..f2f0b6b2c2 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -110,8 +110,6 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
 /**
  * Compute the actual bitmaps
  */
-static struct object **seen_objects;
-static unsigned int seen_objects_nr, seen_objects_alloc;
 
 static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
 {
@@ -127,21 +125,6 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	writer.selected_nr++;
 }
 
-static inline void mark_as_seen(struct object *object)
-{
-	ALLOC_GROW(seen_objects, seen_objects_nr + 1, seen_objects_alloc);
-	seen_objects[seen_objects_nr++] = object;
-}
-
-static inline void reset_all_seen(void)
-{
-	unsigned int i;
-	for (i = 0; i < seen_objects_nr; ++i) {
-		seen_objects[i]->flags &= ~(SEEN | ADDED | SHOWN);
-	}
-	seen_objects_nr = 0;
-}
-
 static uint32_t find_object_pos(const struct object_id *oid)
 {
 	struct object_entry *entry = packlist_find(writer.to_pack, oid);
@@ -154,60 +137,6 @@ static uint32_t find_object_pos(const struct object_id *oid)
 	return oe_in_pack_pos(writer.to_pack, entry);
 }
 
-static void show_object(struct object *object, const char *name, void *data)
-{
-	struct bitmap *base = data;
-	bitmap_set(base, find_object_pos(&object->oid));
-	mark_as_seen(object);
-}
-
-static void show_commit(struct commit *commit, void *data)
-{
-	mark_as_seen((struct object *)commit);
-}
-
-static int
-add_to_include_set(struct bitmap *base, struct commit *commit)
-{
-	khiter_t hash_pos;
-	uint32_t bitmap_pos = find_object_pos(&commit->object.oid);
-
-	if (bitmap_get(base, bitmap_pos))
-		return 0;
-
-	hash_pos = kh_get_oid_map(writer.bitmaps, commit->object.oid);
-	if (hash_pos < kh_end(writer.bitmaps)) {
-		struct bitmapped_commit *bc = kh_value(writer.bitmaps, hash_pos);
-		bitmap_or_ewah(base, bc->bitmap);
-		return 0;
-	}
-
-	bitmap_set(base, bitmap_pos);
-	return 1;
-}
-
-static int
-should_include(struct commit *commit, void *_data)
-{
-	struct bitmap *base = _data;
-
-	if (!add_to_include_set(base, commit)) {
-		struct commit_list *parent = commit->parents;
-
-		mark_as_seen((struct object *)commit);
-
-		while (parent) {
-			parent->item->object.flags |= SEEN;
-			mark_as_seen((struct object *)parent->item);
-			parent = parent->next;
-		}
-
-		return 0;
-	}
-
-	return 1;
-}
-
 static void compute_xor_offsets(void)
 {
 	static const int MAX_XOR_OFFSET_SEARCH = 10;
@@ -248,79 +177,185 @@ static void compute_xor_offsets(void)
 	}
 }
 
-void bitmap_writer_build(struct packing_data *to_pack)
+struct bb_commit {
+	struct commit_list *children;
+	struct bitmap *bitmap;
+	unsigned selected:1;
+	unsigned idx; /* within selected array */
+};
+
+define_commit_slab(bb_data, struct bb_commit);
+
+struct bitmap_builder {
+	struct bb_data data;
+	struct commit **commits;
+	size_t commits_nr, commits_alloc;
+};
+
+static void bitmap_builder_init(struct bitmap_builder *bb,
+				struct bitmap_writer *writer)
 {
-	static const double REUSE_BITMAP_THRESHOLD = 0.2;
-
-	int i, reuse_after, need_reset;
-	struct bitmap *base = bitmap_new();
 	struct rev_info revs;
+	struct commit *commit;
+	unsigned int i;
+
+	memset(bb, 0, sizeof(*bb));
+	init_bb_data(&bb->data);
+
+	reset_revision_walk();
+	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
+	revs.topo_order = 1;
+
+	for (i = 0; i < writer->selected_nr; i++) {
+		struct commit *c = writer->selected[i].commit;
+		struct bb_commit *ent = bb_data_at(&bb->data, c);
+		ent->selected = 1;
+		ent->idx = i;
+		add_pending_object(&revs, &c->object, "");
+	}
+
+	if (prepare_revision_walk(&revs))
+		die("revision walk setup failed");
+
+	while ((commit = get_revision(&revs))) {
+		struct commit_list *p;
+
+		parse_commit_or_die(commit);
+
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = commit;
+
+		for (p = commit->parents; p; p = p->next) {
+			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
+			commit_list_insert(commit, &ent->children);
+		}
+	}
+}
+
+static void bitmap_builder_clear(struct bitmap_builder *bb)
+{
+	clear_bb_data(&bb->data);
+	free(bb->commits);
+	bb->commits_nr = bb->commits_alloc = 0;
+}
+
+static void fill_bitmap_tree(struct bitmap *bitmap,
+			     struct tree *tree)
+{
+	uint32_t pos;
+	struct tree_desc desc;
+	struct name_entry entry;
+
+	/*
+	 * If our bit is already set, then there is nothing to do. Both this
+	 * tree and all of its children will be set.
+	 */
+	pos = find_object_pos(&tree->object.oid);
+	if (bitmap_get(bitmap, pos))
+		return;
+	bitmap_set(bitmap, pos);
+
+	if (parse_tree(tree) < 0)
+		die("unable to load tree object %s",
+		    oid_to_hex(&tree->object.oid));
+	init_tree_desc(&desc, tree->buffer, tree->size);
+
+	while (tree_entry(&desc, &entry)) {
+		switch (object_type(entry.mode)) {
+		case OBJ_TREE:
+			fill_bitmap_tree(bitmap,
+					 lookup_tree(the_repository, &entry.oid));
+			break;
+		case OBJ_BLOB:
+			bitmap_set(bitmap, find_object_pos(&entry.oid));
+			break;
+		default:
+			/* Gitlink, etc; not reachable */
+			break;
+		}
+	}
+
+	free_tree_buffer(tree);
+}
+
+static void fill_bitmap_commit(struct bb_commit *ent,
+			       struct commit *commit)
+{
+	if (!ent->bitmap)
+		ent->bitmap = bitmap_new();
+
+	/*
+	 * mark ourselves, but do not bother with parents; their values
+	 * will already have been propagated to us
+	 */
+	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
+	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+}
+
+static void store_selected(struct bb_commit *ent, struct commit *commit)
+{
+	struct bitmapped_commit *stored = &writer.selected[ent->idx];
+	khiter_t hash_pos;
+	int hash_ret;
+
+	/*
+	 * the "reuse bitmaps" phase may have stored something here, but
+	 * our new algorithm doesn't use it. Drop it.
+	 */
+	if (stored->bitmap)
+		ewah_free(stored->bitmap);
+
+	stored->bitmap = bitmap_to_ewah(ent->bitmap);
+
+	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
+	if (hash_ret == 0)
+		die("Duplicate entry when writing index: %s",
+		    oid_to_hex(&commit->object.oid));
+	kh_value(writer.bitmaps, hash_pos) = stored;
+}
+
+void bitmap_writer_build(struct packing_data *to_pack)
+{
+	struct bitmap_builder bb;
+	size_t i;
+	int nr_stored = 0; /* for progress */
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
 
 	if (writer.show_progress)
 		writer.progress = start_progress("Building bitmaps", writer.selected_nr);
-
-	repo_init_revisions(to_pack->repo, &revs, NULL);
-	revs.tag_objects = 1;
-	revs.tree_objects = 1;
-	revs.blob_objects = 1;
-	revs.no_walk = 0;
-
-	revs.include_check = should_include;
-	reset_revision_walk();
-
-	reuse_after = writer.selected_nr * REUSE_BITMAP_THRESHOLD;
-	need_reset = 0;
-
-	for (i = writer.selected_nr - 1; i >= 0; --i) {
-		struct bitmapped_commit *stored;
-		struct object *object;
-
-		khiter_t hash_pos;
-		int hash_ret;
-
-		stored = &writer.selected[i];
-		object = (struct object *)stored->commit;
-
-		if (stored->bitmap == NULL) {
-			if (i < writer.selected_nr - 1 &&
-			    (need_reset ||
-			     !in_merge_bases(writer.selected[i + 1].commit,
-					     stored->commit))) {
-			    bitmap_reset(base);
-			    reset_all_seen();
-			}
-
-			add_pending_object(&revs, object, "");
-			revs.include_check_data = base;
-
-			if (prepare_revision_walk(&revs))
-				die("revision walk setup failed");
-
-			traverse_commit_list(&revs, show_commit, show_object, base);
-
-			object_array_clear(&revs.pending);
-
-			stored->bitmap = bitmap_to_ewah(base);
-			need_reset = 0;
-		} else
-			need_reset = 1;
-
-		if (i >= reuse_after)
-			stored->flags |= BITMAP_FLAG_REUSE;
-
-		hash_pos = kh_put_oid_map(writer.bitmaps, object->oid, &hash_ret);
-		if (hash_ret == 0)
-			die("Duplicate entry when writing index: %s",
-			    oid_to_hex(&object->oid));
-
-		kh_value(writer.bitmaps, hash_pos) = stored;
-		display_progress(writer.progress, writer.selected_nr - i);
+	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
+		the_repository);
+
+	bitmap_builder_init(&bb, &writer);
+	for (i = bb.commits_nr; i > 0; i--) {
+		struct commit *commit = bb.commits[i-1];
+		struct bb_commit *ent = bb_data_at(&bb.data, commit);
+		struct commit *child;
+
+		fill_bitmap_commit(ent, commit);
+
+		if (ent->selected) {
+			store_selected(ent, commit);
+			nr_stored++;
+			display_progress(writer.progress, nr_stored);
+		}
+
+		while ((child = pop_commit(&ent->children))) {
+			struct bb_commit *child_ent =
+				bb_data_at(&bb.data, child);
+
+			if (child_ent->bitmap)
+				bitmap_or(child_ent->bitmap, ent->bitmap);
+			else
+				child_ent->bitmap = bitmap_dup(ent->bitmap);
+		}
+		bitmap_free(ent->bitmap);
+		ent->bitmap = NULL;
 	}
+	bitmap_builder_clear(&bb);
 
-	bitmap_free(base);
 	stop_progress(&writer.progress);
 
 	compute_xor_offsets();
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (9 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-25  1:00     ` Jonathan Tan
  2020-11-17 21:47   ` [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
                     ` (14 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

Our algorithm to generate reachability bitmaps walks through the commit
graph from the bottom up, passing bitmap data from each commit to its
descendants. For a linear stretch of history like:

  A -- B -- C

our sequence of steps is:

  - compute the bitmap for A by walking its trees, etc

  - duplicate A's bitmap as a starting point for B; we can now free A's
    bitmap, since we only needed it as an intermediate result

  - OR in any extra objects that B can reach into its bitmap

  - duplicate B's bitmap as a starting point for C; likewise, free B's
    bitmap

  - OR in objects for C, and so on...

Rather than duplicating bitmaps and immediately freeing the original, we
can just pass ownership from commit to commit. Note that this doesn't
always work:

  - the recipient may be a merge which already has an intermediate
    bitmap from its other ancestor. In that case we have to OR our
    result into it. Note that the first ancestor to reach the merge does
    get to pass ownership, though.

  - we may have multiple children; we can only pass ownership to one of
    them

However, it happens often enough and copying bitmaps is expensive enough
that this provides a noticeable speedup. On a clone of linux.git, this
reduces the time to generate bitmaps from 205s to 70s. This is about the
same amount of time it took to generate bitmaps using our old "many
traversals" algorithm (the previous commit measures the identical
scenario as taking 63s). It unfortunately provides only a very modest
reduction in the peak memory usage, though.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index f2f0b6b2c2..d2d46ff5f4 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -333,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
 		struct commit *child;
+		int reused = 0;
 
 		fill_bitmap_commit(ent, commit);
 
@@ -348,10 +349,15 @@ void bitmap_writer_build(struct packing_data *to_pack)
 
 			if (child_ent->bitmap)
 				bitmap_or(child_ent->bitmap, ent->bitmap);
-			else
+			else if (reused)
 				child_ent->bitmap = bitmap_dup(ent->bitmap);
+			else {
+				child_ent->bitmap = ent->bitmap;
+				reused = 1;
+			}
 		}
-		bitmap_free(ent->bitmap);
+		if (!reused)
+			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
 	bitmap_builder_clear(&bb);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (10 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-22 21:50     ` Junio C Hamano
  2020-11-25  1:14     ` Jonathan Tan
  2020-11-17 21:47   ` [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero() Taylor Blau
                     ` (13 subsequent siblings)
  25 siblings, 2 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The fill_bitmap_commit() method assumes that every parent of the given
commit is already part of the current bitmap. Instead of making that
assumption, let's walk parents until we reach commits already part of
the bitmap. Set the bit for each parent immediately after querying it to
avoid repeated calls to find_object_pos() and to avoid inserting the
parent into the queue multiple times.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index d2d46ff5f4..361f3305a2 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -12,6 +12,7 @@
 #include "sha1-lookup.h"
 #include "pack-objects.h"
 #include "commit-reach.h"
+#include "prio-queue.h"
 
 struct bitmapped_commit {
 	struct commit *commit;
@@ -279,17 +280,30 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 }
 
 static void fill_bitmap_commit(struct bb_commit *ent,
-			       struct commit *commit)
+			       struct commit *commit,
+			       struct prio_queue *queue)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	/*
-	 * mark ourselves, but do not bother with parents; their values
-	 * will already have been propagated to us
-	 */
 	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
-	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+	prio_queue_put(queue, commit);
+
+	while (queue->nr) {
+		struct commit_list *p;
+		struct commit *c = prio_queue_get(queue);
+
+		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
+		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+
+		for (p = c->parents; p; p = p->next) {
+			int pos = find_object_pos(&p->item->object.oid);
+			if (!bitmap_get(ent->bitmap, pos)) {
+				bitmap_set(ent->bitmap, pos);
+				prio_queue_put(queue, p->item);
+			}
+		}
+	}
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -319,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	struct bitmap_builder bb;
 	size_t i;
 	int nr_stored = 0; /* for progress */
+	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -335,7 +350,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit);
+		fill_bitmap_commit(ent, commit, &queue);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -360,6 +375,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
+	clear_prio_queue(&queue);
 	bitmap_builder_clear(&bb);
 
 	stop_progress(&writer.progress);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero()
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (11 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-22 22:01     ` Junio C Hamano
  2020-11-17 21:47   ` [PATCH v2 14/24] commit: implement commit_list_contains() Taylor Blau
                     ` (12 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_diff_nonzero() function checks whether the 'self' bitmap
contains any bits that are not on in the 'other' bitmap.
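
For illustration (every helper here already exists in ewah/bitmap.c; the
new function is added in the diff below):

	struct bitmap *a = bitmap_new();
	struct bitmap *b = bitmap_new();

	bitmap_set(a, 3);
	bitmap_set(b, 3);
	bitmap_set(b, 7);

	bitmap_diff_nonzero(a, b);	/* 0: "a" has nothing that "b" lacks */
	bitmap_diff_nonzero(b, a);	/* 1: "b" has bit 7, which "a" lacks */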

Also, delete the declaration of bitmap_is_subset() as it is not used or
implemented.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 24 ++++++++++++++++++++++++
 ewah/ewok.h   |  2 +-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index eb7e2539be..e2ebeac0e5 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -200,6 +200,30 @@ int bitmap_equals(struct bitmap *self, struct bitmap *other)
 	return 1;
 }
 
+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other)
+{
+	struct bitmap *small;
+	size_t i;
+
+	if (self->word_alloc < other->word_alloc) {
+		small = self;
+	} else {
+		small = other;
+
+		for (i = other->word_alloc; i < self->word_alloc; i++) {
+			if (self->words[i] != 0)
+				return 1;
+		}
+	}
+
+	for (i = 0; i < small->word_alloc; i++) {
+		if ((self->words[i] & ~other->words[i]))
+			return 1;
+	}
+
+	return 0;
+}
+
 void bitmap_reset(struct bitmap *bitmap)
 {
 	memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 1fc555e672..156c71d06d 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -180,7 +180,7 @@ int bitmap_get(struct bitmap *self, size_t pos);
 void bitmap_reset(struct bitmap *self);
 void bitmap_free(struct bitmap *self);
 int bitmap_equals(struct bitmap *self, struct bitmap *other);
-int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other);
 
 struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
 struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 14/24] commit: implement commit_list_contains()
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (12 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero() Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 15/24] t5310: add branch-based checks Taylor Blau
                     ` (11 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

It can be helpful to check if a commit_list contains a commit. Use
pointer equality, assuming lookup_commit() was used.
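
For illustration, with 'a', 'b', and 'c' standing in for distinct commits
obtained through lookup_commit():

	struct commit_list *list = NULL;

	commit_list_insert(a, &list);
	commit_list_insert(b, &list);

	commit_list_contains(a, list);	/* 1: same pointer is in the list */
	commit_list_contains(c, list);	/* 0: never inserted */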

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 commit.c | 11 +++++++++++
 commit.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/commit.c b/commit.c
index fe1fa3dc41..9a785bf906 100644
--- a/commit.c
+++ b/commit.c
@@ -544,6 +544,17 @@ struct commit_list *commit_list_insert(struct commit *item, struct commit_list *
 	return new_list;
 }
 
+int commit_list_contains(struct commit *item, struct commit_list *list)
+{
+	while (list) {
+		if (list->item == item)
+			return 1;
+		list = list->next;
+	}
+
+	return 0;
+}
+
 unsigned commit_list_count(const struct commit_list *l)
 {
 	unsigned c = 0;
diff --git a/commit.h b/commit.h
index 5467786c7b..742a6de460 100644
--- a/commit.h
+++ b/commit.h
@@ -167,6 +167,8 @@ int find_commit_subject(const char *commit_buffer, const char **subject);
 
 struct commit_list *commit_list_insert(struct commit *item,
 					struct commit_list **list);
+int commit_list_contains(struct commit *item,
+			 struct commit_list *list);
 struct commit_list **commit_list_append(struct commit *commit,
 					struct commit_list **next);
 unsigned commit_list_count(const struct commit_list *l);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 15/24] t5310: add branch-based checks
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (13 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 14/24] commit: implement commit_list_contains() Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-25  1:17     ` Jonathan Tan
  2020-11-17 21:47   ` [PATCH v2 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
                     ` (10 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The current rev-list tests that check the bitmap data only work on HEAD
instead of multiple branches. Expand the test cases to handle both
'master' and 'other' branches.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 61 +++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 70a4fc4843..6bf68fee85 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -41,63 +41,70 @@ test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
 	git rev-list --test-bitmap HEAD
 '
 
-rev_list_tests() {
-	state=$1
-
-	test_expect_success "counting commits via bitmap ($state)" '
-		git rev-list --count HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD >actual &&
+rev_list_tests_head () {
+	test_expect_success "counting commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch >expect &&
+		git rev-list --use-bitmap-index --count $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting partial commits via bitmap ($state)" '
-		git rev-list --count HEAD~5..HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD~5..HEAD >actual &&
+	test_expect_success "counting partial commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch~5..$branch >expect &&
+		git rev-list --use-bitmap-index --count $branch~5..$branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limit ($state)" '
-		git rev-list --count -n 1 HEAD >expect &&
-		git rev-list --use-bitmap-index --count -n 1 HEAD >actual &&
+	test_expect_success "counting commits with limit ($state, $branch)" '
+		git rev-list --count -n 1 $branch >expect &&
+		git rev-list --use-bitmap-index --count -n 1 $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting non-linear history ($state)" '
+	test_expect_success "counting non-linear history ($state, $branch)" '
 		git rev-list --count other...master >expect &&
 		git rev-list --use-bitmap-index --count other...master >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limiting ($state)" '
-		git rev-list --count HEAD -- 1.t >expect &&
-		git rev-list --use-bitmap-index --count HEAD -- 1.t >actual &&
+	test_expect_success "counting commits with limiting ($state, $branch)" '
+		git rev-list --count $branch -- 1.t >expect &&
+		git rev-list --use-bitmap-index --count $branch -- 1.t >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting objects via bitmap ($state)" '
-		git rev-list --count --objects HEAD >expect &&
-		git rev-list --use-bitmap-index --count --objects HEAD >actual &&
+	test_expect_success "counting objects via bitmap ($state, $branch)" '
+		git rev-list --count --objects $branch >expect &&
+		git rev-list --use-bitmap-index --count --objects $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "enumerate commits ($state)" '
-		git rev-list --use-bitmap-index HEAD >actual &&
-		git rev-list HEAD >expect &&
+	test_expect_success "enumerate commits ($state, $branch)" '
+		git rev-list --use-bitmap-index $branch >actual &&
+		git rev-list $branch >expect &&
 		test_bitmap_traversal --no-confirm-bitmaps expect actual
 	'
 
-	test_expect_success "enumerate --objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD >actual &&
-		git rev-list --objects HEAD >expect &&
+	test_expect_success "enumerate --objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch >actual &&
+		git rev-list --objects $branch >expect &&
 		test_bitmap_traversal expect actual
 	'
 
-	test_expect_success "bitmap --objects handles non-commit objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD tagged-blob >actual &&
+	test_expect_success "bitmap --objects handles non-commit objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch tagged-blob >actual &&
 		grep $blob actual
 	'
 }
 
+rev_list_tests () {
+	state=$1
+
+	for branch in "master" "other"
+	do
+		rev_list_tests_head
+	done
+}
+
 rev_list_tests 'full bitmap'
 
 test_expect_success 'clone from bitmapped repository' '
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 16/24] pack-bitmap-write: rename children to reverse_edges
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (14 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 15/24] t5310: add branch-based checks Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:47   ` [PATCH v2 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
                     ` (9 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_builder_init() method walks the reachable commits in
topological order and constructs a "reverse graph" along the way. At the
moment, this reverse graph contains an edge from commit A to commit B if
and only if A is a parent of B. Thus, the name "children" is appropriate
for this reverse graph.

In the next change, we will repurpose the reverse graph so that its edges
are no longer between directly-adjacent commits in the commit-graph, but
instead represent a more abstract relationship. The previous changes have
already incorporated
the necessary updates to fill_bitmap_commit() that allow these edges to
not be immediate children.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 361f3305a2..369c76a87c 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -179,7 +179,7 @@ static void compute_xor_offsets(void)
 }
 
 struct bb_commit {
-	struct commit_list *children;
+	struct commit_list *reverse_edges;
 	struct bitmap *bitmap;
 	unsigned selected:1;
 	unsigned idx; /* within selected array */
@@ -228,7 +228,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		for (p = commit->parents; p; p = p->next) {
 			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->children);
+			commit_list_insert(commit, &ent->reverse_edges);
 		}
 	}
 }
@@ -358,7 +358,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			display_progress(writer.progress, nr_stored);
 		}
 
-		while ((child = pop_commit(&ent->children))) {
+		while ((child = pop_commit(&ent->reverse_edges))) {
 			struct bb_commit *child_ent =
 				bb_data_at(&bb.data, child);
 
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 17/24] pack-bitmap.c: check reads more aggressively when loading
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (15 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
@ 2020-11-17 21:47   ` Taylor Blau
  2020-11-17 21:48   ` [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
                     ` (8 subsequent siblings)
  25 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:47 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

Before 'load_bitmap_entries_v1()' reads an actual EWAH bitmap, it should
check that it can safely do so by ensuring that there are at least 6
bytes available to be read (four for the commit's index position, plus
one each for the xor offset and the flags).

Likewise, it should check that the commit index it read refers to a
legitimate object in the pack.

The first fix catches a truncation bug that was exposed when testing,
and the second is purely precautionary.

There are some possible future improvements, not pursued here. They are:

  - Computing the correct boundary of the bitmap itself in the caller
    and ensuring that we don't read past it. This may or may not be
    worth it, since in a truncation situation all bets are off (is the
    trailer still there and the bitmap entries malformed, or is the
    trailer truncated too?). The best we can do is try to read what's there
    as if it's correct data (and protect ourselves when it's obviously
    bogus).

  - Avoid the magic "6" by teaching read_be32() and read_u8() (both of
    which are custom helpers for this function) to check sizes before
    advancing the pointers.

  - Adding more tests in this area. These truncation situations are
    remarkably sensitive to even subtle changes in the bitmap
    generation, so the resulting tests are likely to be quite brittle.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4431f9f120..60c781d100 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -229,11 +229,16 @@ static int load_bitmap_entries_v1(struct bitmap_index *index)
 		uint32_t commit_idx_pos;
 		struct object_id oid;
 
+		if (index->map_size - index->map_pos < 6)
+			return error("corrupt ewah bitmap: truncated header for entry %d", i);
+
 		commit_idx_pos = read_be32(index->map, &index->map_pos);
 		xor_offset = read_u8(index->map, &index->map_pos);
 		flags = read_u8(index->map, &index->map_pos);
 
-		nth_packed_object_id(&oid, index->pack, commit_idx_pos);
+		if (nth_packed_object_id(&oid, index->pack, commit_idx_pos) < 0)
+			return error("corrupt ewah bitmap: commit index %u out of range",
+				     (unsigned)commit_idx_pos);
 
 		bitmap = read_bitmap_1(index);
 		if (!bitmap)
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (16 preceding siblings ...)
  2020-11-17 21:47   ` [PATCH v2 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-11-24  6:07     ` Jonathan Tan
  2020-11-25  1:46     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
                     ` (7 subsequent siblings)
  25 siblings, 2 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
focus on selecting each commit that is maximal among commits with the
same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is not one of the
originally-selected commits, then assign a bit position to C (by
incrementing num_maximal) and set that bit on in commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C (both values
             are computed as sketched just below).
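
In the patch below, these are just two calls to the bitmap_diff_nonzero()
helper added earlier in the series:

	c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
	p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);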

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.

The final thing we need to do is to update the reverse edges for P.
These reverse edges represent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and this list is exactly one element). Add all of these items
to P's reverse_edges list. Be careful to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is due to
the existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     |/
     Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N is assigned its
own bit in the bitmask and hence becomes uniquely maximal for its
bitmask. Finally, Q also becomes maximal and thus we do not need
to walk its commits multiple times. The final bitmasks for these commits
are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:1111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any skew from reusing previously-generated bitmaps.

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we've not only recovered any time lost to switching to the
reverse-edge algorithm, but we come out ahead of "multi" in all cases.
Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 87 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 149 insertions(+), 10 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 369c76a87c..7b4fc0f304 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)
 
 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };
 
@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
 
-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}
 
 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 6bf68fee85..1691710ec1 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,11 +20,87 @@ has_any () {
 	grep -Ff "$1" "$2"
 }
 
+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \____________...____________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge master~2 -m "merge-left" &&
+
+	git checkout -b merge-right master~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-master master &&
+	git merge merge-left merge-right -m "octopus-master" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout master &&
+	git merge octo-master -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-master &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout master &&
+
 	bitmaptip=$(git rev-parse master) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -32,9 +108,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '
 
 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
@@ -356,7 +435,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (17 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  7:13     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
                     ` (6 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Jeff King <peff@peff.net>

The on-disk bitmap format has a flag to mark a bitmap to be "reused".
This is a rather curious feature, and works like this:

  - a run of pack-objects would decide to mark the last 80% of the
    bitmaps it generates with the reuse flag

  - the next time we generate bitmaps, we'd see those reuse flags from
    the last run, and mark those commits as special:

      - we'd be more likely to select those commits to get bitmaps in
        the new output

      - when generating the bitmap for a selected commit, we'd reuse the
        old bitmap as-is (rearranging the bits to match the new pack, of
        course)

However, neither of these behaviors particularly makes sense.

Just because a commit happened to be bitmapped last time does not make
it a good candidate for having a bitmap this time. In particular, we may
choose bitmaps based on how recent they are in history, or whether a ref
tip points to them, and those things will change. We're better off
reconsidering afresh which commits are good candidates.

Reusing the existing bitmap _is_ a reasonable thing to do to save
computation. But only reusing exact bitmaps is a weak form of this. If
we have an old bitmap for A and now want a new bitmap for its child, we
should be able to compute that only by looking at the trees and
commits that are new to the child. But this code would consider only
exact reuse (which is
perhaps why it was eager to select those commits in the first place).

Furthermore, the recent switch to the reverse-edge algorithm for
generating bitmaps dropped this optimization entirely (and yet still
performs better).

So let's do a few cleanups:

 - drop the whole "reusing bitmaps" phase of generating bitmaps. It's
   not helping anything, and is mostly unused code (or worse, code that
   is using CPU but not doing anything useful)

 - drop the use of the on-disk reuse flag to select commits to bitmap

 - stop setting the on-disk reuse flag in bitmaps we generate (since
   nothing respects it anymore)

We will keep a few innards of the reuse code, which will help us
implement a more capable version of the "reuse" optimization:

 - simplify rebuild_existing_bitmaps() into a function that only builds
   the mapping of bits between the old and new orders, but doesn't
   actually convert any bitmaps

 - make rebuild_bitmap() public; we'll call it lazily to convert bitmaps
   as we traverse (using the mapping created above); a toy sketch of this
   translation step follows below
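
For illustration, here is a minimal, self-contained toy of that mapping
idea in plain C (hypothetical names and fixed 64-bit masks; Git's real
code builds the mapping with create_bitmap_mapping() and translates
EWAH bitmaps with rebuild_bitmap()). It follows the same convention as
the real code: reposition[i] stores the new position plus one, with
zero meaning the old object has no position in the new pack.

  #include <stdint.h>
  #include <stdio.h>

  /*
   * Translate a bitmap expressed in the old object order into the new
   * order. reposition[i] holds (new position + 1) for old position i,
   * or 0 if that object is not present in the new pack. Returns -1 if
   * a set bit cannot be translated.
   */
  static int rebuild_toy_bitmap(const uint32_t *reposition,
                                uint64_t old, uint64_t *dest)
  {
      int i;

      for (i = 0; i < 64; i++) {
          if (!(old & ((uint64_t)1 << i)))
              continue;
          if (!reposition[i])
              return -1;
          *dest |= (uint64_t)1 << (reposition[i] - 1);
      }
      return 0;
  }

  int main(void)
  {
      /* old positions 0, 1, 2 map to new positions 2, 0, 1 (stored +1) */
      uint32_t reposition[64] = { 3, 1, 2 };
      uint64_t old = 0x5;  /* bits 0 and 2 set in the old order */
      uint64_t new_order = 0;

      if (!rebuild_toy_bitmap(reposition, old, &new_order))
          printf("translated: %#llx\n", (unsigned long long)new_order);
      return 0;
  }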

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 builtin/pack-objects.c |  1 -
 pack-bitmap-write.c    | 50 +++++-------------------------------------
 pack-bitmap.c          | 46 +++++---------------------------------
 pack-bitmap.h          |  6 ++++-
 4 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 5617c01b5a..2a00358f34 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1104,7 +1104,6 @@ static void write_pack_file(void)
 				stop_progress(&progress_state);
 
 				bitmap_writer_show_progress(progress);
-				bitmap_writer_reuse_bitmaps(&to_pack);
 				bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
 				bitmap_writer_build(&to_pack);
 				bitmap_writer_finish(written_list, nr_written,
diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 7b4fc0f304..1995f75818 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -30,7 +30,6 @@ struct bitmap_writer {
 	struct ewah_bitmap *tags;
 
 	kh_oid_map_t *bitmaps;
-	kh_oid_map_t *reused;
 	struct packing_data *to_pack;
 
 	struct bitmapped_commit *selected;
@@ -112,7 +111,7 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
  * Compute the actual bitmaps
  */
 
-static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
+static inline void push_bitmapped_commit(struct commit *commit)
 {
 	if (writer.selected_nr >= writer.selected_alloc) {
 		writer.selected_alloc = (writer.selected_alloc + 32) * 2;
@@ -120,7 +119,7 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	}
 
 	writer.selected[writer.selected_nr].commit = commit;
-	writer.selected[writer.selected_nr].bitmap = reused;
+	writer.selected[writer.selected_nr].bitmap = NULL;
 	writer.selected[writer.selected_nr].flags = 0;
 
 	writer.selected_nr++;
@@ -372,13 +371,6 @@ static void store_selected(struct bb_commit *ent, struct commit *commit)
 	khiter_t hash_pos;
 	int hash_ret;
 
-	/*
-	 * the "reuse bitmaps" phase may have stored something here, but
-	 * our new algorithm doesn't use it. Drop it.
-	 */
-	if (stored->bitmap)
-		ewah_free(stored->bitmap);
-
 	stored->bitmap = bitmap_to_ewah(ent->bitmap);
 
 	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
@@ -477,35 +469,6 @@ static int date_compare(const void *_a, const void *_b)
 	return (long)b->date - (long)a->date;
 }
 
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack)
-{
-	struct bitmap_index *bitmap_git;
-	if (!(bitmap_git = prepare_bitmap_git(to_pack->repo)))
-		return;
-
-	writer.reused = kh_init_oid_map();
-	rebuild_existing_bitmaps(bitmap_git, to_pack, writer.reused,
-				 writer.show_progress);
-	/*
-	 * NEEDSWORK: rebuild_existing_bitmaps() makes writer.reused reference
-	 * some bitmaps in bitmap_git, so we can't free the latter.
-	 */
-}
-
-static struct ewah_bitmap *find_reused_bitmap(const struct object_id *oid)
-{
-	khiter_t hash_pos;
-
-	if (!writer.reused)
-		return NULL;
-
-	hash_pos = kh_get_oid_map(writer.reused, *oid);
-	if (hash_pos >= kh_end(writer.reused))
-		return NULL;
-
-	return kh_value(writer.reused, hash_pos);
-}
-
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 				  unsigned int indexed_commits_nr,
 				  int max_bitmaps)
@@ -519,12 +482,11 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 	if (indexed_commits_nr < 100) {
 		for (i = 0; i < indexed_commits_nr; ++i)
-			push_bitmapped_commit(indexed_commits[i], NULL);
+			push_bitmapped_commit(indexed_commits[i]);
 		return;
 	}
 
 	for (;;) {
-		struct ewah_bitmap *reused_bitmap = NULL;
 		struct commit *chosen = NULL;
 
 		next = next_commit_index(i);
@@ -539,15 +501,13 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 		if (next == 0) {
 			chosen = indexed_commits[i];
-			reused_bitmap = find_reused_bitmap(&chosen->object.oid);
 		} else {
 			chosen = indexed_commits[i + next];
 
 			for (j = 0; j <= next; ++j) {
 				struct commit *cm = indexed_commits[i + j];
 
-				reused_bitmap = find_reused_bitmap(&cm->object.oid);
-				if (reused_bitmap || (cm->object.flags & NEEDS_BITMAP) != 0) {
+				if ((cm->object.flags & NEEDS_BITMAP) != 0) {
 					chosen = cm;
 					break;
 				}
@@ -557,7 +517,7 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 			}
 		}
 
-		push_bitmapped_commit(chosen, reused_bitmap);
+		push_bitmapped_commit(chosen);
 
 		i += next + 1;
 		display_progress(writer.progress, i);
diff --git a/pack-bitmap.c b/pack-bitmap.c
index 60c781d100..d1368b69bb 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1338,9 +1338,9 @@ void test_bitmap_walk(struct rev_info *revs)
 	free_bitmap_index(bitmap_git);
 }
 
-static int rebuild_bitmap(uint32_t *reposition,
-			  struct ewah_bitmap *source,
-			  struct bitmap *dest)
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest)
 {
 	uint32_t pos = 0;
 	struct ewah_iterator it;
@@ -1369,19 +1369,11 @@ static int rebuild_bitmap(uint32_t *reposition,
 	return 0;
 }
 
-int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
-			     struct packing_data *mapping,
-			     kh_oid_map_t *reused_bitmaps,
-			     int show_progress)
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping)
 {
 	uint32_t i, num_objects;
 	uint32_t *reposition;
-	struct bitmap *rebuild;
-	struct stored_bitmap *stored;
-	struct progress *progress = NULL;
-
-	khiter_t hash_pos;
-	int hash_ret;
 
 	num_objects = bitmap_git->pack->num_objects;
 	reposition = xcalloc(num_objects, sizeof(uint32_t));
@@ -1399,33 +1391,7 @@ int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
 			reposition[i] = oe_in_pack_pos(mapping, oe) + 1;
 	}
 
-	rebuild = bitmap_new();
-	i = 0;
-
-	if (show_progress)
-		progress = start_progress("Reusing bitmaps", 0);
-
-	kh_foreach_value(bitmap_git->bitmaps, stored, {
-		if (stored->flags & BITMAP_FLAG_REUSE) {
-			if (!rebuild_bitmap(reposition,
-					    lookup_stored_bitmap(stored),
-					    rebuild)) {
-				hash_pos = kh_put_oid_map(reused_bitmaps,
-							  stored->oid,
-							  &hash_ret);
-				kh_value(reused_bitmaps, hash_pos) =
-					bitmap_to_ewah(rebuild);
-			}
-			bitmap_reset(rebuild);
-			display_progress(progress, ++i);
-		}
-	});
-
-	stop_progress(&progress);
-
-	free(reposition);
-	bitmap_free(rebuild);
-	return 0;
+	return reposition;
 }
 
 void free_bitmap_index(struct bitmap_index *b)
diff --git a/pack-bitmap.h b/pack-bitmap.h
index 1203120c43..afa4115136 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -73,7 +73,11 @@ void bitmap_writer_set_checksum(unsigned char *sha1);
 void bitmap_writer_build_type_index(struct packing_data *to_pack,
 				    struct pack_idx_entry **index,
 				    uint32_t index_nr);
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack);
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping);
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()'
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (18 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  7:17     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
                     ` (5 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

A couple of callers within pack-bitmap.c duplicate the logic to look up
a given object id in the bitmaps khash. Factor this out into a new
function, 'bitmap_for_commit()', to reduce some code duplication.

Make this new function non-static, since it will be used in later
commits from outside of pack-bitmap.c.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 33 +++++++++++++++++++--------------
 pack-bitmap.h |  2 ++
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index d1368b69bb..5efb8af121 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -380,6 +380,16 @@ struct include_data {
 	struct bitmap *seen;
 };
 
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit)
+{
+	khiter_t hash_pos = kh_get_oid_map(bitmap_git->bitmaps,
+					   commit->object.oid);
+	if (hash_pos >= kh_end(bitmap_git->bitmaps))
+		return NULL;
+	return lookup_stored_bitmap(kh_value(bitmap_git->bitmaps, hash_pos));
+}
+
 static inline int bitmap_position_extended(struct bitmap_index *bitmap_git,
 					   const struct object_id *oid)
 {
@@ -465,10 +475,10 @@ static void show_commit(struct commit *commit, void *data)
 
 static int add_to_include_set(struct bitmap_index *bitmap_git,
 			      struct include_data *data,
-			      const struct object_id *oid,
+			      struct commit *commit,
 			      int bitmap_pos)
 {
-	khiter_t hash_pos;
+	struct ewah_bitmap *partial;
 
 	if (data->seen && bitmap_get(data->seen, bitmap_pos))
 		return 0;
@@ -476,10 +486,9 @@ static int add_to_include_set(struct bitmap_index *bitmap_git,
 	if (bitmap_get(data->base, bitmap_pos))
 		return 0;
 
-	hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
-	if (hash_pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
-		bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
+	partial = bitmap_for_commit(bitmap_git, commit);
+	if (partial) {
+		bitmap_or_ewah(data->base, partial);
 		return 0;
 	}
 
@@ -498,8 +507,7 @@ static int should_include(struct commit *commit, void *_data)
 						  (struct object *)commit,
 						  NULL);
 
-	if (!add_to_include_set(data->bitmap_git, data, &commit->object.oid,
-				bitmap_pos)) {
+	if (!add_to_include_set(data->bitmap_git, data, commit, bitmap_pos)) {
 		struct commit_list *parent = commit->parents;
 
 		while (parent) {
@@ -1282,10 +1290,10 @@ void test_bitmap_walk(struct rev_info *revs)
 {
 	struct object *root;
 	struct bitmap *result = NULL;
-	khiter_t pos;
 	size_t result_popcnt;
 	struct bitmap_test_data tdata;
 	struct bitmap_index *bitmap_git;
+	struct ewah_bitmap *bm;
 
 	if (!(bitmap_git = prepare_bitmap_git(revs->repo)))
 		die("failed to load bitmap indexes");
@@ -1297,12 +1305,9 @@ void test_bitmap_walk(struct rev_info *revs)
 		bitmap_git->version, bitmap_git->entry_count);
 
 	root = revs->pending.objects[0].item;
-	pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
-
-	if (pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-		struct ewah_bitmap *bm = lookup_stored_bitmap(st);
+	bm = bitmap_for_commit(bitmap_git, (struct commit *)root);
 
+	if (bm) {
 		fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
 			oid_to_hex(&root->oid), (int)bm->bit_size, ewah_checksum(bm));
 
diff --git a/pack-bitmap.h b/pack-bitmap.h
index afa4115136..25dfcf5615 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -78,6 +78,8 @@ uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
 int rebuild_bitmap(const uint32_t *reposition,
 		   struct ewah_bitmap *source,
 		   struct bitmap *dest);
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()'
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (19 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  7:20     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
                     ` (4 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

'find_objects()' currently needs to interact with the bitmaps khash
pretty closely. To make 'find_objects()' read a little more
straightforwardly, move some of the khash-level details into a new
function that describes what it does: 'add_commit_to_bitmap()'.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 5efb8af121..d88745fb02 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -521,6 +521,23 @@ static int should_include(struct commit *commit, void *_data)
 	return 1;
 }
 
+static int add_commit_to_bitmap(struct bitmap_index *bitmap_git,
+				struct bitmap **base,
+				struct commit *commit)
+{
+	struct ewah_bitmap *or_with = bitmap_for_commit(bitmap_git, commit);
+
+	if (!or_with)
+		return 0;
+
+	if (*base == NULL)
+		*base = ewah_to_bitmap(or_with);
+	else
+		bitmap_or_ewah(*base, or_with);
+
+	return 1;
+}
+
 static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 				   struct rev_info *revs,
 				   struct object_list *roots,
@@ -544,21 +561,10 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 		struct object *object = roots->item;
 		roots = roots->next;
 
-		if (object->type == OBJ_COMMIT) {
-			khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
-
-			if (pos < kh_end(bitmap_git->bitmaps)) {
-				struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-				struct ewah_bitmap *or_with = lookup_stored_bitmap(st);
-
-				if (base == NULL)
-					base = ewah_to_bitmap(or_with);
-				else
-					bitmap_or_ewah(base, or_with);
-
-				object->flags |= SEEN;
-				continue;
-			}
+		if (object->type == OBJ_COMMIT &&
+		    add_commit_to_bitmap(bitmap_git, &base, (struct commit *)object)) {
+			object->flags |= SEEN;
+			continue;
 		}
 
 		object_list_insert(object, &not_mapped);
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (20 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  7:28     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
                     ` (3 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

When constructing new bitmaps, we perform a commit and tree walk in
fill_bitmap_commit() and fill_bitmap_tree(). This walk would benefit
from using existing bitmaps when available. We must track the existing
bitmaps and translate them into the new object order, but this is
generally faster than parsing trees.

In fill_bitmap_commit(), we must reorder things somewhat. The priority
queue walks commits from newest-to-oldest, which means we correctly stop
walking when reaching a commit with a bitmap. However, if we walk trees
from top to bottom, then we might be parsing trees that are actually
part of a re-used bitmap. To avoid over-walking trees, add them to a
LIFO queue and walk them from bottom-to-top after exploring commits
completely.
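
As an illustration of this two-phase ordering, here is a tiny,
self-contained C sketch (hypothetical data and names; the real code
uses Git's prio_queue, object ids, and EWAH bitmaps): commits whose
bits are already known contribute their old bitmap and are not
expanded, every other commit is marked and its tree is deferred onto a
LIFO stack, and the stack is only drained once the commit phase is
done.

  #include <stdint.h>
  #include <stdio.h>

  #define NCOMMITS 4

  int main(void)
  {
      /*
       * Commits indexed newest-to-oldest. A nonzero entry means this
       * commit already had a bitmap in the old pack, so its bits can
       * be reused wholesale; here it covers commit 2 (bit 2) and its
       * tree (bit 10).
       */
      const uint64_t old_bits[NCOMMITS] = { 0, 0, 0x404, 0 };
      int tree_stack[NCOMMITS], tree_nr = 0;
      uint64_t result = 0;
      int i;

      /* phase 1: walk commits; reuse old bitmaps, defer tree walks */
      for (i = 0; i < NCOMMITS; i++) {
          if (old_bits[i]) {
              result |= old_bits[i];      /* no tree walk needed */
              continue;
          }
          result |= (uint64_t)1 << i;     /* mark the commit itself */
          tree_stack[tree_nr++] = i;      /* walk its tree later */
      }

      /* phase 2: drain the deferred trees (LIFO, bottom-to-top) */
      while (tree_nr > 0) {
          int t = tree_stack[--tree_nr];
          result |= (uint64_t)1 << (8 + t); /* stand-in for tree objects */
      }

      printf("result bits: %#llx\n", (unsigned long long)result);
      return 0;
  }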

On git.git, this reduces a second immediate bitmap computation from 2.0s
to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
network, we go from 227s to 198s.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 1995f75818..37204b691c 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -340,20 +340,39 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 
 static void fill_bitmap_commit(struct bb_commit *ent,
 			       struct commit *commit,
-			       struct prio_queue *queue)
+			       struct prio_queue *queue,
+			       struct prio_queue *tree_queue,
+			       struct bitmap_index *old_bitmap,
+			       const uint32_t *mapping)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
 	prio_queue_put(queue, commit);
 
 	while (queue->nr) {
 		struct commit_list *p;
 		struct commit *c = prio_queue_get(queue);
 
+		/*
+		 * If this commit has an old bitmap, then translate that
+		 * bitmap and add its bits to this one. No need to walk
+		 * parents or the tree for this commit.
+		 */
+		if (old_bitmap && mapping) {
+			struct ewah_bitmap *old;
+
+			old = bitmap_for_commit(old_bitmap, c);
+			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
+				continue;
+		}
+
+		/*
+		 * Mark ourselves and queue our tree. The commit
+		 * walk ensures we cover all parents.
+		 */
 		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
-		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+		prio_queue_put(tree_queue, get_commit_tree(c));
 
 		for (p = c->parents; p; p = p->next) {
 			int pos = find_object_pos(&p->item->object.oid);
@@ -363,6 +382,9 @@ static void fill_bitmap_commit(struct bb_commit *ent,
 			}
 		}
 	}
+
+	while (tree_queue->nr)
+		fill_bitmap_tree(ent->bitmap, prio_queue_get(tree_queue));
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -386,6 +408,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	size_t i;
 	int nr_stored = 0; /* for progress */
 	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
+	struct prio_queue tree_queue = { NULL };
+	struct bitmap_index *old_bitmap;
+	uint32_t *mapping;
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -395,6 +420,12 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
 		the_repository);
 
+	old_bitmap = prepare_bitmap_git(to_pack->repo);
+	if (old_bitmap)
+		mapping = create_bitmap_mapping(old_bitmap, to_pack);
+	else
+		mapping = NULL;
+
 	bitmap_builder_init(&bb, &writer);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
@@ -402,7 +433,8 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit, &queue);
+		fill_bitmap_commit(ent, commit, &queue, &tree_queue,
+				   old_bitmap, mapping);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -428,7 +460,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		ent->bitmap = NULL;
 	}
 	clear_prio_queue(&queue);
+	clear_prio_queue(&tree_queue);
 	bitmap_builder_clear(&bb);
+	free(mapping);
 
 	stop_progress(&writer.progress);
 
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (21 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  7:44     ` Jonathan Tan
  2020-11-17 21:48   ` [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
                     ` (2 subsequent siblings)
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

The previous commits improved the bitmap computation process for very
long, linear histories with many refs by removing quadratic growth in
how many objects were walked. The strategy of computing "intermediate
commits" using bitmasks for which refs can reach those commits
partitioned the poset of reachable objects so each part could be walked
exactly once. This was effective for linear histories.

However, there was a (significant) drawback: wide histories with many
refs had an explosion of memory costs to compute the commit bitmasks
during the exploration that discovers these intermediate commits. Since
these wide histories are unlikely to repeat walking objects, the cost
of walking objects multiple times was not significant before. But now, the
commit walk *before computing bitmaps* is incredibly expensive.

In an effort to discover a happy medium, this change reduces the walk
for intermediate commits to only the first-parent history. This focuses
the walk on how the histories converge, which still significantly
reduces repeated object walks. It is still possible to create
quadratic behavior in this version, but it is probably less likely in
realistic data shapes.

Here is some data taken on a fresh clone of the kernel:

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
    original |  64.044 |   83.241 |   2.088 |    2.194 |
  last patch |  44.811 |   27.828 |   2.289 |    2.358 |
  this patch | 100.641 |   35.560 |   2.152 |    2.224 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 14 +++++---------
 t/t5310-pack-bitmaps.sh | 27 ++++++++++++++-------------
 2 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 37204b691c..b0493d971d 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -199,7 +199,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i, num_maximal;
+	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -207,6 +207,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	reset_revision_walk();
 	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
 	revs.topo_order = 1;
+	revs.first_parent_only = 1;
 
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
@@ -221,13 +222,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		add_pending_object(&revs, &c->object, "");
 	}
-	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
-		struct commit_list *p;
+		struct commit_list *p = commit->parents;
 		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
@@ -235,16 +235,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 		c_ent = bb_data_at(&bb->data, commit);
 
 		if (c_ent->maximal) {
-			if (!c_ent->selected) {
-				bitmap_set(c_ent->commit_mask, num_maximal);
-				num_maximal++;
-			}
-
+			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
-		for (p = commit->parents; p; p = p->next) {
+		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 1691710ec1..a83e7a93fb 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -43,23 +43,24 @@ has_any () {
 #                                   \|
 #                                    * (base)
 #
+# We only push bits down the first-parent history, which
+# makes some of these commits unimportant!
+#
 # The important part for the maximal commit algorithm is how
 # the bitmasks are extended. Assuming starting bit positions
-# for master (bit 0) and other (bit 1), and some flexibility
-# in the order that merge bases are visited, the bitmasks at
-# the end should be:
+# for master (bit 0) and other (bit 1), the bitmasks at the
+# end should be:
 #
 #      master: 1       (maximal, selected)
 #       other: 01      (maximal, selected)
-# octo-master: 1
-#  octo-other: 01
-# merge-right: 111     (maximal)
-#        (l1): 111
-#        (r1): 111
-#  merge-left: 1101    (maximal)
-#        (l2): 11111   (maximal)
-#        (r2): 111101  (maximal)
-#      (base): 1111111 (maximal)
+#      (base): 11 (maximal)
+#
+# This complicated history was important for a previous
+# version of the walk that guarantees never walking a
+# commit multiple times. That goal might be important
+# again, so preserve this complicated case. For now, this
+# test will guarantee that the bitmaps are computed
+# correctly, even with the repeat calculations.
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 10 &&
@@ -113,7 +114,7 @@ test_expect_success 'full repack creates bitmaps' '
 	ls .git/objects/pack/ | grep bitmap >output &&
 	test_line_count = 1 output &&
 	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
-	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"107\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (22 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
@ 2020-11-17 21:48   ` Taylor Blau
  2020-12-02  8:08     ` Jonathan Tan
  2020-11-18 18:32   ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements SZEDER Gábor
  2020-11-20  6:34   ` Martin Ågren
  25 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-17 21:48 UTC (permalink / raw)
  To: git; +Cc: dstolee, gitster, peff, martin.agren, szeder.dev

From: Derrick Stolee <dstolee@microsoft.com>

If the old bitmap file contains a bitmap for a given commit, then that
commit does not need help from intermediate commits in its history to
compute its final bitmap. Eject that commit from the walk and insert it
as a maximal commit in the list of commits for computing bitmaps.

This helps the repeat bitmap computation task, even if the selected
commits shift drastically. This helps when a previously-bitmapped commit
exists in the first-parent history of a newly-selected commit. Since we
stop the walk at these commits and we use a first-parent walk, it is
harder to walk "around" these bitmapped commits. It's not impossible,
but we can greatly reduce the computation time for many selected
commits.

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
  last patch | 100.641 |   35.560 |   2.152 |    2.224 |
  this patch |  99.720 |   11.696 |   2.152 |    2.217 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index b0493d971d..3ac90ae410 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -195,7 +195,8 @@ struct bitmap_builder {
 };
 
 static void bitmap_builder_init(struct bitmap_builder *bb,
-				struct bitmap_writer *writer)
+				struct bitmap_writer *writer,
+				struct bitmap_index *old_bitmap)
 {
 	struct rev_info revs;
 	struct commit *commit;
@@ -234,12 +235,26 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		c_ent = bb_data_at(&bb->data, commit);
 
+		if (old_bitmap && bitmap_for_commit(old_bitmap, commit)) {
+			/*
+			 * This commit has an existing bitmap, so we can
+			 * get its bits immediately without an object
+			 * walk. There is no need to continue walking
+			 * beyond this commit.
+			 */
+			c_ent->maximal = 1;
+			p = NULL;
+		}
+
 		if (c_ent->maximal) {
 			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
+		if (!c_ent->commit_mask)
+			continue;
+
 		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
@@ -422,7 +437,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	else
 		mapping = NULL;
 
-	bitmap_builder_init(&bb, &writer);
+	bitmap_builder_init(&bb, &writer, old_bitmap);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
-- 
2.29.2.312.gabc4d358d8

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (23 preceding siblings ...)
  2020-11-17 21:48   ` [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
@ 2020-11-18 18:32   ` SZEDER Gábor
  2020-11-18 19:51     ` Taylor Blau
  2020-11-20  6:34   ` Martin Ågren
  25 siblings, 1 reply; 174+ messages in thread
From: SZEDER Gábor @ 2020-11-18 18:32 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, gitster, peff, martin.agren

On Tue, Nov 17, 2020 at 04:46:16PM -0500, Taylor Blau wrote:
>   - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
>     and Peff).

Fixing this is good, of course, but...

> 16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
>     @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
>       '
> 
>       test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
>     +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
>     + 	git rev-list --use-bitmap-index --count --all >expect &&
>     + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
>     + 	test_when_finished "rm -f $bitmap" &&
>     +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
>     ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
>     + 	mv -f $bitmap.tmp $bitmap &&
>     + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
>     + 	test_cmp expect actual &&

Please don't simply sneak in such a change without explaining it in
the commit message.


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-18 18:32   ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements SZEDER Gábor
@ 2020-11-18 19:51     ` Taylor Blau
  2020-11-22  2:17       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-18 19:51 UTC (permalink / raw)
  To: SZEDER Gábor; +Cc: Taylor Blau, git, dstolee, gitster, peff, martin.agren

On Wed, Nov 18, 2020 at 07:32:25PM +0100, SZEDER Gábor wrote:
> On Tue, Nov 17, 2020 at 04:46:16PM -0500, Taylor Blau wrote:
> >   - Harden the tests so that they pass under sha256-mode (thanks SZEDER,
> >     and Peff).
>
> Fixing this is good, of course, but...
>
> > 16:  86d77fd085 ! 18:  5262daa330 pack-bitmap-write: build fewer intermediate bitmaps
> >     @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
> >       '
> >
> >       test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
> >     +@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
> >     + 	git rev-list --use-bitmap-index --count --all >expect &&
> >     + 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
> >     + 	test_when_finished "rm -f $bitmap" &&
> >     +-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
> >     ++	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
> >     + 	mv -f $bitmap.tmp $bitmap &&
> >     + 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
> >     + 	test_cmp expect actual &&
>
> Please don't simply sneak in such a change without explaining it in
> the commit message.

Ah, I certainly didn't mean to go under the radar, so to speak ;-). From
my perspective, the final patch looks like it picked a magic number in
the same way as the original version of this patch did, so I didn't
think to add any more detail there.

I did try and highlight this a little bit in the patch just before the
one you're commenting on, though:

  - Adding more tests in this area. Testing these truncation situations
    is remarkably fragile to even subtle changes in the bitmap
    generation. So, the resulting tests are likely to be quite brittle.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
                     ` (24 preceding siblings ...)
  2020-11-18 18:32   ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements SZEDER Gábor
@ 2020-11-20  6:34   ` Martin Ågren
  2020-11-21 19:37     ` Junio C Hamano
  25 siblings, 1 reply; 174+ messages in thread
From: Martin Ågren @ 2020-11-20  6:34 UTC (permalink / raw)
  To: Taylor Blau
  Cc: Git Mailing List, Derrick Stolee, Junio C Hamano, Jeff King,
	SZEDER Gábor

On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> Not very much has changed since last time, but a range-diff is below
> nonetheless. The major changes are:
>
>   - Avoid an overflow when bounds checking in the second and third
>     patches (thanks, Martin, for noticing).

FWIW, the updates to patches 2 and 3 look exactly like what I was
expecting after the discussion on v1. I have nothing to add.


Martin

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-20  6:34   ` Martin Ågren
@ 2020-11-21 19:37     ` Junio C Hamano
  2020-11-21 20:11       ` Martin Ågren
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-21 19:37 UTC (permalink / raw)
  To: Martin Ågren
  Cc: Taylor Blau, Git Mailing List, Derrick Stolee, Jeff King,
	SZEDER Gábor

Martin Ågren <martin.agren@gmail.com> writes:

> On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
>> Not very much has changed since last time, but a range-diff is below
>> nonetheless. The major changes are:
>>
>>   - Avoid an overflow when bounds checking in the second and third
>>     patches (thanks, Martin, for noticing).
>
> FWIW, the updates to patches 2 and 3 look exactly like what I was
> expecting after the discussion on v1. I have nothing to add.

Thanks, both.  Shall we move the topic down to 'next'?


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-21 19:37     ` Junio C Hamano
@ 2020-11-21 20:11       ` Martin Ågren
  2020-11-22  2:31         ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Martin Ågren @ 2020-11-21 20:11 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Taylor Blau, Git Mailing List, Derrick Stolee, Jeff King,
	SZEDER Gábor

On Sat, 21 Nov 2020 at 20:37, Junio C Hamano <gitster@pobox.com> wrote:
>
> Martin Ågren <martin.agren@gmail.com> writes:
>
> > On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> >> Not very much has changed since last time, but a range-diff is below
> >> nonetheless. The major changes are:
> >>
> >>   - Avoid an overflow when bounds checking in the second and third
> >>     patches (thanks, Martin, for noticing).
> >
> > FWIW, the updates to patches 2 and 3 look exactly like what I was
> > expecting after the discussion on v1. I have nothing to add.
>
> Thanks, both.  Shall we move the topic down to 'next'?

I really only dug into those patches 2 and 3. I read the rest of the
patches of v1 and went "that makes sense", but that's about it. I
started looking at "pack-bitmap-write: build fewer intermediate bitmaps"
and went "this looks really cool -- I should try to understand this". :-)

There was SZEDER's comment on that last patch in v2, where future
readers of that patch will have to wonder why it does s/256/270/ in a
test. I agree with SZEDER that the change should be mentioned in the
commit message, even if it's just "unfortunately, we have some magicness
here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
hits the problem we want to test for".

Martin

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-18 19:51     ` Taylor Blau
@ 2020-11-22  2:17       ` Taylor Blau
  2020-11-22  2:28         ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-22  2:17 UTC (permalink / raw)
  To: SZEDER Gábor; +Cc: git, dstolee, gitster, peff, martin.agren

On Wed, Nov 18, 2020 at 02:51:59PM -0500, Taylor Blau wrote:
> On Wed, Nov 18, 2020 at 07:32:25PM +0100, SZEDER Gábor wrote:
> > Please don't simply sneak in such a change without explaining it in
> > the commit message.
>
> Ah, I certainly didn't mean to go under the radar, so to speak ;-). From
> my perspective, the final patch looks like it picked a magic number in
> the same way as the original version of this patch did, so I didn't
> think to add any more detail there.

Oops; when I wrote that to you, I had in my mind that this patch already
changed that line in the tests, so the rerolled patch was simply
changing it to something different.

But, this isn't a new test from this patch's perspective, so I'll make a
note of why this changed in a replacement.

Thanks.

Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-22  2:17       ` Taylor Blau
@ 2020-11-22  2:28         ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-22  2:28 UTC (permalink / raw)
  To: SZEDER Gábor; +Cc: git, dstolee, gitster, peff, martin.agren

On Sat, Nov 21, 2020 at 09:17:43PM -0500, Taylor Blau wrote:
> But, this isn't a new test from this patch's perspective, so I'll make a
> note of why this changed in a replacement.

Here's that patch: let's use it as a replacement when queueing. Thanks
again for noticing.

--- 8< ---

From: Derrick Stolee <dstolee@microsoft.com>
Subject: [PATCH] pack-bitmap-write: build fewer intermediate bitmaps

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
instead focus on selecting each commit that is maximal among commits
with the same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is not one of the
originally-selected commits, then assign a bit position to C (by
incrementing num_maximal) and set that bit on in commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C.

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.
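
For concreteness, here is a minimal, self-contained sketch of this
parent-update step using plain 64-bit masks (hypothetical struct and
names; the real code uses growable bitmaps, bitmap_diff_nonzero(), and
commit_list reverse edges):

  #include <stdint.h>
  #include <stdio.h>

  struct toy_commit {
      uint64_t commit_mask;
      int maximal;
      /* reverse edges omitted; they are adjusted as described above */
  };

  /* Push C's commit_mask bits to a parent P; update P's maximal flag. */
  static void push_to_parent(const struct toy_commit *c,
                             struct toy_commit *p)
  {
      int c_not_p = (c->commit_mask & ~p->commit_mask) != 0;
      int p_not_c = (p->commit_mask & ~c->commit_mask) != 0;

      if (!c_not_p)
          return;                 /* C contributes nothing new to P */

      p->commit_mask |= c->commit_mask;

      if (p_not_c)
          p->maximal = 1;     /* no child shares P's exact mask yet */
      else
          p->maximal = 0;     /* P's mask now equals C's mask */
  }

  int main(void)
  {
      struct toy_commit c = { 0x3, 1 };  /* bits 0 and 1; maximal */
      struct toy_commit p = { 0x1, 0 };  /* bit 0 only */

      push_to_parent(&c, &p);
      printf("p.commit_mask=%#llx p.maximal=%d\n",
             (unsigned long long)p.commit_mask, p.maximal);
      return 0;
  }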

The final thing we need to do is to update the reverse edges for P.
These reverse edges respresent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and this list is exactly one element). Add all of these items
to P's reverse_edges list. Be careful to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).
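
Here is a minimal, self-contained sketch of that fill phase
(hypothetical indices and fixed-size masks; the real code ORs
reachability bitmaps along the commit_list reverse edges built above):

  #include <stdint.h>
  #include <stdio.h>

  #define NC 3

  int main(void)
  {
      /*
       * Commits indexed in processing order (largest commit_mask
       * first). reverse_edges[i] lists which later commits receive
       * commit i's bits, terminated by -1.
       */
      const int reverse_edges[NC][NC + 1] = {
          { 1, 2, -1, -1 },   /* commit 0 seeds commits 1 and 2 */
          { 2, -1, -1, -1 },  /* commit 1 seeds commit 2 */
          { -1, -1, -1, -1 },
      };
      uint64_t bitmap[NC] = { 0 };
      int i, j;

      for (i = 0; i < NC; i++) {
          /* "walk" only the objects not already seeded into bitmap[i] */
          bitmap[i] |= (uint64_t)1 << i;

          /* push the finished bitmap along the reverse edges */
          for (j = 0; reverse_edges[i][j] >= 0; j++)
              bitmap[reverse_edges[i][j]] |= bitmap[i];
      }

      for (i = 0; i < NC; i++)
          printf("commit %d bitmap: %#llx\n", i,
                 (unsigned long long)bitmap[i]);
      return 0;
  }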

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is
the existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     |/
     Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N is assigned
its own bit in the bitmask and hence each becomes uniquely maximal for
its bitmask. Finally, Q also becomes maximal and thus we do not need
to walk its commits multiple times. The final bitmasks for these commits
are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:1111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any issues with keeping existing bitmaps to skew the
numbers.

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we have not only recovered the time lost to switching to the
reverse-edge algorithm, but we come out ahead of "multi" in all
cases. Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

MISCELLANEOUS
-------------

Unfortunately, this patch causes bitmaps to change in such a way that
t5310.66 no longer passes as-is in SHA-256 mode. That test asserts
that truncated .bitmap files fail gracefully. But whether we notice the
truncation depends on which bitmaps we try to look at, and exactly where
the truncation is.

Since this commit happens to rearrange the bytes in that exact region by
changing the way commits are selected (and thus shifting around other
bitmaps), move the truncation to a spot where it will be noticed in both
SHA-1 and SHA-256 mode.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 87 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 149 insertions(+), 10 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 369c76a87c..7b4fc0f304 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)

 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };

@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;

 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;

 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");

 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;

 		parse_commit_or_die(commit);

-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}

 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }

 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 6bf68fee85..1691710ec1 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,11 +20,87 @@ has_any () {
 	grep -Ff "$1" "$2"
 }

+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \____________...____________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge master~2 -m "merge-left" &&
+
+	git checkout -b merge-right master~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-master master &&
+	git merge merge-left merge-right -m "octopus-master" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout master &&
+	git merge octo-master -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-master &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout master &&
+
 	bitmaptip=$(git rev-parse master) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -32,9 +108,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '

 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '

 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
@@ -356,7 +435,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
--
2.29.2.312.gabc4d358d8


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-21 20:11       ` Martin Ågren
@ 2020-11-22  2:31         ` Taylor Blau
  2020-11-24  2:43           ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-22  2:31 UTC (permalink / raw)
  To: Martin Ågren
  Cc: Junio C Hamano, Taylor Blau, Git Mailing List, Derrick Stolee,
	Jeff King, SZEDER Gábor

On Sat, Nov 21, 2020 at 09:11:21PM +0100, Martin Ågren wrote:
> On Sat, 21 Nov 2020 at 20:37, Junio C Hamano <gitster@pobox.com> wrote:
> >
> > Martin Ågren <martin.agren@gmail.com> writes:
> >
> > > On Tue, 17 Nov 2020 at 22:46, Taylor Blau <me@ttaylorr.com> wrote:
> > >> Not very much has changed since last time, but a range-diff is below
> > >> nonetheless. The major changes are:
> > >>
> > >>   - Avoid an overflow when bounds checking in the second and third
> > >>     patches (thanks, Martin, for noticing).
> > >
> > > FWIW, the updates to patches 2 and 3 look exactly like what I was
> > > expecting after the discussion on v1. I have nothing to add.
> >
> > Thanks, both.  Shall we move the topic down to 'next'?
>
> I really only dug into those patches 2 and 3. I read the rest of the
> patches of v1 and went "that makes sense", but that's about it. I
> started looking at "pack-bitmap-write: build fewer intermediate bitmaps"
> and went "this looks really cool -- I should try to understand this". :-)
>
> There was SZEDER's comment on that last patch in v2, where future
> readers of that patch will have to wonder why it does s/256/270/ in a
> test. I agree with SZEDER that the change should be mentioned in the
> commit message, even if it's just "unfortunately, we have some magicness
> here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> hits the problem we want to test for".

Thanks for reviewing it, and noticing a couple of problems in the
earlier patches, too. If folks are happy with the replacement that I
sent [1], then I am too :-).

I don't think that the "big" patch generated a ton of review on the
list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
extensively when deploying it at GitHub (where it has been running since
late Summer).

> Martin

Thanks,
Taylor

[1]: https://lore.kernel.org/git/X7nMzzMfjm%2Fp9qfj@xnor.local/

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-11 19:41 ` [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
@ 2020-11-22 19:36   ` Junio C Hamano
  2020-11-23 16:22     ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-22 19:36 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, peff

Taylor Blau <me@ttaylorr.com> writes:

> When the buffer size is exactly 1, we fail to grow it properly, since
> the integer truncation means that 1 * 3 / 2 = 1. This can cause a bad
> write on the line below.

When the buffer_size is exactly (alloc_size - 1), we can fit the new
element at the last word in the buffer array, but we still grow.  Is
this because we anticipate that we would need to add more soon?

> Bandaid this by first padding the buffer by 16, and then growing it.
> This still allows old blocks to fit into new ones, but fixes the case
> where the block size equals 1.

Adding 16 unconditionally is not "to pad".  If somebody really wants
"to pad", a likely implementation would be that the size resulting
from some computation (e.g. multiplying by 1.5) is rounded up to a
multiple of some number, rather than rounding up the original number
before multiplying it by 1.5, so the use of that verb in the explanation
did not help me understand what is going on.

Having said that, I see you used the word "bandaid" to signal that
we shouldn't worry about this being optimal or even correct and we
should be happy as long as it is not wrong ;-), but is there any
reason behind this 16 (as opposed to picking, say, 8 or 31), or is
that pulled out of thin air?

I think this probably mimics what alloc_nr() computes for ALLOC_GROW().
I wonder why buffer_grow() cannot be built around ALLOC_GROW() instead?

Nothing in the code is wrong per-se, but just what I noticed while
re-reading the patch.

Thanks.

> Co-authored-by: Jeff King <peff@peff.net>
> Signed-off-by: Jeff King <peff@peff.net>
> Signed-off-by: Taylor Blau <me@ttaylorr.com>
> ---
>  ewah/ewah_bitmap.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
> index d59b1afe3d..3fae04ad00 100644
> --- a/ewah/ewah_bitmap.c
> +++ b/ewah/ewah_bitmap.c
> @@ -45,7 +45,7 @@ static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
>  static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
>  {
>  	if (self->buffer_size + 1 >= self->alloc_size)
> -		buffer_grow(self, self->buffer_size * 3 / 2);
> +		buffer_grow(self, (self->buffer_size + 16) * 3 / 2);
>  
>  	self->buffer[self->buffer_size++] = value;
>  }

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 07/23] ewah: make bitmap growth less aggressive
  2020-11-11 19:42 ` [PATCH 07/23] ewah: make bitmap growth less aggressive Taylor Blau
@ 2020-11-22 20:32   ` Junio C Hamano
  2020-11-23 16:49     ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-22 20:32 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, peff

Taylor Blau <me@ttaylorr.com> writes:

>  - a geometric increase in existing size; we'll switch to 3/2 instead of
>    2 here. That's less aggressive and may help avoid fragmenting memory
>    (N + 3N/2 > 9N/4, so old chunks can be reused as we scale up).

I am sure this is something obvious to bitmap folks, but where does
9N/4 come from (I get that the left-hand-side of the comparison is
the memory necessary to hold both the old and the new copy while
reallocating the words[] array)?

Thanks.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 08/23] ewah: implement bitmap_or()
  2020-11-11 19:43 ` [PATCH 08/23] ewah: implement bitmap_or() Taylor Blau
@ 2020-11-22 20:34   ` Junio C Hamano
  2020-11-23 16:52     ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-22 20:34 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, peff

Taylor Blau <me@ttaylorr.com> writes:

> From: Jeff King <peff@peff.net>
>
> We have a function to bitwise-OR an ewah into an uncompressed bitmap,
> but not to OR two uncompressed bitmaps. Let's add it.
>
> Interestingly, we have a public header declaration going back to
> e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
> function was never implemented.

So we have had decl, no impl, but it did not matter because there
was no user?  Presumably we will see a real user soon in the series
;-)


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-17 21:47   ` [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
@ 2020-11-22 21:50     ` Junio C Hamano
  2020-11-23 14:54       ` Derrick Stolee
  2020-11-25  1:14     ` Jonathan Tan
  1 sibling, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-22 21:50 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, peff, martin.agren, szeder.dev

Taylor Blau <me@ttaylorr.com> writes:

> From: Derrick Stolee <dstolee@microsoft.com>
>
> The fill_bitmap_commit() method assumes that every parent of the given
> commit is already part of the current bitmap. Instead of making that
> assumption, let's walk parents until we reach commits already part of
> the bitmap. Set the value for that parent immediately after querying to
> save time doing double calls to find_object_pos() and to avoid inserting
> the parent into the queue multiple times.

Is it because somebody found a case where the assumption does not
hold and the code with the assumption produces a wrong result?  Is
it because we can get a better result without making the assumption
the current code does?

In other words, can we explain why we are making the change in the
proposed log message?

> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
> Signed-off-by: Taylor Blau <me@ttaylorr.com>
> ---
>  pack-bitmap-write.c | 30 +++++++++++++++++++++++-------
>  1 file changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
> index d2d46ff5f4..361f3305a2 100644
> --- a/pack-bitmap-write.c
> +++ b/pack-bitmap-write.c
> @@ -12,6 +12,7 @@
>  #include "sha1-lookup.h"
>  #include "pack-objects.h"
>  #include "commit-reach.h"
> +#include "prio-queue.h"
>  
>  struct bitmapped_commit {
>  	struct commit *commit;
> @@ -279,17 +280,30 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
>  }
>  
>  static void fill_bitmap_commit(struct bb_commit *ent,
> -			       struct commit *commit)
> +			       struct commit *commit,
> +			       struct prio_queue *queue)
>  {
>  	if (!ent->bitmap)
>  		ent->bitmap = bitmap_new();
>  
> -	/*
> -	 * mark ourselves, but do not bother with parents; their values
> -	 * will already have been propagated to us
> -	 */
>  	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
> -	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
> +	prio_queue_put(queue, commit);
> +
> +	while (queue->nr) {
> +		struct commit_list *p;
> +		struct commit *c = prio_queue_get(queue);
> +
> +		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
> +		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
> +
> +		for (p = c->parents; p; p = p->next) {
> +			int pos = find_object_pos(&p->item->object.oid);
> +			if (!bitmap_get(ent->bitmap, pos)) {
> +				bitmap_set(ent->bitmap, pos);
> +				prio_queue_put(queue, p->item);
> +			}
> +		}
> +	}
>  }
>  
>  static void store_selected(struct bb_commit *ent, struct commit *commit)
> @@ -319,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  	struct bitmap_builder bb;
>  	size_t i;
>  	int nr_stored = 0; /* for progress */
> +	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
>  
>  	writer.bitmaps = kh_init_oid_map();
>  	writer.to_pack = to_pack;
> @@ -335,7 +350,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  		struct commit *child;
>  		int reused = 0;
> 
> -		fill_bitmap_commit(ent, commit);
> +		fill_bitmap_commit(ent, commit, &queue);
> 
>  		if (ent->selected) {
>  			store_selected(ent, commit);
> @@ -360,6 +375,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  			bitmap_free(ent->bitmap);
>  		ent->bitmap = NULL;
>  	}
> +	clear_prio_queue(&queue);
>  	bitmap_builder_clear(&bb);
>  
>  	stop_progress(&writer.progress);

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero()
  2020-11-17 21:47   ` [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero() Taylor Blau
@ 2020-11-22 22:01     ` Junio C Hamano
  2020-11-23 20:19       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-11-22 22:01 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, dstolee, peff, martin.agren, szeder.dev

Taylor Blau <me@ttaylorr.com> writes:

> From: Derrick Stolee <dstolee@microsoft.com>
>
> The bitmap_diff_nonzero() checks if the 'self' bitmap contains any bits
> that are not on in the 'other' bitmap.

In other words, it yields false if and only if self is a subset of
other?  I have to say that "diff_nonzero" is much less helpful than
words like "subset" or "superset" when I try to imagine what the
function would compute.

If this were widely used helper function, I may insist on flipping
the polarity and call it bitmap_is_subset(), but I dunno...

> Also, delete the declaration of bitmap_is_subset() as it is not used or
> implemented.

;-)

> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
> Signed-off-by: Taylor Blau <me@ttaylorr.com>
> ---
>  ewah/bitmap.c | 24 ++++++++++++++++++++++++
>  ewah/ewok.h   |  2 +-
>  2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/ewah/bitmap.c b/ewah/bitmap.c
> index eb7e2539be..e2ebeac0e5 100644
> --- a/ewah/bitmap.c
> +++ b/ewah/bitmap.c
> @@ -200,6 +200,30 @@ int bitmap_equals(struct bitmap *self, struct bitmap *other)
>  	return 1;
>  }
>  
> +int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other)
> +{
> +	struct bitmap *small;

It is not wrong per-se, but s/small/smaller/ would be more natural?

I actually think it would be easier to follow the logic to replace
this pointer with

	size_t common_size;

Then the code becomes

	if (self->word_alloc < other->word_alloc)
		common_size = self->word_alloc;
	else {
		common_size = other->word_alloc;
		for (i = common_size; i < self->word_alloc; i++)
			if (self->words[i])
				... self is *not* subset ...
	}

	for (i = 0; i < common_size; i++)
		if (self->words[i] & ~other->words[i])
			... self is *not* subset ...
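
Spelled out fully (keeping the bitmap_diff_nonzero() name from the
patch and the words/word_alloc fields used above), that would be
something like (untested):

	int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other)
	{
		size_t common_size, i;

		if (self->word_alloc < other->word_alloc)
			common_size = self->word_alloc;
		else {
			common_size = other->word_alloc;
			for (i = common_size; i < self->word_alloc; i++)
				if (self->words[i])
					return 1; /* self is not a subset */
		}

		for (i = 0; i < common_size; i++)
			if (self->words[i] & ~other->words[i])
				return 1; /* self is not a subset */

		return 0;
	}

which, as noted above, could just as well be exposed as a negated
bitmap_is_subset().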


> +	size_t i;
> +
> +	if (self->word_alloc < other->word_alloc) {
> +		small = self;
> +	} else {
> +		small = other;
> +
> +		for (i = other->word_alloc; i < self->word_alloc; i++) {
> +			if (self->words[i] != 0)
> +				return 1;
> +		}
> +	}
> +
> +	for (i = 0; i < small->word_alloc; i++) {
> +		if ((self->words[i] & ~other->words[i]))
> +			return 1;
> +	}
> +
> +	return 0;
> +}
> +
>  void bitmap_reset(struct bitmap *bitmap)
>  {
>  	memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
> diff --git a/ewah/ewok.h b/ewah/ewok.h
> index 1fc555e672..156c71d06d 100644
> --- a/ewah/ewok.h
> +++ b/ewah/ewok.h
> @@ -180,7 +180,7 @@ int bitmap_get(struct bitmap *self, size_t pos);
>  void bitmap_reset(struct bitmap *self);
>  void bitmap_free(struct bitmap *self);
>  int bitmap_equals(struct bitmap *self, struct bitmap *other);
> -int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
> +int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other);
>  
>  struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
>  struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-22 21:50     ` Junio C Hamano
@ 2020-11-23 14:54       ` Derrick Stolee
  0 siblings, 0 replies; 174+ messages in thread
From: Derrick Stolee @ 2020-11-23 14:54 UTC (permalink / raw)
  To: Junio C Hamano, Taylor Blau; +Cc: git, dstolee, peff, martin.agren, szeder.dev

On 11/22/2020 4:50 PM, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
> 
>> From: Derrick Stolee <dstolee@microsoft.com>
>>
>> The fill_bitmap_commit() method assumes that every parent of the given
>> commit is already part of the current bitmap. Instead of making that
>> assumption, let's walk parents until we reach commits already part of
>> the bitmap. Set the value for that parent immediately after querying to
>> save time doing double calls to find_object_pos() and to avoid inserting
>> the parent into the queue multiple times.
> 
> Is it because somebody found a case where the assumption does not
> hold and the code with the assumption produces a wrong result?  Is
> it because we can get a better result without making the assumption
> the current code does?

The algorithm from "pack-bitmap-write: reimplement bitmap writing"
that calls fill_bitmap_commit() satisfies this assumption, since it
computes a reachability bitmap for every commit during the reverse
walk. We will soon change that algorithm to "skip" commits, so we
need this step in fill_bitmap_commit() to walk forward to fill the
gaps.

> In other words, can we explain why we are making the change in the
> proposed log message?

I'm sure Taylor and I can work out a better wording to make this
more clear.

Thanks,
-Stolee



^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-22 19:36   ` Junio C Hamano
@ 2020-11-23 16:22     ` Taylor Blau
  2020-11-24  2:48       ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-23 16:22 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, dstolee, peff

On Sun, Nov 22, 2020 at 11:36:17AM -0800, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
>
> > When the buffer size is exactly 1, we fail to grow it properly, since
> > the integer truncation means that 1 * 3 / 2 = 1. This can cause a bad
> > write on the line below.
>
> When the buffer_size is exactly (alloc_size - 1), we can fit the new
> element at the last word in the buffer array, but we still grow.  Is
> this because we anticipate that we would need to add more soon?

Right; the check 'if (self->buffer_size + 1 >= self->alloc_size)' could
probably be written as a strict inequality, but that check dates back to
the original ewah implementation that Vicent added.

But, that is not quite the point of this patch: instead we want to stop
the integer math on that line from preventing us from growing the
buffer.

I think that this paragraph would be clarified by adding "and we need to
grow" to the end of "when the buffer size is exactly 1".

> > Bandaid this by first padding the buffer by 16, and then growing it.
> > This still allows old blocks to fit into new ones, but fixes the case
> > where the block size equals 1.
>
> Adding 16 unconditionally is not "to pad".  If somebody really wants
> "to pad", a likely implementation would be that the size resulting
> from some computation (e.g. multiplying by 1.5) is round up to a
> multiple of some number, than rounding up the original number before
> multiplying it by 1.5, so the use of that verb in the explanation
> did not help me understand what is going on.
>
> Having said that, I see you used the word "bandaid" to signal that
> we shouldn't worry about this being optimal or even correct and we
> should be happy as long as it is not wrong ;-), but is there any
> reason behind this 16 (as opposed to picking, say, 8 or 31), or is
> that pulled out of thin air?

Any phrase that more accurately states what's going on is fine by me,
but...

> I think this probably mimics what alloc_nr() computes for ALLOC_GROW().
> I wonder why buffer_grow() cannot be built around ALLOC_GROW() instead?

I think that we probably could just use ALLOC_GROW() as you suggest.
Funny enough, reading through GitHub's chat logs, apparently this is
something that Peff and I talked about. So, 16 probably came from
alloc_nr(), but we probably stopped short of realizing that we could
just use ALLOC_GROW as-is.

So, maybe something along the lines of:

diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
index 3fae04ad00..9effcc0877 100644
--- a/ewah/ewah_bitmap.c
+++ b/ewah/ewah_bitmap.c
@@ -19,6 +19,7 @@
 #include "git-compat-util.h"
 #include "ewok.h"
 #include "ewok_rlw.h"
+#include "cache.h"

 static inline size_t min_size(size_t a, size_t b)
 {
@@ -33,12 +34,7 @@ static inline size_t max_size(size_t a, size_t b)
 static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
 {
 	size_t rlw_offset = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
-
-	if (self->alloc_size >= new_size)
-		return;
-
-	self->alloc_size = new_size;
-	REALLOC_ARRAY(self->buffer, self->alloc_size);
+	ALLOC_GROW(self->buffer, new_size, self->alloc_size);
 	self->rlw = self->buffer + (rlw_offset / sizeof(eword_t));
 }

> Nothing in the code is wrong per-se, but just what I noticed while
> re-reading the patch.
>
> Thanks.

Thanks,
Taylor

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH 07/23] ewah: make bitmap growth less aggressive
  2020-11-22 20:32   ` Junio C Hamano
@ 2020-11-23 16:49     ` Taylor Blau
  2020-11-24  3:00       ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-23 16:49 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, dstolee, peff

On Sun, Nov 22, 2020 at 12:32:01PM -0800, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
>
> >  - a geometric increase in existing size; we'll switch to 3/2 instead of
> >    2 here. That's less aggressive and may help avoid fragmenting memory
> >    (N + 3N/2 > 9N/4, so old chunks can be reused as we scale up).
>
> I am sure this is something obvious to bitmap folks, but where does
> 9N/4 come from (I get that the left-hand-side of the comparison is
> the memory necessary to hold both the old and the new copy while
> reallocating the words[] array)?

I thought that I was in the group of "bitmap folks", but since it's not
obvious to me either, I guess I'll have to hand in my bitmap folks
membership card ;).

Peff: where does 9N/4 come from? On a similar note: we could certainly
use ALLOC_GROW here, too, but it would change the behavior slightly (by
using alloc_nr()'s "add-16-first" behavior). Maybe we should be using
it, but I'll defer to your judgement.

> Thanks.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 08/23] ewah: implement bitmap_or()
  2020-11-22 20:34   ` Junio C Hamano
@ 2020-11-23 16:52     ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-23 16:52 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, dstolee, peff

On Sun, Nov 22, 2020 at 12:34:03PM -0800, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
>
> > From: Jeff King <peff@peff.net>
> >
> > We have a function to bitwise-OR an ewah into an uncompressed bitmap,
> > but not to OR two uncompressed bitmaps. Let's add it.
> >
> > Interestingly, we have a public header declaration going back to
> > e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
> > function was never implemented.
>
> So we have had decl, no impl, but it did not matter because there
> was no user?  Presumably we will see a real user soon in the series
> ;-)

Indeed :-). I added a note to this patch's log message to indicate that
a new/first caller would be appearing in a couple of patches after this
one.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero()
  2020-11-22 22:01     ` Junio C Hamano
@ 2020-11-23 20:19       ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-23 20:19 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, dstolee, peff, martin.agren, szeder.dev

On Sun, Nov 22, 2020 at 02:01:21PM -0800, Junio C Hamano wrote:
> I actually think it would be easier to follow the logic to replace
> this pointer with
>
> 	size_t common_size;
>   [ ... ]

Yep, much clearer indeed. Thanks.

Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-22  2:31         ` Taylor Blau
@ 2020-11-24  2:43           ` Jeff King
  2020-12-01 23:04             ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-24  2:43 UTC (permalink / raw)
  To: Taylor Blau
  Cc: Martin Ågren, Junio C Hamano, Git Mailing List,
	Derrick Stolee, SZEDER Gábor

On Sat, Nov 21, 2020 at 09:31:50PM -0500, Taylor Blau wrote:

> > There was SZEDER's comment on that last patch in v2, where future
> > readers of that patch will have to wonder why it does s/256/270/ in a
> > test. I agree with SZEDER that the change should be mentioned in the
> > commit message, even if it's just "unfortunately, we have some magicness
> > here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> > hits the problem we want to test for".
> 
> Thanks for reviewing it, and noticing a couple of problems in the
> earlier patches, too. If folks are happy with the replacement that I
> sent [1], then I am too :-).
> 
> I don't think that the "big" patch generated a ton of review on the
> list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
> extensively when deploying it at GitHub (where it has been running since
> late Summer).

Hrm. I thought you were going to integrate the extra checks I suggested
for load_bitmap_entries_v1(). Which is looks like you did in patch 17.
After that, the s/256/270/ hack should not be necessary anymore (if it
is, then we should keep fixing more spots).

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-23 16:22     ` Taylor Blau
@ 2020-11-24  2:48       ` Jeff King
  2020-11-24  2:51         ` Jeff King
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-24  2:48 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Junio C Hamano, git, dstolee

On Mon, Nov 23, 2020 at 11:22:04AM -0500, Taylor Blau wrote:

> > I think this probably mimics what alloc_nr() computes for ALLOC_GROW().
> > I wonder why buffer_grow() cannot be built around ALLOC_GROW() instead?
> 
> I think that we probably could just use ALLOC_GROW() as you suggest.
> Funny enough, reading through GitHub's chat logs, apparently this is
> something that Peff and I talked about. So, 16 probably came from
> alloc_nr(), but we probably stopped short of realizing that we could
> just use ALLOC_GROW as-is.

That would probably be OK. It's a bit more aggressive, which could
matter if you have a large number of very small bitmaps. My original
goal of the "grow less aggressively" patch was to keep memory usage
down, knowing that I was going to be holding a lot of bitmaps in memory
at once. But even with micro-optimizations like this, it turned out to
be far too big in practice (and hence Stolee's work on top to reduce the
total number we hold at once).

> @@ -33,12 +34,7 @@ static inline size_t max_size(size_t a, size_t b)
>  static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
>  {
>  	size_t rlw_offset = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
> -
> -	if (self->alloc_size >= new_size)
> -		return;
> -
> -	self->alloc_size = new_size;
> -	REALLOC_ARRAY(self->buffer, self->alloc_size);
> +	ALLOC_GROW(self->buffer, new_size, self->alloc_size);
>  	self->rlw = self->buffer + (rlw_offset / sizeof(eword_t));
>  }

I think the real test would be measuring the peak heap of the series as
you posted it in v2, and this version replacing this patch (and the
"grow less aggressively" one) with ALLOC_GROW(). On something big, like
repacking all of the torvalds/linux or git/git fork networks.

If there's no appreciable difference, then definitely I think it's worth
the simplicity of reusing ALLOC_GROW().

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-24  2:48       ` Jeff King
@ 2020-11-24  2:51         ` Jeff King
  2020-12-01 22:56           ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-24  2:51 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Junio C Hamano, git, dstolee

On Mon, Nov 23, 2020 at 09:48:22PM -0500, Jeff King wrote:

> > I think that we probably could just use ALLOC_GROW() as you suggest.
> > Funny enough, reading through GitHub's chat logs, apparently this is
> > something that Peff and I talked about. So, 16 probably came from
> > alloc_nr(), but we probably stopped short of realizing that we could
> > just use ALLOC_GROW as-is.
> 
> That would probably be OK. It's a bit more aggressive, which could
> matter if you have a large number of very small bitmaps. My original
> goal of the "grow less aggressively" patch was to keep memory usage
> down, knowing that I was going to be holding a lot of bitmaps in memory
> at once. But even with micro-optimizations like this, it turned out to
> be far too big in practice (and hence Stolee's work on top to reduce the
> total number we hold at once).

Oh, sorry, I was mixing this patch up with patches 6 and 7, which touch
buffer_grow().  This is a totally separate spot, and this is a pure
bug-fix.

I think the main reason we didn't use ALLOC_GROW() here in the beginning
is that the ewah code was originally designed to be a separate library
(a port of the java ewah library), and didn't depend on Git code.

These days we pull in xmalloc, etc, so we should be fine to use
ALLOC_GROW().

Likewise...

> I think the real test would be measuring the peak heap of the series as
> you posted it in v2, and this version replacing this patch (and the
> "grow less aggressively" one) with ALLOC_GROW(). On something big, like
> repacking all of the torvalds/linux or git/git fork networks.
> 
> If there's no appreciable difference, then definitely I think it's worth
> the simplicity of reusing ALLOC_GROW().

All of this is nonsense (though it does apply to the question of using
ALLOC_GROW() in bitmap_grow(), via patch 7).

We have many fewer ewah bitmaps in memory at one time, so I don't think
it's worth micro-managing a few extra bytes of growth. Using
ALLOC_GROW() for this case would be fine.

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 07/23] ewah: make bitmap growth less aggressive
  2020-11-23 16:49     ` Taylor Blau
@ 2020-11-24  3:00       ` Jeff King
  2020-11-24 20:11         ` Junio C Hamano
  0 siblings, 1 reply; 174+ messages in thread
From: Jeff King @ 2020-11-24  3:00 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Junio C Hamano, git, dstolee

On Mon, Nov 23, 2020 at 11:49:49AM -0500, Taylor Blau wrote:

> On Sun, Nov 22, 2020 at 12:32:01PM -0800, Junio C Hamano wrote:
> > Taylor Blau <me@ttaylorr.com> writes:
> >
> > >  - a geometric increase in existing size; we'll switch to 3/2 instead of
> > >    2 here. That's less aggressive and may help avoid fragmenting memory
> > >    (N + 3N/2 > 9N/4, so old chunks can be reused as we scale up).
> >
> > I am sure this is something obvious to bitmap folks, but where does
> > 9N/4 come from (I get that the left-hand-side of the comparison is
> > the memory necessary to hold both the old and the new copy while
> > reallocating the words[] array)?
> 
> I thought that I was in the group of "bitmap folks", but since it's not
> obvious to me either, I guess I'll have to hand in my bitmap folks
> membership card ;).
> 
> Peff: where does 9N/4 come from?

It is not a bitmap thing at all. We are growing a buffer, so if we
continually multiply it by 3/2, then our sequence of sizes is:

  - before growth: N
  - after 1 growth: 3N/2
  - after 2 growths: 9N/4

Meaning we can fit the third chunk into the memory vacated by the first
two. Whereas with a factor of, say 2:

  - before growth: N
  - after 1 growth: 2N
  - after 2 growths: 4N

which does not fit, and fragments your memory.

There's a slight lie there, which is that you'll typically still hold
the growth G-1 while doing growth G (after all, that is where you will
copy the data from). But it still works out that you eventually get to
use old chunks. The breakeven point is actually the golden ratio, but a)
it's irrational and b) it probably makes sense to give some slop for
malloc chunk overhead. 1.6 would probably be fine, too, though. :)
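
To put numbers on that, here is a throwaway simulation (not from the
series; it ignores malloc overhead, assumes freed blocks are perfectly
reusable, and keeps the previous block alive until the copy is done):

	#include <stdio.h>

	static void simulate(double factor, int growths)
	{
		double freed = 0.0, size = 1.0; /* units of the initial buffer */
		int i;

		for (i = 1; i <= growths; i++) {
			double next = size * factor;

			printf("factor %.2f, growth %d: need %.3f, freed %.3f%s\n",
			       factor, i, next, freed,
			       next <= freed ? "  <- fits in old chunks" : "");
			freed += size;  /* old block released after the copy */
			size = next;
		}
	}

	int main(void)
	{
		simulate(1.5, 6);
		simulate(2.0, 6);
		return 0;
	}

With factor 1.5 the new allocation starts fitting into previously-freed
space from the 5th growth on; with factor 2 the freed space (2^(G-1) - 1
units) never catches up with the next allocation (2^G units).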

> On a similar note: we could certainly
> use ALLOC_GROW here, too, but it would change the behavior slightly (by
> using alloc_nr()'s "add-16-first" behavior). Maybe we should be using
> it, but I'll defer to your judgement.

That would be OK, modulo the measurement question I asked in the other
(wrong) part of the thread.

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-17 21:48   ` [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-11-24  6:07     ` Jonathan Tan
  2020-11-25  1:46     ` Jonathan Tan
  1 sibling, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-11-24  6:07 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

I think this is the "big patch" mentioned in IRC [1]? I'll review just
the commit message of this one first, then go back and review from patch
10 ("pack-bitmap-write: reimplement bitmap writing") up to this
one inclusive.

[1] https://colabti.org/irclogger/irclogger_log/git-devel?date=2020-11-23

> From: Derrick Stolee <dstolee@microsoft.com>
> 
> The bitmap_writer_build() method calls bitmap_builder_init() to
> construct a list of commits reachable from the selected commits along
> with a "reverse graph". This reverse graph has edges pointing from a
> commit to other commits that can reach that commit. After computing a
> reachability bitmap for a commit, the values in that bitmap are then
> copied to the reachability bitmaps across the edges in the reverse
> graph.
> 
> We can now relax the role of the reverse graph to greatly reduce the
> number of intermediate reachability bitmaps we compute during this
> reverse walk. The end result is that we walk objects the same number of
> times as before when constructing the reachability bitmaps, but we also
> spend much less time copying bits between bitmaps and have much lower
> memory pressure in the process.

OK - as I have seen in the previous patches, and as said here, the edges
of the graph were previously parent to immediate child, but I believe
that patch 12 ("pack-bitmap-write: fill bitmap with commit history")
makes it so that the edges don't need to be direct parent-child
relationships. That patch does not, however, decide what the vertices
and edges should thus be, so that ability could not be used. This patch
seems to do so, and thus makes use of that ability.

> The core idea is to select a set of "important" commits based on
> interactions among the sets of commits reachable from each selected commit.

Makes sense.

> The first technical concept is to create a new 'commit_mask' member in the
> bb_commit struct. Note that the selected commits are provided in an
> ordered array. The first thing to do is to mark the ith bit in the
> commit_mask for the ith selected commit. 

OK - so this commit_mask is like the bitmaps in Git. There is an array,
initially populated with the list of selected commits (which are
selected using another algorithm, which is a separate concern from this
patch set), and the ith bit in commit_mask corresponds to the ith
entry in that array.

From this, I assume that the commit_mask values in the selected
bb_commit structs will start with a nonzero value (written in binary,
having 1 bit set), and the other commit_mask values will start with
zero.

> As we walk the commit-graph, we
> copy the bits in a commit's commit_mask to its parents. At the end of
> the walk, the ith bit in the commit_mask for a commit C stores a boolean
> representing "The ith selected commit can reach C."

The walk is done in topological order - visiting children before
parents. Copying makes sense - if a commit can reach me, it can reach my
parents as well.

> As we walk, we will discover non-selected commits that are important. We
> will get into this later, but those important commits must also receive
> bit positions, growing the width of the bitmasks as we walk. At the true
> end of the walk, the ith bit means "the ith _important_ commit can reach
> C."

OK - so the initial array, initially populated with the list of selected
commits, can be grown and will include other important commits as well.
This is similar to the bitmap revwalk algorithm - the bitmaps in that
algorithm can be grown to include other objects as well.

> MAXIMAL COMMITS
> ---------------
> 
> We use a new 'maximal' bit in the bb_commit struct to represent whether
> a commit is important or not. The term "maximal" comes from the
> partially-ordered set of commits in the commit-graph where C >= P if P
> is a parent of C, and then extending the relationship transitively.

I had to look up what "maximal" means in a partial order. :-P An element
of a partially ordered set is "maximal" if there is no other element
that is "greater" than it. Here, all descendants are "greater" than
their ancestors.

> Instead of taking the maximal commits across the entire commit-graph, 

I was wondering about this :-)

> we
> instead focus on selecting each commit that is maximal among commits
> with the same bits on in their commit_mask. 

Ah, OK. Two commits will have the same commit_mask if the exact same set
of important commits can reach them.

> This definition is
> important, so let's consider an example.
> 
> Suppose we have three selected commits A, B, and C. These are assigned
> bitmasks 100, 010, and 001 to start. Each of these can be marked as
> maximal immediately because they each will be the uniquely maximal
> commit that contains their own bit. 

That is correct. To further elaborate on this explanation, let's say we
have a selected commit C (and since it is selected, the commit_mask in
each commit will have a bit corresponding to whether C can reach it).
Each other commit is either an ancestor, a descendant, or unrelated.

 - C cannot reach descendants.
 - C cannot reach unrelated commits.
 - C can reach all ancestors, but in the partial order, C compares
   "greater" to them anyway.

So every other commit cannot affect C's maximal status.

> Keep in mind that that these commits
> may have different bitmasks after the walk; for example, if B can reach
> C but A cannot, then the final bitmask for C is 011. Even in these
> cases, C would still be a maximal commit among all commits with the
> third bit on in their masks.

Yes.

> Now define sets X, Y, and Z to be the sets of commits reachable from A,
> B, and C, respectively. The intersections of these sets correspond to
> different bitmasks:
> 
>  * 100: X - (Y union Z)
>  * 010: Y - (X union Z)
>  * 001: Z - (X union Y)
>  * 110: (X intersect Y) - Z
>  * 101: (X intersect Z) - Y
>  * 011: (Y intersect Z) - X
>  * 111: X intersect Y intersect Z
> 
> This can be visualized with the following Hasse diagram:
> 
> 	100    010    001
>          | \  /   \  / |
>          |  \/     \/  |
>          |  /\     /\  |
>          | /  \   /  \ |
>         110    101    011
>           \___  |  ___/
>               \ | /
>                111
> 
> Some of these bitmasks may not be represented, depending on the topology
> of the commit-graph. In fact, we are counting on it, since the number of
> possible bitmasks is exponential in the number of selected commits, but
> is also limited by the total number of commits. In practice, very few
> bitmasks are possible because most commits converge on a common "trunk"
> in the commit history.

This section wasn't very useful to me - but I would appreciate it if
others chimed in to say it was useful to them.

> With this three-bit example, we wish to find commits that are maximal
> for each bitmask. How can we identify this as we are walking?

OK - now we come to the algorithm. I presume the algorithm doesn't only
find commits that are maximal for each bitmask, but also updates the
list of important commits (thus increasing the size of the bitmask)?
Reading below, I see that the answer to my question is yes. Ah...it
wasn't clear to me that the purpose of finding the maximal commits is
also to add to the list of important commits, but perhaps it will be
obvious to other readers.

I'll work through the algorithm using the butterfly example below,
reproduced here:

>    I    J
>    |\  /|
>    | \/ |
>    | /\ |
>    |/  \|
>    M    N
>     \  /
>      |/
>      Q

I was going to suggest that we suppose that there are no selected
commits, but it looks like the algorithm would optimize itself out
(meaning that it won't make any commit maximal - which makes sense, I
guess). The example below had I and J as selected commits (which I know
because the commit_mask values for I and J are "0b10" and "0b01"
respectively), so let's go with that.

> As we walk, we visit a commit C. Since we are walking the commits in
> topo-order, we know that C is visited after all of its children are
> visited. Thus, when we get C from the revision walk we inspect the
> 'maximal' property of its bb_data and use that to determine if C is truly
> important. Its commit_mask is also nearly final. 

OK - when a commit is visited, we would already know its "maximal"
status because when we visited its children, they already modified its
"maximal" (because we update a commit's parents when we visit it -
details about this are to follow, presumably).

> If C is not one of the
> originally-selected commits, then assign a bit position to C (by
> incrementing num_maximal) and set that bit on in commit_mask. See
> "MULTIPLE MAXIMAL COMMITS" below for more detail on this.

I presume we only assign a bit position to C if it is "maximal"?

> Now that the commit C is known to be maximal or not, consider each
> parent P of C. Compute two new values:
> 
>  * c_not_p : true if and only if the commit_mask for C contains a bit
>              that is not contained in the commit_mask for P.
> 
>  * p_not_c : true if and only if the commit_mask for P contains a bit
>              that is not contained in the commit_mask for C.

OK, let's try this with I. I'll use the same <commit
letter>:<commit_mask in little-endian order> notation as the one the
commit author uses below to indicate the commit_mask of a commit. We
have I:10 with 2 parents M:00 and N:00, so for both parents, c_not_p is
true and p_not_c is false.

> If c_not_p is false, then P already has all of the bits that C would
> provide to its commit_mask. In this case, move on to other parents as C
> has nothing to contribute to P's state that was not already provided by
> other children of P.

To emphasize, we "move on" regardless of what p_not_c is. In our
example, this is not true in I's case, so let's read on.

After the analysis below, I see why we can "move on".

> We continue with the case that c_not_p is true. This means there are
> bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
> to add those bits.

OK. So we have I:10 (unchanged), M:10, N:10. Which as I said above,
makes sense, since if a commit can reach I, it can reach M and N.

> If p_not_c is also true, then set the maximal bit for P to one. This means
> that if no other commit has P as a parent, then P is definitely maximal.
> This is because no child had the same bitmask. It is important to think
> about the maximal bit for P at this point as a temporary state: "P is
> maximal based on current information."
> 
> In contrast, if p_not_c is false, then set the maximal bit for P to
> zero.

At first it was confusing that (1) the last maximal bit is used when
there is no guarantee that the children are iterated in any particular
order, and (2) the maximal bit is never updated when c_not_p is false.
So I was thinking of a counterexample, but couldn't think of one. So
this algorithm looks correct so far.

For (2), I think of it this way:

    C1 C2 C3
      \ |/
        P

Let's say that C1 has a commit_mask, and C2 and C3 have the exact same
commit_mask. P's maximal bit will be the exact same in the following
situation:

    C1 C2
      \ |
        P

So skipping over C3 is correct. And C3 must be skipped because the
algorithm as written is not idempotent (during the C2 iteration,
commit_mask of P is updated).

For (1), I think of it as follows. The only one that counts is the very
last calculation (which you can see from the algorithm - the maximal bit
is constantly being overridden). During the very last iteration, does P
have any information that Cx does not? If yes, P has a commit_mask that
is unique w.r.t. all its children (since it combines unique information
from its other children that Cx does not, plus some other unique
information that Cx has and its other children do not) and is
therefore maximal. If not, all the information P got is also known by
Cx, so it is definitely not maximal (Cx shares the commit_mask with P,
and is "greater" than P).

> Further, clear all reverse_edges for P since any edges that were
> previously assigned to P are no longer important. P will gain all
> reverse edges based on C.

(I read ahead to see what the reverse edges are.) Indeed, C has all the
information.

> The final thing we need to do is to update the reverse edges for P.
> These reverse edges respresent "which closest maximal commits
> contributed bits to my commit_mask?" Since C contributed bits to P's
> commit_mask in this case, C must add to the reverse edges of P.
> 
> If C is maximal, then C is a 'closest' maximal commit that contributed
> bits to P. Add C to P's reverse_edges list.
> 
> Otherwise, C has a list of maximal commits that contributed bits to its
> bitmask (and this list is exactly one element). Add all of these items
> to P's reverse_edges list. Be careful to ignore duplicates here.

OK - the other end of reverse edges are always to maximal commits.
Propagation in this way (if C is not maximal) makes sense.

> After inspecting all parents P for a commit C, we can clear the
> commit_mask for C. This reduces the memory load to be limited to the
> "width" of the commit graph.

Optimization - OK.

I might as well finish working through the example. Let's start from the
beginning.

    I    J I:10()M J:01()M
    |\  /|
    | \/ |
    | /\ |
    |/  \|
    M    N
     \  /
      |/
      Q

(Brackets are the destinations of reverse edges. M is maximal, ~M is not
maximal.)

Iteration starts with I. I is maximal, but it is also one of the
selected commits, so we need not do anything else. Look at its parents,
starting with M. c_not_p is true, so copy the bits over. p_not_c is
false, so the maximal bit of M is zero, clear all its reverse edges
(no-op in this case, since it has none), and update its reverse edges:
I is maximal so it will just be I. The procedure for N is exactly the same.
So we have:

    I    J I:10()M J:01()M
    |\  /|
    | \/ |
    | /\ |
    |/  \|
    M    N M:10(I)~M N:10(I)~M
     \  /
      |/
      Q

Now onto J. J is maximal, but it is also one of the selected commits, so
we need not do anything else. Look at its parent M. c_not_p is true, so
copy the bits over. p_not_c is true this time. So set the maximal bit to
true. (Indeed, M has information that is independent of J - the stuff
that it got from I.) We need not clear any reverse edges, but must still
update them: J is maximal so we add J. The procedure for N is exactly
the same. So we have:

    I    J I:10()M J:01()M
    |\  /|
    | \/ |
    | /\ |
    |/  \|
    M    N M:11(I,J)M N:11(I,J)M
     \  /
      |/
      Q

Let's go to M. M is maximal, and it is not one of the selected commits,
so widen the commit_mask and set the corresponding bit on M. Look at its
only parent Q. c_not_p is true, so copy the bits over. p_not_c is false,
so the maximal bit of Q is zero, and update its reverse edges as usual.

    I    J I:10()M J:01()M
    |\  /|
    | \/ |
    | /\ |
    |/  \|
    M    N M:111(I,J)M N:11(I,J)M
     \  /
      |/
      Q Q:111(M)~M

Now, N. N is maximal, and it is not one of the selected commits, so
widen the commit_mask and set the corresponding bit on N. Look at its
only parent Q. c_not_p is true; copy bits; but this time p_not_c is true,
so set the maximal bit to true. Don't clear reverse edges; N is maximal
so add it.

    I    J I:10()M J:01()M
    |\  /|
    | \/ |
    | /\ |
    |/  \|
    M    N M:111(I,J)M N:1101(I,J)M
     \  /
      |/
      Q Q:1111(M,N)M

Checking the answer below, it looks like the commit author and I agree.

> Consider our ABC/XYZ example from earlier and let's inspect the state of
> the commits for an interesting bitmask, say 011. Suppose that D is the
> only maximal commit with this bitmask (in the first three bits). All
> other commits with bitmask 011 have D as the only entry in their
> reverse_edges list. D's reverse_edges list contains B and C.

Yes, this makes sense.

Let me write about "D's reverse_edges list contains B and C" first: The
fact that D has a bitmask of 011 shows no important commits can reach it
other than B or C. Any intermediate commits between B and D would not be
maximal (because B is "greater" than such an intermediate commit, and we
already established that no other important commit can reach D, and
therefore no other important commit can reach any of the commits on the
path between B and D). Same analysis applies for C instead of B. So the
propagated reverse edges would just be B and C.

Now "all other commits with bitmask 011 have D as the only entry in
their reverse_edges list". Since D is the only maximal commit with 011,
any other commit that has 011 (1) must have got it from D, and (2)
cannot be reached by any other important commit. A similar analysis as
in the previous paragraph shows why the reverse_edges list for all these
commits would only contain D.

> COMPUTING REACHABILITY BITMAPS
> ------------------------------
> 
> Now that we have our definition, let's zoom out and consider what
> happens with our new reverse graph when computing reachability bitmaps.
> We walk the reverse graph in reverse-topo-order, so we visit commits
> with largest commit_masks first. After we compute the reachability
> bitmap for a commit C, we push the bits in that bitmap to each commit D
> in the reverse edge list for C. 

That makes sense - the reachability bitmap for D is the reachability
bitmap for C + the reachability bitmap of (D-C). Here we have the first
operand.

> Then, when we finally visit D we already
> have the bits for everything reachable from maximal commits that D can
> reach and we only need to walk the objects in the set-difference.

Walking the objects in the set-difference gives us the second operand.
Makes sense.

> In our ABC/XYZ example, when we finally walk for the commit A we only
> need to walk commits with bitmask equal to A's bitmask. If that bitmask
> is 100, then we are only walking commits in X - (Y union Z) because the
> bitmap already contains the bits for objects reachable from (X intersect
> Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
> for the maximal commits with bitmasks 110 and 101).

This is probably correct, but I've lost track of what X, Y, and Z are so
I'll just skip this paragraph.

> The behavior is intended to walk each commit (and the trees that commit
> introduces) at most once while allocating and copying fewer reachability
> bitmaps. 

Yes, I can see how this algorithm causes this behavior.

> There is one caveat: what happens when there are multiple
> maximal commits with the same bitmask, with respect to the initial set
> of selected commits?

I'm going to be lazy here and ask, is this possible? As described
below in "MULTIPLE MAXIMAL COMMITS", if a non-selected commit turns out
to be maximal, it will have its very own bit, and thus become the
"progenitor" of all commits with that bit set. (This is not a true
progenitor, because this bit propagates from the children to the
parents and not the other way round - unlike a gene.)

> MULTIPLE MAXIMAL COMMITS
> ------------------------

I think I discussed everything here earlier, so [skip].

> PERFORMANCE MEASUREMENTS
> ------------------------

Numbers look good. [skip]

As discussed above, I'll not review the code this round. [skip code]

Phew...this took longer than expected. I'll see if I can review the rest
of the patches tomorrow.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 07/23] ewah: make bitmap growth less aggressive
  2020-11-24  3:00       ` Jeff King
@ 2020-11-24 20:11         ` Junio C Hamano
  0 siblings, 0 replies; 174+ messages in thread
From: Junio C Hamano @ 2020-11-24 20:11 UTC (permalink / raw)
  To: Jeff King; +Cc: Taylor Blau, git, dstolee

Jeff King <peff@peff.net> writes:

> On Mon, Nov 23, 2020 at 11:49:49AM -0500, Taylor Blau wrote:
>
>> On Sun, Nov 22, 2020 at 12:32:01PM -0800, Junio C Hamano wrote:
>> > Taylor Blau <me@ttaylorr.com> writes:
>> >
>> > >  - a geometric increase in existing size; we'll switch to 3/2 instead of
>> > >    2 here. That's less aggressive and may help avoid fragmenting memory
>> > >    (N + 3N/2 > 9N/4, so old chunks can be reused as we scale up).
>> >
>> > I am sure this is something obvious to bitmap folks, but where does
>> > 9N/4 come from (I get that the left-hand-side of the comparison is
>> > the memory necessary to hold both the old and the new copy while
>> > reallocating the words[] array)?
>> 
>> I thought that I was in the group of "bitmap folks", but since it's not
>> obvious to me either, I guess I'll have to hand in my bitmap folks
>> membership card ;).
>> 
>> Peff: where does 9N/4 come from?
>
> it is not a bitmap thing at all. We are growing a buffer, so if we
> continually multiply it by 3/2, then our sequence of sizes is:
>
>   - before growth: N
>   - after 1 growth: 3N/2
>   - after 2 growths: 9N/4

AH, OK.  I feel stupid not to have thought of this myself X-<.

Thanks.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing
  2020-11-17 21:47   ` [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
@ 2020-11-25  0:53     ` Jonathan Tan
  2020-11-28 17:27       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-11-25  0:53 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

[snip commit message]

Thanks for the very clear commit message explaining the new algorithm.

> +struct bb_commit {
> +	struct commit_list *children;
> +	struct bitmap *bitmap;
> +	unsigned selected:1;
> +	unsigned idx; /* within selected array */
> +};
> +
> +define_commit_slab(bb_data, struct bb_commit);
> +
> +struct bitmap_builder {
> +	struct bb_data data;
> +	struct commit **commits;
> +	size_t commits_nr, commits_alloc;
> +};
> +
> +static void bitmap_builder_init(struct bitmap_builder *bb,
> +				struct bitmap_writer *writer)
>  {
>  	struct rev_info revs;
> +	struct commit *commit;
> +	unsigned int i;
> +
> +	memset(bb, 0, sizeof(*bb));
> +	init_bb_data(&bb->data);
> +
> +	reset_revision_walk();
> +	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
> +	revs.topo_order = 1;
> +
> +	for (i = 0; i < writer->selected_nr; i++) {
> +		struct commit *c = writer->selected[i].commit;
> +		struct bb_commit *ent = bb_data_at(&bb->data, c);
> +		ent->selected = 1;
> +		ent->idx = i;
> +		add_pending_object(&revs, &c->object, "");
> +	}
> +
> +	if (prepare_revision_walk(&revs))
> +		die("revision walk setup failed");
> +
> +	while ((commit = get_revision(&revs))) {
> +		struct commit_list *p;
> +
> +		parse_commit_or_die(commit);
> +
> +		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
> +		bb->commits[bb->commits_nr++] = commit;
> +
> +		for (p = commit->parents; p; p = p->next) {
> +			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
> +			commit_list_insert(commit, &ent->children);
> +		}
> +	}
> +}

Looks straightforward.

> +static void bitmap_builder_clear(struct bitmap_builder *bb)
> +{
> +	clear_bb_data(&bb->data);
> +	free(bb->commits);
> +	bb->commits_nr = bb->commits_alloc = 0;
> +}

I was wondering why the commit list and the children in struct bb_commit
weren't cleared, but that's because they are cleared during the
algorithm. So this is fine.

> +static void fill_bitmap_tree(struct bitmap *bitmap,
> +			     struct tree *tree)
> +{
> +	uint32_t pos;
> +	struct tree_desc desc;
> +	struct name_entry entry;
> +
> +	/*
> +	 * If our bit is already set, then there is nothing to do. Both this
> +	 * tree and all of its children will be set.
> +	 */
> +	pos = find_object_pos(&tree->object.oid);
> +	if (bitmap_get(bitmap, pos))
> +		return;
> +	bitmap_set(bitmap, pos);
> +
> +	if (parse_tree(tree) < 0)
> +		die("unable to load tree object %s",
> +		    oid_to_hex(&tree->object.oid));
> +	init_tree_desc(&desc, tree->buffer, tree->size);
> +
> +	while (tree_entry(&desc, &entry)) {
> +		switch (object_type(entry.mode)) {
> +		case OBJ_TREE:
> +			fill_bitmap_tree(bitmap,
> +					 lookup_tree(the_repository, &entry.oid));
> +			break;
> +		case OBJ_BLOB:
> +			bitmap_set(bitmap, find_object_pos(&entry.oid));
> +			break;
> +		default:
> +			/* Gitlink, etc; not reachable */
> +			break;
> +		}
> +	}
> +
> +	free_tree_buffer(tree);
> +}

Looks straightforward.

> +static void fill_bitmap_commit(struct bb_commit *ent,
> +			       struct commit *commit)
> +{
> +	if (!ent->bitmap)
> +		ent->bitmap = bitmap_new();
> +
> +	/*
> +	 * mark ourselves, but do not bother with parents; their values
> +	 * will already have been propagated to us
> +	 */
> +	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
> +	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
> +}

OK - when filling the bitmap for a commit, we only set the specific bit
for the commit itself, and all the bits for the commit's tree and the
tree's descendants. This is consistent with the explanation of the
algorithm in the commit message.

> +static void store_selected(struct bb_commit *ent, struct commit *commit)
> +{
> +	struct bitmapped_commit *stored = &writer.selected[ent->idx];
> +	khiter_t hash_pos;
> +	int hash_ret;
> +
> +	/*
> +	 * the "reuse bitmaps" phase may have stored something here, but
> +	 * our new algorithm doesn't use it. Drop it.
> +	 */
> +	if (stored->bitmap)
> +		ewah_free(stored->bitmap);

I tried to figure out how the "reuse bitmaps" phase stores things in
this field, but that led me down a rabbit hole that I didn't pursue.
But anyway, the new bitmap is correctly generated, so clearing the old
bitmap is safe (except, possibly, wasting time, but I see that in a
subsequent patch, existing bitmaps will be reused in a new way).

> +
> +	stored->bitmap = bitmap_to_ewah(ent->bitmap);
> +
> +	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
> +	if (hash_ret == 0)
> +		die("Duplicate entry when writing index: %s",
> +		    oid_to_hex(&commit->object.oid));
> +	kh_value(writer.bitmaps, hash_pos) = stored;
> +}
> +
> +void bitmap_writer_build(struct packing_data *to_pack)
> +{
> +	struct bitmap_builder bb;
> +	size_t i;
> +	int nr_stored = 0; /* for progress */
>  
>  	writer.bitmaps = kh_init_oid_map();
>  	writer.to_pack = to_pack;
>  
>  	if (writer.show_progress)
>  		writer.progress = start_progress("Building bitmaps", writer.selected_nr);
> +	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
> +		the_repository);
> +
> +	bitmap_builder_init(&bb, &writer);
> +	for (i = bb.commits_nr; i > 0; i--) {
> +		struct commit *commit = bb.commits[i-1];
> +		struct bb_commit *ent = bb_data_at(&bb.data, commit);
> +		struct commit *child;
> +
> +		fill_bitmap_commit(ent, commit);
> +
> +		if (ent->selected) {
> +			store_selected(ent, commit);
> +			nr_stored++;
> +			display_progress(writer.progress, nr_stored);
> +		}
> +
> +		while ((child = pop_commit(&ent->children))) {

Here the children (specifically, the struct commit_list) are freed (one
by one).

> +			struct bb_commit *child_ent =
> +				bb_data_at(&bb.data, child);
> +
> +			if (child_ent->bitmap)
> +				bitmap_or(child_ent->bitmap, ent->bitmap);
> +			else
> +				child_ent->bitmap = bitmap_dup(ent->bitmap);
> +		}
> +		bitmap_free(ent->bitmap);
> +		ent->bitmap = NULL;

Here the bitmap is freed.

>  	}
> +	bitmap_builder_clear(&bb);
>  
>  	stop_progress(&writer.progress);
>  
>  	compute_xor_offsets();

Thanks - overall this looks straightforward.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps
  2020-11-17 21:47   ` [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
@ 2020-11-25  1:00     ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-11-25  1:00 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
> index f2f0b6b2c2..d2d46ff5f4 100644
> --- a/pack-bitmap-write.c
> +++ b/pack-bitmap-write.c
> @@ -333,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  		struct commit *commit = bb.commits[i-1];
>  		struct bb_commit *ent = bb_data_at(&bb.data, commit);
>  		struct commit *child;
> +		int reused = 0;
>  
>  		fill_bitmap_commit(ent, commit);
>  

Before the following chunk is the start of a loop:

"while ((child = pop_commit(&ent->children))) {"

> @@ -348,10 +349,15 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  
>  			if (child_ent->bitmap)
>  				bitmap_or(child_ent->bitmap, ent->bitmap);
> -			else
> +			else if (reused)
>  				child_ent->bitmap = bitmap_dup(ent->bitmap);
> +			else {
> +				child_ent->bitmap = ent->bitmap;
> +				reused = 1;
> +			}
>  		}
> -		bitmap_free(ent->bitmap);
> +		if (!reused)
> +			bitmap_free(ent->bitmap);
>  		ent->bitmap = NULL;
>  	}
>  	bitmap_builder_clear(&bb);
> -- 
> 2.29.2.312.gabc4d358d8

So this is clearly correct.

I asked myself if this optimization is worth it when we're going to
drastically reduce the number of steps in patch 18, but I think that the
answer is still yes.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-17 21:47   ` [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
  2020-11-22 21:50     ` Junio C Hamano
@ 2020-11-25  1:14     ` Jonathan Tan
  2020-11-28 17:21       ` Taylor Blau
  1 sibling, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-11-25  1:14 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> From: Derrick Stolee <dstolee@microsoft.com>
> 
> The fill_bitmap_commit() method assumes that every parent of the given
> commit is already part of the current bitmap. Instead of making that
> assumption, let's walk parents until we reach commits already part of
> the bitmap. Set the value for that parent immediately after querying to
> save time doing double calls to find_object_pos() and to avoid inserting
> the parent into the queue multiple times.

I see from the later patches that this has no effect until the part
where we can skip commits, but as Junio says [1], it's worth mentioning
it here. Maybe something like:

  The fill_bitmap_commit() method assumes that every parent of the given
  commit is already part of the current bitmap. This is currently
  correct, but a subsequent patch will change the nature of the edges of
  the graph from parent-child to ancestor-descendant. In preparation for
  that, let's walk parents...

>  static void fill_bitmap_commit(struct bb_commit *ent,
> -			       struct commit *commit)
> +			       struct commit *commit,
> +			       struct prio_queue *queue)

As far as I can see, this function expects an empty queue and always
ends with the queue empty, and the only reason why we don't instantiate
a new queue every time is so that we can save on the internal array
allocation/deallocation. Maybe add a comment to that effect.

>  {
>  	if (!ent->bitmap)
>  		ent->bitmap = bitmap_new();
>  
> -	/*
> -	 * mark ourselves, but do not bother with parents; their values
> -	 * will already have been propagated to us
> -	 */
>  	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
> -	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
> +	prio_queue_put(queue, commit);
> +
> +	while (queue->nr) {
> +		struct commit_list *p;
> +		struct commit *c = prio_queue_get(queue);
> +
> +		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
> +		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
> +
> +		for (p = c->parents; p; p = p->next) {
> +			int pos = find_object_pos(&p->item->object.oid);
> +			if (!bitmap_get(ent->bitmap, pos)) {
> +				bitmap_set(ent->bitmap, pos);
> +				prio_queue_put(queue, p->item);
> +			}
> +		}
> +	}
>  }

[snip rest of code]

Everything else makes sense.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 15/24] t5310: add branch-based checks
  2020-11-17 21:47   ` [PATCH v2 15/24] t5310: add branch-based checks Taylor Blau
@ 2020-11-25  1:17     ` Jonathan Tan
  2020-11-28 17:30       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-11-25  1:17 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> From: Derrick Stolee <dstolee@microsoft.com>
> 
> The current rev-list tests that check the bitmap data only work on HEAD
> instead of multiple branches. Expand the test cases to handle both
> 'master' and 'other' branches.

[snip]

> +rev_list_tests () {
> +	state=$1
> +
> +	for branch in "master" "other"

Would it be worth including "HEAD" in the list here? It would make more
sense with the commit message saying "extend" instead of "replace".

[snip rest]

The 2 prior patches (13 and 14) look good to me.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-17 21:48   ` [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
  2020-11-24  6:07     ` Jonathan Tan
@ 2020-11-25  1:46     ` Jonathan Tan
  2020-11-30 18:41       ` Derrick Stolee
  1 sibling, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-11-25  1:46 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

I've reviewed the commit message already [1]; now let's look at the code.

[1] https://lore.kernel.org/git/20201124060738.762751-1-jonathantanmy@google.com/

>  struct bb_commit {
>  	struct commit_list *reverse_edges;
> +	struct bitmap *commit_mask;
>  	struct bitmap *bitmap;
> -	unsigned selected:1;
> +	unsigned selected:1,
> +		 maximal:1;

The code itself probably should contain comments about the algorithm,
but I'm not sure of the best way to do it. (E.g. I would say that
"maximal" should be "When iteration in bitmap_builder_init() reaches
this bb_commit, this is true iff none of its descendants has or will
ever have the exact same commit_mask" - but then when do we explain why
the commit_mask matters?)
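
Rendered as code, the suggestion might look something like this (wording
is only a sketch):

struct bb_commit {
        struct commit_list *reverse_edges;
        struct bitmap *commit_mask;
        struct bitmap *bitmap;
        unsigned selected:1,
                 /*
                  * When the walk in bitmap_builder_init() reaches this
                  * commit, 'maximal' is 1 iff none of its descendants
                  * has, or will ever have, the exact same commit_mask.
                  */
                 maximal:1;
        unsigned idx; /* within selected array */
};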

>  	unsigned idx; /* within selected array */
>  };
>  
> @@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
>  {
>  	struct rev_info revs;
>  	struct commit *commit;
> -	unsigned int i;
> +	unsigned int i, num_maximal;
>  
>  	memset(bb, 0, sizeof(*bb));
>  	init_bb_data(&bb->data);
> @@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
>  	for (i = 0; i < writer->selected_nr; i++) {
>  		struct commit *c = writer->selected[i].commit;
>  		struct bb_commit *ent = bb_data_at(&bb->data, c);
> +
>  		ent->selected = 1;
> +		ent->maximal = 1;
>  		ent->idx = i;
> +
> +		ent->commit_mask = bitmap_new();
> +		bitmap_set(ent->commit_mask, i);
> +
>  		add_pending_object(&revs, &c->object, "");
>  	}
> +	num_maximal = writer->selected_nr;
>  
>  	if (prepare_revision_walk(&revs))
>  		die("revision walk setup failed");
>  
>  	while ((commit = get_revision(&revs))) {
>  		struct commit_list *p;
> +		struct bb_commit *c_ent;
>  
>  		parse_commit_or_die(commit);
>  
> -		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
> -		bb->commits[bb->commits_nr++] = commit;
> +		c_ent = bb_data_at(&bb->data, commit);
> +
> +		if (c_ent->maximal) {
> +			if (!c_ent->selected) {
> +				bitmap_set(c_ent->commit_mask, num_maximal);
> +				num_maximal++;
> +			}
> +
> +			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
> +			bb->commits[bb->commits_nr++] = commit;

So the order of bit assignments in the commit_mask and the order of
commits in bb->commits are not the same. In the commit_mask, bits are
first assigned for selected commits and then the rest for commits we
discover to be maximal. But in bb->commits, the order follows the
topologically-sorted iteration. This is fine, but might be worth a
comment (to add to the already big comment burden...)

> +		}
>  
>  		for (p = commit->parents; p; p = p->next) {
> -			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
> -			commit_list_insert(commit, &ent->reverse_edges);
> +			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
> +			int c_not_p, p_not_c;
> +
> +			if (!p_ent->commit_mask) {
> +				p_ent->commit_mask = bitmap_new();
> +				c_not_p = 1;
> +				p_not_c = 0;
> +			} else {
> +				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
> +				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
> +			}
> +
> +			if (!c_not_p)
> +				continue;
> +
> +			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
> +
> +			if (p_not_c)
> +				p_ent->maximal = 1;
> +			else {
> +				p_ent->maximal = 0;
> +				free_commit_list(p_ent->reverse_edges);
> +				p_ent->reverse_edges = NULL;
> +			}
> +
> +			if (c_ent->maximal) {
> +				commit_list_insert(commit, &p_ent->reverse_edges);
> +			} else {
> +				struct commit_list *cc = c_ent->reverse_edges;
> +
> +				for (; cc; cc = cc->next) {
> +					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
> +						commit_list_insert(cc->item, &p_ent->reverse_edges);
> +				}
> +			}
>  		}
> +
> +		bitmap_free(c_ent->commit_mask);
> +		c_ent->commit_mask = NULL;
>  	}
> +
> +	trace2_data_intmax("pack-bitmap-write", the_repository,
> +			   "num_selected_commits", writer->selected_nr);
> +	trace2_data_intmax("pack-bitmap-write", the_repository,
> +			   "num_maximal_commits", num_maximal);
>  }

The rest looks like a faithful implementation of the algorithm.

Now let's look at the tests.

> +# To ensure the logic for "maximal commits" is exercised, make
> +# the repository a bit more complicated.
> +#
> +#    other                         master
> +#      *                             *
> +# (99 commits)                  (99 commits)
> +#      *                             *
> +#      |\                           /|
> +#      | * octo-other  octo-master * |
> +#      |/|\_________  ____________/|\|
> +#      | \          \/  __________/  |
> +#      |  | ________/\ /             |
> +#      *  |/          * merge-right  *
> +#      | _|__________/ \____________ |
> +#      |/ |                         \|
> +# (l1) *  * merge-left               * (r1)
> +#      | / \________________________ |
> +#      |/                           \|
> +# (l2) *                             * (r2)
> +#       \____________...____________ |

What does the ... represent? If a certain number of commits, it would be
clearer to write that there.

> +#                                   \|
> +#                                    * (base)

OK - some of the crosses are unclear, but from the bitmask given below,
I know where the lines should go.

> +#
> +# The important part for the maximal commit algorithm is how
> +# the bitmasks are extended. Assuming starting bit positions
> +# for master (bit 0) and other (bit 1), and some flexibility
> +# in the order that merge bases are visited, the bitmasks at
> +# the end should be:
> +#
> +#      master: 1       (maximal, selected)
> +#       other: 01      (maximal, selected)
> +# octo-master: 1
> +#  octo-other: 01
> +# merge-right: 111     (maximal)
> +#        (l1): 111
> +#        (r1): 111
> +#  merge-left: 1101    (maximal)
> +#        (l2): 11111   (maximal)
> +#        (r2): 111101  (maximal)
> +#      (base): 1111111 (maximal)

This makes sense. (l1) and (r1) are not maximal because everything that
can reach merge-right can also reach them.

[snip]

>  test_expect_success 'full repack creates bitmaps' '
> -	git repack -ad &&
> +	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
> +		git repack -ad &&
>  	ls .git/objects/pack/ | grep bitmap >output &&
> -	test_line_count = 1 output
> +	test_line_count = 1 output &&
> +	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
> +	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace

From the diagram and bit masks, I see that the important number for
"maximal" is 7. Could this test be run twice - one without the crosses
and one with, and we can verify that the difference between the maximal
commits is 7? As it is, this 111 number is hard to verify.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-25  1:14     ` Jonathan Tan
@ 2020-11-28 17:21       ` Taylor Blau
  2020-11-30 18:33         ` Jonathan Tan
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-11-28 17:21 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Tue, Nov 24, 2020 at 05:14:09PM -0800, Jonathan Tan wrote:
> > From: Derrick Stolee <dstolee@microsoft.com>
> >
> > The fill_bitmap_commit() method assumes that every parent of the given
> > commit is already part of the current bitmap. Instead of making that
> > assumption, let's walk parents until we reach commits already part of
> > the bitmap. Set the value for that parent immediately after querying to
> > save time doing double calls to find_object_pos() and to avoid inserting
> > the parent into the queue multiple times.
>
> I see from the later patches that this has no effect until the part
> where we can skip commits, but as Junio says [1], it's worth mentioning
> it here. Maybe something like:
>
>   The fill_bitmap_commit() method assumes that every parent of the given
>   commit is already part of the current bitmap. This is currently
>   correct, but a subsequent patch will change the nature of the edges of
>   the graph from parent-child to ancestor-descendant. In preparation for
>   that, let's walk parents...

Thanks. Stolee and I worked a little on revising this last week, and I
think that the current log message is more along these lines. Here's
what we wrote:

    pack-bitmap-write: fill bitmap with commit history

    The current implementation of bitmap_writer_build() creates a
    reachability bitmap for every walked commit. After computing a bitmap
    for a commit, those bits are pushed to an in-progress bitmap for its
    children.

    fill_bitmap_commit() assumes the bits corresponding to objects
    reachable from the parents of a commit are already set. This means that
    when visiting a new commit, we only have to walk the objects reachable
    between it and any of its parents.

    A future change to bitmap_writer_build() will relax this condition so
    not all parents have their reachable objects set in the in-progress
    bitmap. Prepare for that by having 'fill_bitmap_commit()' walk
    parents until reaching commits whose bits are already set. Then, walk
    the trees for these commits as well.

    This has no functional change with the current implementation of
    bitmap_writer_build().

> >  static void fill_bitmap_commit(struct bb_commit *ent,
> > -			       struct commit *commit)
> > +			       struct commit *commit,
> > +			       struct prio_queue *queue)
>
> As far as I can see, this function expects an empty queue and always
> ends with the queue empty, and the only reason why we don't instantiate
> a new queue every time is so that we can save on the internal array
> allocation/deallocation. Maybe add a comment to that effect.

Sure. Would you find a comment like that more helpful above
'fill_bitmap_commit()', or above the declaration of 'queue' (in
'bitmap_writer_build()') itself?

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing
  2020-11-25  0:53     ` Jonathan Tan
@ 2020-11-28 17:27       ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-28 17:27 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Tue, Nov 24, 2020 at 04:53:44PM -0800, Jonathan Tan wrote:
> > +static void store_selected(struct bb_commit *ent, struct commit *commit)
> > +{
> > +	struct bitmapped_commit *stored = &writer.selected[ent->idx];
> > +	khiter_t hash_pos;
> > +	int hash_ret;
> > +
> > +	/*
> > +	 * the "reuse bitmaps" phase may have stored something here, but
> > +	 * our new algorithm doesn't use it. Drop it.
> > +	 */
> > +	if (stored->bitmap)
> > +		ewah_free(stored->bitmap);
>
> I tried to figure out how the "reuse bitmaps" phase stores things in
> this field, but that led me down a rabbit hole that I didn't pursue.
> But anyway, the new bitmap is correctly generated, so clearing the old
> bitmap is safe (except, possibly, wasting time, but I see that in a
> subsequent patch, existing bitmaps will be reused in a new way).

Yep. The existing reuse mechanism is thrown out in a later patch.

> Thanks - overall this looks straightforward.

Thanks for taking a look! Very much appreciated.

Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 15/24] t5310: add branch-based checks
  2020-11-25  1:17     ` Jonathan Tan
@ 2020-11-28 17:30       ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-11-28 17:30 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Tue, Nov 24, 2020 at 05:17:48PM -0800, Jonathan Tan wrote:
> > From: Derrick Stolee <dstolee@microsoft.com>
> >
> > The current rev-list tests that check the bitmap data only work on HEAD
> > instead of multiple branches. Expand the test cases to handle both
> > 'master' and 'other' branches.
>
> [snip]
>
> > +rev_list_tests () {
> > +	state=$1
> > +
> > +	for branch in "master" "other"
>
> Would it be worth including "HEAD" in the list here? It would make more
> > sense with the commit message saying "extend" instead of "replace".

I don't think so. These tests were run with master checked out, so
testing HEAD would be no different than including "master" in the list
of branches here, so the commit message is correct that this is an
extension, too.

> [snip rest]
>
> The 2 prior patches (13 and 14) look good to me.

Thanks for taking a look.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-11-28 17:21       ` Taylor Blau
@ 2020-11-30 18:33         ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-11-30 18:33 UTC (permalink / raw)
  To: me; +Cc: jonathantanmy, git, dstolee, gitster, peff, martin.agren, szeder.dev

> Thanks. Stolee and I worked a little on revising this last week, and I
> think that the current log message is more along these lines. Here's
> what we wrote:
> 
>     pack-bitmap-write: fill bitmap with commit history
> 
>     The current implementation of bitmap_writer_build() creates a
>     reachability bitmap for every walked commit. After computing a bitmap
>     for a commit, those bits are pushed to an in-progress bitmap for its
>     children.
> 
>     fill_bitmap_commit() assumes the bits corresponding to objects
>     reachable from the parents of a commit are already set. This means that
>     when visiting a new commit, we only have to walk the objects reachable
>     between it and any of its parents.
> 
>     A future change to bitmap_writer_build() will relax this condition so
>     not all parents have their reachable objects set in the in-progress

I would write "not all parents have their bits set" instead, but this is
fine too.

>     bitmap. Prepare for that by having 'fill_bitmap_commit()' walk
>     parents until reaching commits whose bits are already set. Then, walk
>     the trees for these commits as well.
> 
>     This has no functional change with the current implementation of
>     bitmap_writer_build().
> 
> > >  static void fill_bitmap_commit(struct bb_commit *ent,
> > > -			       struct commit *commit)
> > > +			       struct commit *commit,
> > > +			       struct prio_queue *queue)
> >
> > As far as I can see, this function expects an empty queue and always
> > ends with the queue empty, and the only reason why we don't instantiate
> > a new queue every time is so that we can save on the internal array
> > allocation/deallocation. Maybe add a comment to that effect.
> 
> Sure. Would you find a comment like that more helpful above
> 'fill_bitmap_commit()', or above the declaration of 'queue' (in
> 'bitmap_writer_build()') itself?

I think it's better with fill_bitmap_commit(), as it's the one in
control of how "queue" will be used.
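
Something short would do, e.g. (suggested wording only):

/*
 * "queue" must be empty both on entry and on return; it is passed in
 * only so that repeated calls can reuse its internal array instead of
 * reallocating it for every commit.
 */
static void fill_bitmap_commit(struct bb_commit *ent,
                               struct commit *commit,
                               struct prio_queue *queue)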

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-11-25  1:46     ` Jonathan Tan
@ 2020-11-30 18:41       ` Derrick Stolee
  0 siblings, 0 replies; 174+ messages in thread
From: Derrick Stolee @ 2020-11-30 18:41 UTC (permalink / raw)
  To: Jonathan Tan, me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev

On 11/24/2020 8:46 PM, Jonathan Tan wrote:
> I've reviewed the commit message already [1]; now let's look at the code.
> 
> [1] https://lore.kernel.org/git/20201124060738.762751-1-jonathantanmy@google.com/
> 
>>  struct bb_commit {
>>  	struct commit_list *reverse_edges;
>> +	struct bitmap *commit_mask;
>>  	struct bitmap *bitmap;
>> -	unsigned selected:1;
>> +	unsigned selected:1,
>> +		 maximal:1;
> 
> The code itself probably should contain comments about the algorithm,
> but I'm not sure of the best way to do it. (E.g. I would say that
> "maximal" should be "When iteration in bitmap_builder_init() reaches
> this bb_commit, this is true iff none of its descendants has or will
> ever have the exact same commit_mask" - but then when do we explain why
> the commit_mask matters?)

Comments are tricky, as they are likely to go stale. In fact,
this algorithm changes dramatically later in this very series!

How much can we expect a reader to dig through the commit history to
discover the lengthy commit message? The message explains the algorithm
and its many subtleties; comments, by contrast, risk being too specific
to this initial version.

At this point, "maximal" is a property that doesn't mean much
without inspecting the places where we set or check that bit.

>> +		if (c_ent->maximal) {
>> +			if (!c_ent->selected) {
>> +				bitmap_set(c_ent->commit_mask, num_maximal);
>> +				num_maximal++;
>> +			}
>> +
>> +			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
>> +			bb->commits[bb->commits_nr++] = commit;
> 
> So the order of bit assignments in the commit_mask and the order of
> commits in bb->commits are not the same. In the commit_mask, bits are
> first assigned for selected commits and then the rest for commits we
> discover to be maximal. But in bb->commits, the order follows the
> topologically-sorted iteration. This is fine, but might be worth a
> comment (to add to the already big comment burden...)

This one I waffled on a bit. There _is_ a difference between the
bitmask order and this list order. This isn't a problem as long as
no one attempts to use the bitmasks to navigate into this list.

Further, it is entirely possible that the tests never demonstrate
that difference, so pointing it out may keep a future developer from
making that mistake. Could be good to add this comment over the
bb->commits definition:

	/*
	 * 'commits' stores the list of maximal commits, in visited
	 * order. This can be different than the bitmask order for
	 * the selected commits.
	 */

>> +# To ensure the logic for "maximal commits" is exercised, make
>> +# the repository a bit more complicated.
>> +#
>> +#    other                         master
>> +#      *                             *
>> +# (99 commits)                  (99 commits)
>> +#      *                             *
>> +#      |\                           /|
>> +#      | * octo-other  octo-master * |
>> +#      |/|\_________  ____________/|\|
>> +#      | \          \/  __________/  |
>> +#      |  | ________/\ /             |
>> +#      *  |/          * merge-right  *
>> +#      | _|__________/ \____________ |
>> +#      |/ |                         \|
>> +# (l1) *  * merge-left               * (r1)
>> +#      | / \________________________ |
>> +#      |/                           \|
>> +# (l2) *                             * (r2)
>> +#       \____________...____________ |
> 
> What does the ... represent? If a certain number of commits, it would be
> clearer to write that there.

The ... are unnecessary and should be ___ instead. Thanks.
 
>> +#                                   \|
>> +#                                    * (base)
> 
> OK - some of the crosses are unclear, but from the bitmask given below,
> I know where the lines should go.
> 
>> +#
>> +# The important part for the maximal commit algorithm is how
>> +# the bitmasks are extended. Assuming starting bit positions
>> +# for master (bit 0) and other (bit 1), and some flexibility
>> +# in the order that merge bases are visited, the bitmasks at
>> +# the end should be:
>> +#
>> +#      master: 1       (maximal, selected)
>> +#       other: 01      (maximal, selected)
>> +# octo-master: 1
>> +#  octo-other: 01
>> +# merge-right: 111     (maximal)
>> +#        (l1): 111
>> +#        (r1): 111
>> +#  merge-left: 1101    (maximal)
>> +#        (l2): 11111   (maximal)
>> +#        (r2): 111101  (maximal)
>> +#      (base): 1111111 (maximal)
> 
> This makes sense. (l1) and (r1) are not maximal because everything that
> can reach merge-right can also reach them.
> 
> [snip]
> 
>>  test_expect_success 'full repack creates bitmaps' '
>> -	git repack -ad &&
>> +	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
>> +		git repack -ad &&
>>  	ls .git/objects/pack/ | grep bitmap >output &&
>> -	test_line_count = 1 output
>> +	test_line_count = 1 output &&
>> +	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
>> +	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
> 
> From the diagram and bit masks, I see that the important number for
> "maximal" is 7. Could this test be run twice - one without the crosses
> and one with, and we can verify that the difference between the maximal
> commits is 7? As it is, this 111 number is hard to verify.

This number _is_ hard to verify. It is sensitive to many behaviors inside
the code of Git, including the selection algorithm and some hard-coded
limits (there's a reason we insert 100 commits on top of each side).
Further, this number changes as the algorithm is modified.

Perhaps the best way to recognize this number is that it changes by adding
5 to the previous number (these are the 5 "newly maximal" commits, since
two are already selected as tips).
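
Spelling it out: the diagram marks seven commits as maximal (master,
other, merge-right, merge-left, l2, r2, and base), and two of those
(master and other) are already among the 106 selected commits, so the
trace reports 106 + (7 - 2) = 111.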

This number changes again later, and the difference is justified by
the number of maximal commits dropping by 4.

Thanks,
-Stolee

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1
  2020-11-24  2:51         ` Jeff King
@ 2020-12-01 22:56           ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-01 22:56 UTC (permalink / raw)
  To: Jeff King; +Cc: Junio C Hamano, git, dstolee

On Mon, Nov 23, 2020 at 09:51:41PM -0500, Jeff King wrote:
> On Mon, Nov 23, 2020 at 09:48:22PM -0500, Jeff King wrote:
>
> > > I think that we probably could just use ALLOC_GROW() as you suggest.
> > > Funny enough, reading through GitHub's chat logs, apparently this is
> > > something that Peff and I talked about. So, 16 probably came from
> > > alloc_nr(), but we probably stopped short of realizing that we could
> > > just use ALLOC_GROW as-is.
> >
> > That would probably be OK. It's a bit more aggressive, which could
> > matter if you have a large number of very small bitmaps. My original
> > goal of the "grow less aggressively" patch was to keep memory usage
> > down, knowing that I was going to be holding a lot of bitmaps in memory
> > at once. But even with micro-optimizations like this, it turned out to
> > be far too big in practice (and hence Stolee's work on top to reduce the
> > total number we hold at once).
>
> Oh, sorry, I was mixing this patch up with patches 6 and 7, which touch
> buffer_grow().  This is a totally separate spot, and this is a pure
> bug-fix.
>
> I think the main reason we didn't use ALLOC_GROW() here in the beginning
> is that the ewah code was originally designed to be a separate library
> (a port of the java ewah library), and didn't depend on Git code.
>
> These days we pull in xmalloc, etc, so we should be fine to use
> ALLOC_GROW().
>
> Likewise...
>
> > I think the real test would be measuring the peak heap of the series as
> > you posted it in v2, and this version replacing this patch (and the
> > "grow less aggressively" one) with ALLOC_GROW(). On something big, like
> > repacking all of the torvalds/linux or git/git fork networks.
> >
> > If there's no appreciable difference, then definitely I think it's worth
> > the simplicity of reusing ALLOC_GROW().
>
> All of this is nonsense (though it does apply to the question of using
> ALLOC_GROW() in bitmap_grow(), via patch 7).

You and I timed this a week or two ago, but I only just returned to this
topic today. Switching to ALLOC_GROW() doesn't affect the final memory
usage at all, so I changed patch 7 up to use that instead of more or
less open-coding alloc_nr().
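
For illustration, the ALLOC_GROW() version amounts to something like
this (a rough sketch with a made-up buffer struct, not the actual ewah
code):

#include "cache.h" /* ALLOC_GROW(); assumed include for this sketch */

/* hypothetical stand-in for the ewah word array */
struct word_buffer {
        uint64_t *words;
        size_t nr;
        size_t alloc;
};

static void buffer_grow(struct word_buffer *buf, size_t needed)
{
        /*
         * ALLOC_GROW() sizes via alloc_nr(), i.e. (x + 16) * 3 / 2, so
         * this keeps the less-aggressive 3/2 growth from earlier in the
         * series without open-coding the arithmetic.
         */
        ALLOC_GROW(buf->words, needed, buf->alloc);
}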

> -Peff

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-11-24  2:43           ` Jeff King
@ 2020-12-01 23:04             ` Taylor Blau
  2020-12-01 23:37               ` Jonathan Tan
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-01 23:04 UTC (permalink / raw)
  To: Jeff King
  Cc: Martin Ågren, Junio C Hamano, Git Mailing List,
	Derrick Stolee, SZEDER Gábor

On Mon, Nov 23, 2020 at 09:43:36PM -0500, Jeff King wrote:
> On Sat, Nov 21, 2020 at 09:31:50PM -0500, Taylor Blau wrote:
>
> > > There was SZEDER's comment on that last patch in v2, where future
> > > readers of that patch will have to wonder why it does s/256/270/ in a
> > > test. I agree with SZEDER that the change should be mentioned in the
> > > commit message, even if it's just "unfortunately, we have some magicness
> > > here, plus we want to pass both with SHA-1 and SHA-256; turns out 270
> > > hits the problem we want to test for".
> >
> > Thanks for reviewing it, and noticing a couple of problems in the
> > earlier patches, too. If folks are happy with the replacement that I
> > sent [1], then I am too :-).
> >
> > I don't think that the "big" patch generated a ton of review on the
> > list, but maybe that's OK. Peff, Stolee, and I all reviewed that patch
> > extensively when deploying it at GitHub (where it has been running since
> > late Summer).
>
> Hrm. I thought you were going to integrate the extra checks I suggested
> for load_bitmap_entries_v1(). Which is looks like you did in patch 17.
> After that, the s/256/270/ hack should not be necessary anymore (if it
> is, then we should keep fixing more spots).

Oops. I even wrote down a big "S/256/270" in my notebook after you and I
talked last about it, and then promptly forgot about it before sending
v2.

In any case, I have all of that fixed up, as well as other comments and
suggestions from review, all of which were very helpful. (Thanks
everybody who took a look at this monstrously large series, and
apologies in advance for more to come ;-)).

I think I would like a little more clarification on the discussion in
[1]. From my reading, Jonathan Tan wants comments about the algorithm in
the code, but Stolee would rather rely on the commits, especially since
the algorithm changes later on in the series relative to the patch that
this is downthread from.

Once we can reach a good decision there, I'll send a v3 (which currently
lives in my fork[2]).

> -Peff

Thanks,
Taylor

[1]: https://lore.kernel.org/git/ea0c8c5d-6bc3-0dca-4fa1-fb461ed7ccb9@gmail.com/
[2]: https://github.com/ttaylorr/git/compare/tb/bitmap-build-fast-for-upstream

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-12-01 23:04             ` Taylor Blau
@ 2020-12-01 23:37               ` Jonathan Tan
  2020-12-01 23:43                 ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-12-01 23:37 UTC (permalink / raw)
  To: me; +Cc: peff, martin.agren, gitster, git, dstolee, szeder.dev, Jonathan Tan

> I think I would like a little more clarification on the discussion in
> [1]. From my reading, Jonathan Tan wants comments about the algorithm in
> the code, but Stolee would rather rely on the commits, especially since
> the algorithm changes later on in the series relative to the patch that
> this is downthread from.
> 
> Once we can reach a good decision there, I'll send a v3 (which currently
> lives in my fork[2]).

I did, but Stolee has a point that the algorithm will change later on.
I'm OK with the parts I reviewed (patches 10 to 18).

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-12-01 23:37               ` Jonathan Tan
@ 2020-12-01 23:43                 ` Taylor Blau
  2020-12-02  8:11                   ` Jonathan Tan
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-01 23:43 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, peff, martin.agren, gitster, git, dstolee, szeder.dev

On Tue, Dec 01, 2020 at 03:37:25PM -0800, Jonathan Tan wrote:
> > Once we can reach a good decision there, I'll send a v3 (which currently
> > lives in my fork[2]).
>
> I did, but Stolee has a point that the algorithm will change later on.
> I'm OK with the parts I reviewed (patches 10 to 18).

Ah, good to know. I'll hold off on a v3, then, until you have had a
chance to look through the remaining handful of patches (if you were
planning on doing that). I haven't touched those locally, so it'll be
good to hear any comments you might have before sending another version.

Thanks for all of your very helpful review :-).

Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE
  2020-11-17 21:48   ` [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
@ 2020-12-02  7:13     ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  7:13 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> Just because a commit happened to be bitmapped last time does not make
> it a good candidate for having a bitmap this time. In particular, we may
> choose bitmaps based on how recent they are in history, or whether a ref
> tip points to them, and those things will change. We're better off
> re-considering fresh which commits are good candidates.
> 
> Reusing the existing bitmap _is_ a reasonable thing to do to save
> computation. But only reusing exact bitmaps is a weak form of this. If
> we have an old bitmap for A and now want a new bitmap for its child, we
> should be able to compute that only by looking at trees and commits that are new
> to the child. But this code would consider only exact reuse (which is
> perhaps why it was eager to select those commits in the first place).

Makes sense.

> -int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
> -			     struct packing_data *mapping,
> -			     kh_oid_map_t *reused_bitmaps,
> -			     int show_progress)
> +uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
> +				struct packing_data *mapping)

[snip body of function]

Here, a lot of the function is deleted, and only the part that creates
the mapping from old indices to new indices remains - hence, the
renaming of the function. OK.

Overall this looks good. I was wondering if there would be any functions
now unused, but looking at the deleted lines, that doesn't seem to be
the case.
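
To make sure I understand the mapping itself: conceptually it is just
old-position -> new-position, so rebuilding an old bitmap into the new
object order (as a later patch does) is basically a translation pass,
roughly (sketch only, with a made-up sentinel for old objects that have
no position in the new pack):

static int rebuild_sketch(const uint32_t *mapping, size_t old_nr,
                          struct bitmap *old, struct bitmap *dest)
{
        size_t i;

        for (i = 0; i < old_nr; i++) {
                if (!bitmap_get(old, i))
                        continue;
                if (mapping[i] == MAPPING_NONE) /* hypothetical sentinel */
                        return -1; /* cannot rebuild this bitmap */
                bitmap_set(dest, mapping[i]);
        }
        return 0;
}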

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()'
  2020-11-17 21:48   ` [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
@ 2020-12-02  7:17     ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  7:17 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> +struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
> +				      struct commit *commit)
> +{
> +	khiter_t hash_pos = kh_get_oid_map(bitmap_git->bitmaps,
> +					   commit->object.oid);
> +	if (hash_pos >= kh_end(bitmap_git->bitmaps))
> +		return NULL;
> +	return lookup_stored_bitmap(kh_value(bitmap_git->bitmaps, hash_pos));
> +}

The new function.

>  static int add_to_include_set(struct bitmap_index *bitmap_git,
>  			      struct include_data *data,
> -			      const struct object_id *oid,
> +			      struct commit *commit,
>  			      int bitmap_pos)
>  {
> -	khiter_t hash_pos;
> +	struct ewah_bitmap *partial;
>  
>  	if (data->seen && bitmap_get(data->seen, bitmap_pos))
>  		return 0;
> @@ -476,10 +486,9 @@ static int add_to_include_set(struct bitmap_index *bitmap_git,
>  	if (bitmap_get(data->base, bitmap_pos))
>  		return 0;
>  
> -	hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
> -	if (hash_pos < kh_end(bitmap_git->bitmaps)) {
> -		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
> -		bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
> +	partial = bitmap_for_commit(bitmap_git, commit);
> +	if (partial) {
> +		bitmap_or_ewah(data->base, partial);
>  		return 0;
>  	}

A straightforward mechanical change. The function invocation replaces
conversion from commit to oid (which is why add_to_include_set() now
takes a struct commit * instead of a struct object_id *) and all the
other deleted lines here.

> @@ -1297,12 +1305,9 @@ void test_bitmap_walk(struct rev_info *revs)
>  		bitmap_git->version, bitmap_git->entry_count);
>  
>  	root = revs->pending.objects[0].item;
> -	pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
> -
> -	if (pos < kh_end(bitmap_git->bitmaps)) {
> -		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
> -		struct ewah_bitmap *bm = lookup_stored_bitmap(st);
> +	bm = bitmap_for_commit(bitmap_git, (struct commit *)root);
>  
> +	if (bm) {
>  		fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
>  			oid_to_hex(&root->oid), (int)bm->bit_size, ewah_checksum(bm));
>  

Same here. LGTM.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()'
  2020-11-17 21:48   ` [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
@ 2020-12-02  7:20     ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  7:20 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> +static int add_commit_to_bitmap(struct bitmap_index *bitmap_git,
> +				struct bitmap **base,
> +				struct commit *commit)
> +{
> +	struct ewah_bitmap *or_with = bitmap_for_commit(bitmap_git, commit);
> +
> +	if (!or_with)
> +		return 0;
> +
> +	if (*base == NULL)
> +		*base = ewah_to_bitmap(or_with);
> +	else
> +		bitmap_or_ewah(*base, or_with);
> +
> +	return 1;
> +}
> +
>  static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
>  				   struct rev_info *revs,
>  				   struct object_list *roots,
> @@ -544,21 +561,10 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
>  		struct object *object = roots->item;
>  		roots = roots->next;
>  
> -		if (object->type == OBJ_COMMIT) {
> -			khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
> -
> -			if (pos < kh_end(bitmap_git->bitmaps)) {
> -				struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
> -				struct ewah_bitmap *or_with = lookup_stored_bitmap(st);

The code from kh_get_oid_map() to lookup_stored_bitmap() here now
exists, in add_commit_to_bitmap(), in the form of an invocation to
bitmap_for_commit(). Which is correct - that is exactly what
bitmap_for_commit() does.

Looks good.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps
  2020-11-17 21:48   ` [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
@ 2020-12-02  7:28     ` Jonathan Tan
  2020-12-02 16:21       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  7:28 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> From: Derrick Stolee <dstolee@microsoft.com>
> 
> When constructing new bitmaps, we perform a commit and tree walk in
> fill_bitmap_commit() and fill_bitmap_tree(). This walk would benefit
> from using existing bitmaps when available. We must track the existing
> bitmaps and translate them into the new object order, but this is
> generally faster than parsing trees.

Makes sense.

> In fill_bitmap_commit(), we must reorder things somewhat. The priority
> queue walks commits from newest-to-oldest, which means we correctly stop
> walking when reaching a commit with a bitmap. 

Makes sense.

> However, if we walk trees
> from top to bottom, then we might be parsing trees that are actually
> part of a re-used bitmap. 

Isn't the issue that we shouldn't walk trees at all before exhausting
our commit search, not the direction that we walk the trees in (top to
bottom or bottom to top or whatever)?

> To avoid over-walking trees, add them to a
> LIFO queue and walk them from bottom-to-top after exploring commits
> completely.

Just to clarify - would it work just as well with a FIFO queue (not LIFO
queue)? It seems to me that the most important part is doing this after
exploring commits completely.

> On git.git, this reduces a second immediate bitmap computation from 2.0s
> to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
> network, we go from 227s to 198s.

Nice timings.

>  static void fill_bitmap_commit(struct bb_commit *ent,
>  			       struct commit *commit,
> -			       struct prio_queue *queue)
> +			       struct prio_queue *queue,
> +			       struct prio_queue *tree_queue,
> +			       struct bitmap_index *old_bitmap,
> +			       const uint32_t *mapping)
>  {
>  	if (!ent->bitmap)
>  		ent->bitmap = bitmap_new();
>  
> -	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
>  	prio_queue_put(queue, commit);
>  
>  	while (queue->nr) {
>  		struct commit_list *p;
>  		struct commit *c = prio_queue_get(queue);
>  
> +		/*
> +		 * If this commit has an old bitmap, then translate that
> +		 * bitmap and add its bits to this one. No need to walk
> +		 * parents or the tree for this commit.
> +		 */

This comment should be right before "if (old && ...", I think. Here, it
is a bit misleading. It leads me to think that "this commit has an old
bitmap" means old_bitmap != NULL, but it is actually old != NULL.

> +		if (old_bitmap && mapping) {

This is defensive in that if we somehow calculate old_bitmap without
mapping (or the other way around) (which is a bug), things just slow
down instead of breaking. I'm OK with this, but I still wanted to call
it out.

> +			struct ewah_bitmap *old;
> +
> +			old = bitmap_for_commit(old_bitmap, c);
> +			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
> +				continue;
> +		}
> +
> +		/*
> +		 * Mark ourselves and queue our tree. The commit
> +		 * walk ensures we cover all parents.
> +		 */
>  		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
> -		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
> +		prio_queue_put(tree_queue, get_commit_tree(c));
>  
>  		for (p = c->parents; p; p = p->next) {
>  			int pos = find_object_pos(&p->item->object.oid);

[snip]

> @@ -386,6 +408,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  	size_t i;
>  	int nr_stored = 0; /* for progress */
>  	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
> +	struct prio_queue tree_queue = { NULL };

NULL here does mean LIFO queue. OK.

> @@ -395,6 +420,12 @@ void bitmap_writer_build(struct packing_data *to_pack)
>  	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
>  		the_repository);
>  
> +	old_bitmap = prepare_bitmap_git(to_pack->repo);
> +	if (old_bitmap)
> +		mapping = create_bitmap_mapping(old_bitmap, to_pack);
> +	else
> +		mapping = NULL;

Here, we prepare the old_bitmap and mapping arguments. OK.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-11-17 21:48   ` [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
@ 2020-12-02  7:44     ` Jonathan Tan
  2020-12-02 16:30       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  7:44 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> However, there was a (significant) drawback: wide histories with many
> refs had an explosion of memory costs to compute the commit bitmasks
> during the exploration that discovers these intermediate commits. Since
> these wide histories are unlikely to repeat walking objects, the cost
> of walking objects multiple times was not significant before. But now, the
> commit walk *before computing bitmaps* is incredibly expensive.

Do you have numbers for how large the commit bitmasks are?

> In an effort to discover a happy medium, this change reduces the walk
> for intermediate commits to only the first-parent history. This focuses
> the walk on how the histories converge, which still has significant
> reduction in repeat object walks. It is still possible to create
> quadratic behavior in this version, but it is probably less likely in
> realistic data shapes.

Would this work? I agree that the width of the commit bitmasks would go
down (and there would also be fewer commit bitmasks generated, further
increasing the memory savings). But intuitively, if there is a commit
that is selected and only accessible through non-1st-parent links, then
any bitmaps generated for it cannot be contributed to its descendants
(since there was no descendant-to-ancestor walk that could reach it in
order to form the reverse edge).

> Here is some data taken on a fresh clone of the kernel:
> 
>              |   runtime (sec)    |   peak heap (GB)   |
>              |                    |                    |
>              |   from  |   with   |   from  |   with   |
>              | scratch | existing | scratch | existing |
>   -----------+---------+----------+---------+-----------
>     original |  64.044 |   83.241 |   2.088 |    2.194 |
>   last patch |  44.811 |   27.828 |   2.289 |    2.358 |
>   this patch | 100.641 |   35.560 |   2.152 |    2.224 |

Hmm...the jump from 44 to 100 seems rather large.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-11-17 21:48   ` [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
@ 2020-12-02  8:08     ` Jonathan Tan
  2020-12-02 16:35       ` Taylor Blau
  0 siblings, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  8:08 UTC (permalink / raw)
  To: me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev, Jonathan Tan

> diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
> index b0493d971d..3ac90ae410 100644
> --- a/pack-bitmap-write.c
> +++ b/pack-bitmap-write.c
> @@ -195,7 +195,8 @@ struct bitmap_builder {
>  };
>  
>  static void bitmap_builder_init(struct bitmap_builder *bb,
> -				struct bitmap_writer *writer)
> +				struct bitmap_writer *writer,
> +				struct bitmap_index *old_bitmap)
>  {
>  	struct rev_info revs;
>  	struct commit *commit;
> @@ -234,12 +235,26 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
>  
>  		c_ent = bb_data_at(&bb->data, commit);
>  
> +		if (old_bitmap && bitmap_for_commit(old_bitmap, commit)) {
> +			/*
> +			 * This commit has an existing bitmap, so we can
> +			 * get its bits immediately without an object
> +			 * walk. There is no need to continue walking
> +			 * beyond this commit.
> +			 */

OK - as far as I understand, the reason for continuing the walk would be
to find reverse edges that connect this commit and its ancestors so that
this commit's ancestors can contribute bitmaps to this commit, but we do
not need such contributions, so we do not need to continue the walk.
Makes sense.

> +			c_ent->maximal = 1;
> +			p = NULL;

Here, we're setting maximal without also setting a bit in this commit's
commit_mask. This is fine because we're not propagating this commit's
commit_mask to any parents (we're not continuing the walk from this
commit), but it seems like a code smell. Suggested fix is below.

> +		}
> +
>  		if (c_ent->maximal) {
>  			num_maximal++;
>  			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
>  			bb->commits[bb->commits_nr++] = commit;
>  		}

As far as I can tell, this means that this commit occupies a bit
position in the commit mask that it doesn't need. Could this go into a
separate list instead, to be appended to bb->commits at the very end?

We could even skip the whole maximal stuff (for commits with existing
bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
we're going to append to bb->commits at the very end". That has the
advantage of not having to redefine "maximal".

>  
> +		if (!c_ent->commit_mask)
> +			continue;

I think this should be moved as far up as possible (right after
the call to bb_data_at()) and commented, something like:

  If there is no commit_mask, there is no reason to iterate over this
  commit; it is not selected (if it were, it would not have a blank
  commit mask) and all its children have existing bitmaps (see the
  comment starting with "This commit has an existing bitmap" below), so
  it does not contribute anything to the final bitmap file or its
  descendants.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 00/24] pack-bitmap: bitmap generation improvements
  2020-12-01 23:43                 ` Taylor Blau
@ 2020-12-02  8:11                   ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-02  8:11 UTC (permalink / raw)
  To: me; +Cc: jonathantanmy, peff, martin.agren, gitster, git, dstolee, szeder.dev

> On Tue, Dec 01, 2020 at 03:37:25PM -0800, Jonathan Tan wrote:
> > > Once we can reach a good decision there, I'll send a v3 (which currently
> > > lives in my fork[2]).
> >
> > I did, but Stolee has a point that the algorithm will change later on.
> > I'm OK with the parts I reviewed (patches 10 to 18).
> 
> Ah, good to know. I'll hold off on a v3, then, until you have had a
> chance to look through the remaining handful of patches (if you were
> planning on doing that). I haven't touched those locally, so it'll be
> good to hear any comments you might have before sending another version.
> 
> Thanks for all of your very helpful review :-).
> 
> Taylor

You're welcome! I've gone ahead and reviewed all the patches subsequent
to 18. I think others have looked at the patches prior to 10, so I'm not
planning to review the others.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps
  2020-12-02  7:28     ` Jonathan Tan
@ 2020-12-02 16:21       ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-02 16:21 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Tue, Dec 01, 2020 at 11:28:11PM -0800, Jonathan Tan wrote:
> > However, if we walk trees
> > from top to bottom, then we might be parsing trees that are actually
> > part of a re-used bitmap.
>
> Isn't the issue that we shouldn't walk trees at all before exhausting
> our commit search, not the direction that we walk the trees in (top to
> bottom or bottom to top or whatever)?

Right, the direction that we explore trees in isn't important: what
matters is that we consider them after the commits. I've clarified the
commit message to reflect this.
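
To make the ordering concrete, here is a tiny standalone sketch of the
two-phase idea (plain arrays standing in for the prio_queue-based code;
these are not the pack-bitmap-write.c functions):

  #include <stdio.h>

  int main(void)
  {
          /* newest-to-oldest, as the commit priority queue yields them */
          const char *commits[] = { "C", "B", "A" };
          const char *tree_queue[3];
          int trees_nr = 0;
          int i;

          /* phase 1: exhaust the commit walk, only *collecting* trees */
          for (i = 0; i < 3; i++) {
                  printf("visit commit %s\n", commits[i]);
                  tree_queue[trees_nr++] = commits[i];
          }

          /* phase 2: only now walk the collected trees; this pops LIFO,
           * but a FIFO would visit the same set of trees */
          while (trees_nr)
                  printf("walk tree of commit %s\n", tree_queue[--trees_nr]);

          return 0;
  }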

> > To avoid over-walking trees, add them to a
> > LIFO queue and walk them from bottom-to-top after exploring commits
> > completely.
>
> Just to clarify - would it work just as well with a FIFO queue (not LIFO
> queue)? It seems to me that the most important part is doing this after
> exploring commits completely.

Yup, see above.

> >  static void fill_bitmap_commit(struct bb_commit *ent,
> >  			       struct commit *commit,
> > -			       struct prio_queue *queue)
> > +			       struct prio_queue *queue,
> > +			       struct prio_queue *tree_queue,
> > +			       struct bitmap_index *old_bitmap,
> > +			       const uint32_t *mapping)
> >  {
> >  	if (!ent->bitmap)
> >  		ent->bitmap = bitmap_new();
> >
> > -	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
> >  	prio_queue_put(queue, commit);
> >
> >  	while (queue->nr) {
> >  		struct commit_list *p;
> >  		struct commit *c = prio_queue_get(queue);
> >
> > +		/*
> > +		 * If this commit has an old bitmap, then translate that
> > +		 * bitmap and add its bits to this one. No need to walk
> > +		 * parents or the tree for this commit.
> > +		 */
>
> This comment should be right before "if (old && ...", I think. Here, it
> is a bit misleading. It leads me to think that "this commit has an old
> bitmap" means old_bitmap != NULL, but it is actually old != NULL.

Yup, the comment is much more clear when placed there, thanks.

> > +		if (old_bitmap && mapping) {
>
> This is defensive in that if we somehow calculate old_bitmap without
> mapping (or the other way around) (which is a bug), things just slow
> down instead of breaking. I'm OK with this, but I still wanted to call
> it out.

Right, we should never have one without the other, so in that sense this
is a defensive check. IOW, this could easily be written as `if
(old_bitmap)` or `if (mapping)`, but being extra defensive here doesn't
hurt.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-02  7:44     ` Jonathan Tan
@ 2020-12-02 16:30       ` Taylor Blau
  2020-12-07 18:19         ` Jonathan Tan
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-02 16:30 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Tue, Dec 01, 2020 at 11:44:39PM -0800, Jonathan Tan wrote:
> Do you have numbers of how large the commit bitmasks are?

No, I didn't measure the size of the commit bitmasks directly, but they
are captured in the peak heap measurements that I took below.

> > In an effort to discover a happy medium, this change reduces the walk
> > for intermediate commits to only the first-parent history. This focuses
> > the walk on how the histories converge, which still has significant
> > reduction in repeat object walks. It is still possible to create
> > quadratic behavior in this version, but it is probably less likely in
> > realistic data shapes.
>
> Would this work? I agree that the width of the commit bitmasks would go
> down (and there would also be fewer commit bitmasks generated, further
> increasing the memory savings). But intuitively, if there is a commit
> that is selected and only accessible through non-1st-parent links, then
> any bitmaps generated for it cannot be contributed to its descendants
> (since there was no descendant-to-ancestor walk that could reach it in
> order to form the reverse edge).

s/bitmaps/bitmasks. We'll select commits independent of their first
parent histories, and so in the situation that you're describing, if C
reaches A only through non-1st-parent history, then A's bitmask will not
contain the bits from C.

But when generating the reachability bitmap for C, we'll still find that
we've generated a bitmap for A, and we can copy its bits directly. If
this differs from an ancestor P that _is_ in the first-parent history,
then P pushed its bits to C before calling fill_bitmap_commit() through
the reverse edges.

> > Here is some data taken on a fresh clone of the kernel:
> >
> >              |   runtime (sec)    |   peak heap (GB)   |
> >              |                    |                    |
> >              |   from  |   with   |   from  |   with   |
> >              | scratch | existing | scratch | existing |
> >   -----------+---------+----------+---------+-----------
> >     original |  64.044 |   83.241 |   2.088 |    2.194 |
> >   last patch |  44.811 |   27.828 |   2.289 |    2.358 |
> >   this patch | 100.641 |   35.560 |   2.152 |    2.224 |
>
> Hmm...the jump from 44 to 100 seems rather large.

Indeed. It's ameliorated a little bit in the later patches. We are
over-walking some objects (as in we are walking them multiple times),
but the return we get is reducing the peak heap usage from what it was
in the last patch.

In the "unfathomably large" category, this makes things tractable.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-02  8:08     ` Jonathan Tan
@ 2020-12-02 16:35       ` Taylor Blau
  2020-12-02 18:22         ` Derrick Stolee
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-02 16:35 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On Wed, Dec 02, 2020 at 12:08:08AM -0800, Jonathan Tan wrote:
> > +			c_ent->maximal = 1;
> > +			p = NULL;
>
> Here, we're setting maximal without also setting a bit in this commit's
> commit_mask. This is fine because we're not propagating this commit's
> commit_mask to any parents (we're not continuing the walk from this
> commit), but it seems like a code smell. Suggested fix is below.
>
> > +		}
> > +
> >  		if (c_ent->maximal) {
> >  			num_maximal++;
> >  			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
> >  			bb->commits[bb->commits_nr++] = commit;
> >  		}
>
> As far as I can tell, this means that this commit occupies a bit
> position in the commit mask that it doesn't need. Could this go into a
> separate list instead, to be appended to bb->commits at the very end?
>
> We could even skip the whole maximal stuff (for commits with existing
> bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
> we're going to append to bb->commits at the very end". That has the
> advantage of not having to redefine "maximal".

Hmm. I'd trust Stolee's opinion over mine here, so I'll be curious what
he has to say.

> >
> > +		if (!c_ent->commit_mask)
> > +			continue;
>
> I think this should be moved as far up as possible (right after
> the call to bb_data_at()) and commented, something like:
>
>   If there is no commit_mask, there is no reason to iterate over this
>   commit; it is not selected (if it were, it would not have a blank
>   commit mask) and all its children have existing bitmaps (see the
>   comment starting with "This commit has an existing bitmap" below), so
>   it does not contribute anything to the final bitmap file or its
>   descendants.

Good suggestion, thanks.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-02 16:35       ` Taylor Blau
@ 2020-12-02 18:22         ` Derrick Stolee
  2020-12-02 18:25           ` Taylor Blau
  2020-12-07 18:24           ` Jonathan Tan
  0 siblings, 2 replies; 174+ messages in thread
From: Derrick Stolee @ 2020-12-02 18:22 UTC (permalink / raw)
  To: Taylor Blau, Jonathan Tan
  Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev

On 12/2/2020 11:35 AM, Taylor Blau wrote:
> On Wed, Dec 02, 2020 at 12:08:08AM -0800, Jonathan Tan wrote:
>>> +			c_ent->maximal = 1;
>>> +			p = NULL;
>>
>> Here, we're setting maximal without also setting a bit in this commit's
>> commit_mask. This is fine because we're not propagating this commit's
>> commit_mask to any parents (we're not continuing the walk from this
>> commit), but it seems like a code smell. Suggested fix is below.
>>
>>> +		}
>>> +
>>>  		if (c_ent->maximal) {
>>>  			num_maximal++;
>>>  			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
>>>  			bb->commits[bb->commits_nr++] = commit;
>>>  		}
>>
>> As far as I can tell, this means that this commit occupies a bit
>> position in the commit mask that it doesn't need. Could this go into a
>> separate list instead, to be appended to bb->commits at the very end?

I don't see any value in having a second list here. That only makes
things more complicated.

>> We could even skip the whole maximal stuff (for commits with existing
>> bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
>> we're going to append to bb->commits at the very end". That has the
>> advantage of not having to redefine "maximal".
> 
> Hmm. I'd trust Stolee's opinion over mine here, so I'll be curious what
> he has to say.

It would be equivalent to add it to the list and then continuing the
loop instead of piggy-backing on the if (c_ent->maximal) block, followed
by a trivial loop over the (nullified) parents.

>>>
>>> +		if (!c_ent->commit_mask)
>>> +			continue;
>>
>> I think this should be moved as far up as possible (right after
>> the call to bb_data_at()) and commented, something like:
>>
>>   If there is no commit_mask, there is no reason to iterate over this
>>   commit; it is not selected (if it were, it would not have a blank
>>   commit mask) and all its children have existing bitmaps (see the
>>   comment starting with "This commit has an existing bitmap" below), so
>>   it does not contribute anything to the final bitmap file or its
>>   descendants.
> 
> Good suggestion, thanks.

Yeah, makes sense to me.

Thanks,
-Stolee

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-02 18:22         ` Derrick Stolee
@ 2020-12-02 18:25           ` Taylor Blau
  2020-12-07 18:26             ` Jonathan Tan
  2020-12-07 18:24           ` Jonathan Tan
  1 sibling, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-02 18:25 UTC (permalink / raw)
  To: Jonathan Tan
  Cc: Taylor Blau, Derrick Stolee, git, dstolee, gitster, peff,
	martin.agren, szeder.dev

On Wed, Dec 02, 2020 at 01:22:27PM -0500, Derrick Stolee wrote:
> >> We could even skip the whole maximal stuff (for commits with existing
> >> bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
> >> we're going to append to bb->commits at the very end". That has the
> >> advantage of not having to redefine "maximal".
> >
> > Hmm. I'd trust Stolee's opinion over mine here, so I'll be curious what
> > he has to say.
>
> It would be equivalent to add it to the list and then continuing the
> loop instead of piggy-backing on the if (c_ent->maximal) block, followed
> by a trivial loop over the (nullified) parents.

Jonathan: does that seem OK to you to leave it as-is? If you don't have
strong objections, I'll go ahead with sending v3 a little later today.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-02 16:30       ` Taylor Blau
@ 2020-12-07 18:19         ` Jonathan Tan
  2020-12-07 18:43           ` Derrick Stolee
  2020-12-07 18:48           ` Jeff King
  0 siblings, 2 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-07 18:19 UTC (permalink / raw)
  To: me; +Cc: jonathantanmy, git, dstolee, gitster, peff, martin.agren, szeder.dev

> > > In an effort to discover a happy medium, this change reduces the walk
> > > for intermediate commits to only the first-parent history. This focuses
> > > the walk on how the histories converge, which still has significant
> > > reduction in repeat object walks. It is still possible to create
> > > quadratic behavior in this version, but it is probably less likely in
> > > realistic data shapes.
> >
> > Would this work? I agree that the width of the commit bitmasks would go
> > down (and there would also be fewer commit bitmasks generated, further
> > increasing the memory savings). But intuitively, if there is a commit
> > that is selected and only accessible through non-1st-parent links, then
> > any bitmaps generated for it cannot be contributed to its descendants
> > (since there was no descendant-to-ancestor walk that could reach it in
> > order to form the reverse edge).
> 
> s/bitmaps/bitmasks. 

I do mean bitmaps there - bitmasks are contributed to parents, but
bitmaps are contributed to descendants, if I remember correctly.

> We'll select commits independent of their first
> parent histories, and so in the situation that you're describing, if C
> reaches A only through non-1st-parent history, then A's bitmask will not
> contain the bits from C.

C is the descendant and A is the ancestor. Yes, A's bitmask will not
contain the bits from C.

> But when generating the reachability bitmap for C, we'll still find that
> we've generated a bitmap for A, and we can copy its bits directly. 

Here is my contention - this can happen only if there is a reverse edge
from A to C, as far as I can tell, but such a reverse edge has not been
formed.

> If
> this differs from an ancestor P that _is_ in the first-parent history,
> then P pushed its bits to C before calling fill_bitmap_commit() through
> the reverse edges.
> 
> > > Here is some data taken on a fresh clone of the kernel:
> > >
> > >              |   runtime (sec)    |   peak heap (GB)   |
> > >              |                    |                    |
> > >              |   from  |   with   |   from  |   with   |
> > >              | scratch | existing | scratch | existing |
> > >   -----------+---------+----------+---------+-----------
> > >     original |  64.044 |   83.241 |   2.088 |    2.194 |
> > >   last patch |  44.811 |   27.828 |   2.289 |    2.358 |
> > >   this patch | 100.641 |   35.560 |   2.152 |    2.224 |
> >
> > Hmm...the jump from 44 to 100 seems rather large.
> 
> Indeed. It's ameliorated a little bit in the later patches. We are
> over-walking some objects (as in we are walking them multiple times),
> but the return we get is reducing the peak heap usage from what it was
> in the last patch.
> 
> In the "unfathomably large" category, this makes things tractable.

Quoting from the next patch [1]:

>              |   runtime (sec)    |   peak heap (GB)   |
>              |                    |                    |
>              |   from  |   with   |   from  |   with   |
>              | scratch | existing | scratch | existing |
>   -----------+---------+----------+---------+-----------
>   last patch | 100.641 |   35.560 |   2.152 |    2.224 |
>   this patch |  99.720 |   11.696 |   2.152 |    2.217 |

That is true, but it is not ameliorated much :-(

If you have steps to generate these timings, I would like to try
comparing the performance between all patches and all-except-23.

[1] https://lore.kernel.org/git/42399a1c2e52e1d055a2d0ad96af2ca4dce6b1a0.1605649533.git.me@ttaylorr.com/

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-02 18:22         ` Derrick Stolee
  2020-12-02 18:25           ` Taylor Blau
@ 2020-12-07 18:24           ` Jonathan Tan
  2020-12-07 19:20             ` Derrick Stolee
  1 sibling, 1 reply; 174+ messages in thread
From: Jonathan Tan @ 2020-12-07 18:24 UTC (permalink / raw)
  To: stolee
  Cc: me, jonathantanmy, git, dstolee, gitster, peff, martin.agren, szeder.dev

> On 12/2/2020 11:35 AM, Taylor Blau wrote:
> > On Wed, Dec 02, 2020 at 12:08:08AM -0800, Jonathan Tan wrote:
> >>> +			c_ent->maximal = 1;
> >>> +			p = NULL;
> >>
> >> Here, we're setting maximal without also setting a bit in this commit's
> >> commit_mask. This is fine because we're not propagating this commit's
> >> commit_mask to any parents (we're not continuing the walk from this
> >> commit), but it seems like a code smell. Suggested fix is below.
> >>
> >>> +		}
> >>> +
> >>>  		if (c_ent->maximal) {
> >>>  			num_maximal++;
> >>>  			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
> >>>  			bb->commits[bb->commits_nr++] = commit;
> >>>  		}
> >>
> >> As far as I can tell, this means that this commit occupies a bit
> >> position in the commit mask that it doesn't need. Could this go into a
> >> separate list instead, to be appended to bb->commits at the very end?
> 
> I don't see any value in having a second list here. That only makes
> things more complicated.

It does make things more complicated, but it could help shrink commit
bitmasks (which seem to be a concern, according to patch 23).

Suppose num_maximal was 3 and we encountered such a commit (not
selected, but has an old bitmap). So we increment num_maximal. Then, we
encounter a selected commit. That commit would then have a bitmask of
???01. If we had not incremented num_maximal (which would require a
second list), then the bitmask would be ???1.
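
To sketch what I mean with the second list (hypothetical names, plain
arrays instead of the real bb->commits/ALLOC_GROW bookkeeping):

  #include <stdio.h>

  int main(void)
  {
          const char *commits[8];   /* stand-in for bb->commits */
          const char *reused[8];    /* commits that already have bitmaps */
          int commits_nr = 0, reused_nr = 0, num_maximal = 0, i;

          /* during the walk: selected/maximal commits get a bit position */
          commits[commits_nr++] = "B"; num_maximal++;
          commits[commits_nr++] = "D"; num_maximal++;
          /* ...but a commit with an existing bitmap goes to the side list
           * and does not bump num_maximal, so it widens nobody's mask */
          reused[reused_nr++] = "C";

          /* after the walk, append the side list so its bitmaps are still
           * written (computed cheaply from the existing ones) */
          for (i = 0; i < reused_nr; i++)
                  commits[commits_nr++] = reused[i];

          printf("bit positions used: %d\n", num_maximal); /* 2, not 3 */
          for (i = 0; i < commits_nr; i++)
                  printf("bb->commits[%d] = %s\n", i, commits[i]);
          return 0;
  }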

> >> We could even skip the whole maximal stuff (for commits with existing
> >> bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
> >> we're going to append to bb->commits at the very end". That has the
> >> advantage of not having to redefine "maximal".
> > 
> > Hmm. I'd trust Stolee's opinion over mine here, so I'll be curious what
> > he has to say.
> 
> It would be equivalent to add it to the list and then continuing the
> loop instead of piggy-backing on the if (c_ent->maximal) block, followed
> by a trivial loop over the (nullified) parents.

That is true. This suggestion was for code clarity, not for correctness.

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-02 18:25           ` Taylor Blau
@ 2020-12-07 18:26             ` Jonathan Tan
  0 siblings, 0 replies; 174+ messages in thread
From: Jonathan Tan @ 2020-12-07 18:26 UTC (permalink / raw)
  To: me
  Cc: jonathantanmy, stolee, git, dstolee, gitster, peff, martin.agren,
	szeder.dev

> On Wed, Dec 02, 2020 at 01:22:27PM -0500, Derrick Stolee wrote:
> > >> We could even skip the whole maximal stuff (for commits with existing
> > >> bitmaps) and replace "c_ent->maximal = 1;" above with "add to list that
> > >> we're going to append to bb->commits at the very end". That has the
> > >> advantage of not having to redefine "maximal".
> > >
> > > Hmm. I'd trust Stolee's opinion over mine here, so I'll be curious what
> > > he has to say.
> >
> > It would be equivalent to add it to the list and then continuing the
> > loop instead of piggy-backing on the if (c_ent->maximal) block, followed
> > by a trivial loop over the (nullified) parents.
> 
> Jonathan: does that seem OK to you to leave it as-is? If you don't have
> strong objections, I'll go ahead with sending v3 a little later today.

Like I (just) said in [1], I think that my comment stands, but this is a
minor and local issue that does not affect the functionality of the
overall patch set so I think you can go ahead and send v3.

[1] https://lore.kernel.org/git/20201207182418.3034961-1-jonathantanmy@google.com/

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-07 18:19         ` Jonathan Tan
@ 2020-12-07 18:43           ` Derrick Stolee
  2020-12-07 18:45             ` Derrick Stolee
  2020-12-07 18:48           ` Jeff King
  1 sibling, 1 reply; 174+ messages in thread
From: Derrick Stolee @ 2020-12-07 18:43 UTC (permalink / raw)
  To: Jonathan Tan, me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev

On 12/7/2020 1:19 PM, Jonathan Tan wrote:
>>>> In an effort to discover a happy medium, this change reduces the walk
>>>> for intermediate commits to only the first-parent history. This focuses
>>>> the walk on how the histories converge, which still has significant
>>>> reduction in repeat object walks. It is still possible to create
>>>> quadratic behavior in this version, but it is probably less likely in
>>>> realistic data shapes.
>>>
>>> Would this work? I agree that the width of the commit bitmasks would go
>>> down (and there would also be fewer commit bitmasks generated, further
>>> increasing the memory savings). But intuitively, if there is a commit
>>> that is selected and only accessible through non-1st-parent links, then
>>> any bitmaps generated for it cannot be contributed to its descendants
>>> (since there was no descendant-to-ancestor walk that could reach it in
>>> order to form the reverse edge).
>>
>> s/bitmaps/bitmasks. 
> 
> I do mean bitmaps there - bitmasks are contributed to parents, but
> bitmaps are contributed to descendants, if I remember correctly.

Ah, the confusion centers around the word "contributed".

Yes, without walking all the parents, we will not populate the
reverse edges with all of the possible connections. Thus, the
step that pushes reachability bitmap bits along the reverse edges
will not be as effective.

And this is the whole point: the reverse-edges existed to get us
into a state of _never_ walking an object multiple times, but that
ended up being too expensive to guarantee. This change relaxes that
condition in a way that still works for large, linear histories.

Since "pack-bitmap-write: fill bitmap with commit history" changed
fill_bitmap_commit() to walk commits until reaching those already in
the precomputed reachability bitmap, it will correctly walk far
enough to compute the reachability bitmap for that commit. It might
just walk objects that are part of _another_, already computed bitmap
that is not reachable via the first-parent history.

The very next patch "pack-bitmap-write: better reuse bitmaps" fixes
this problem by checking for computed bitmaps during the walk in
fill_bitmap_commit().

>> We'll select commits independent of their first
>> parent histories, and so in the situation that you're describing, if C
>> reaches A only through non-1st-parent history, then A's bitmask will not
>> contain the bits from C.
> 
> C is the descendant and A is the ancestor. Yes, A's bitmask will not
> contain the bits from C.
> 
>> But when generating the reachability bitmap for C, we'll still find that
>> we've generated a bitmap for A, and we can copy its bits directly. 
> 
> Here is my contention - this can happen only if there is a reverse edge
> from A to C, as far as I can tell, but such a reverse edge has not been
> formed.

See above. This patch is completely correct given the changes to
fill_bitmap_commit() from earlier. It just needs a tweak (in the
next patch) to recover some of the performance.

>> If
>> this differs from an ancestor P that _is_ in the first-parent history,
>> then P pushed its bits to C before calling fill_bitmap_commit() through
>> the reverse edges.
>>
>>>> Here is some data taken on a fresh clone of the kernel:
>>>>
>>>>              |   runtime (sec)    |   peak heap (GB)   |
>>>>              |                    |                    |
>>>>              |   from  |   with   |   from  |   with   |
>>>>              | scratch | existing | scratch | existing |
>>>>   -----------+---------+----------+---------+-----------
>>>>     original |  64.044 |   83.241 |   2.088 |    2.194 |
>>>>   last patch |  44.811 |   27.828 |   2.289 |    2.358 |
>>>>   this patch | 100.641 |   35.560 |   2.152 |    2.224 |
>>>
>>> Hmm...the jump from 44 to 100 seems rather large.
>>
>> Indeed. It's ameliorated a little bit in the later patches. We are
>> over-walking some objects (as in we are walking them multiple times),
>> but the return we get is reducing the peak heap usage from what it was
>> in the last patch.
>>
>> In the "unfathomably large" category, this makes things tractable.
> 
> Quoting from the next patch [1]:
> 
>>              |   runtime (sec)    |   peak heap (GB)   |
>>              |                    |                    |
>>              |   from  |   with   |   from  |   with   |
>>              | scratch | existing | scratch | existing |
>>   -----------+---------+----------+---------+-----------
>>   last patch | 100.641 |   35.560 |   2.152 |    2.224 |
>>   this patch |  99.720 |   11.696 |   2.152 |    2.217 |
> 
> That is true, but it is not ameliorated much :-(
> 
> If you have steps to generate these timings, I would like to try
> comparing the performance between all patches and all-except-23.
> 
> [1] https://lore.kernel.org/git/42399a1c2e52e1d055a2d0ad96af2ca4dce6b1a0.1605649533.git.me@ttaylorr.com/

The biggest problem is that all-except-23 is an unacceptable
final state, since it has a performance blowout on super-wide
repos such as the git/git fork network. Perhaps Taylor could
include some performance numbers on that, but I'm pretty sure
that the calculation literally OOMs instead of completing. It
might be worth an explicit mention in the patch.

It might also be better to always include a baseline from the
start of the series to ensure that the final state is better
than the initial state. With only the last/this comparison,
it doesn't look great when we backtrack in performance (even
when it is necessary to do so).

Thanks,
-Stolee

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-07 18:43           ` Derrick Stolee
@ 2020-12-07 18:45             ` Derrick Stolee
  0 siblings, 0 replies; 174+ messages in thread
From: Derrick Stolee @ 2020-12-07 18:45 UTC (permalink / raw)
  To: Jonathan Tan, me; +Cc: git, dstolee, gitster, peff, martin.agren, szeder.dev

On 12/7/2020 1:43 PM, Derrick Stolee wrote:
> The very next patch "pack-bitmap-write: better reuse bitmaps" fixes
> this problem by checking for computed bitmaps during the walk in
> fill_bitmap_commit().

Of course I got confused and instead I meant to refer to the _previous_
patch, "pack-bitmap-write: use existing bitmaps".

-Stolee


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-07 18:19         ` Jonathan Tan
  2020-12-07 18:43           ` Derrick Stolee
@ 2020-12-07 18:48           ` Jeff King
  1 sibling, 0 replies; 174+ messages in thread
From: Jeff King @ 2020-12-07 18:48 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, martin.agren, szeder.dev

On Mon, Dec 07, 2020 at 10:19:09AM -0800, Jonathan Tan wrote:

> Quoting from the next patch [1]:
> 
> >              |   runtime (sec)    |   peak heap (GB)   |
> >              |                    |                    |
> >              |   from  |   with   |   from  |   with   |
> >              | scratch | existing | scratch | existing |
> >   -----------+---------+----------+---------+-----------
> >   last patch | 100.641 |   35.560 |   2.152 |    2.224 |
> >   this patch |  99.720 |   11.696 |   2.152 |    2.217 |
> 
> That is true, but it is not ameliorated much :-(
> 
> If you have steps to generate these timings, I would like to try
> comparing the performance between all patches and all-except-23.

Yes, the drop in CPU performance is disappointing. And there may be a
better way of selecting the commits that recovers some of it.

But all-except-23 is not workable from a memory usage perspective.
Originally we did not have that commit at all, and a full repack of our
git/git fork network (i.e., all forks stuffed into one alternates repo)
went from 16GB to OOM-ing after growing 80+GB (I don't know how large it
would have gone on a bigger machine).

-Peff

^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-07 18:24           ` Jonathan Tan
@ 2020-12-07 19:20             ` Derrick Stolee
  0 siblings, 0 replies; 174+ messages in thread
From: Derrick Stolee @ 2020-12-07 19:20 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: me, git, dstolee, gitster, peff, martin.agren, szeder.dev

On 12/7/2020 1:24 PM, Jonathan Tan wrote:
>> On 12/2/2020 11:35 AM, Taylor Blau wrote:
>>> On Wed, Dec 02, 2020 at 12:08:08AM -0800, Jonathan Tan wrote:
>>>>> +			c_ent->maximal = 1;
>>>>> +			p = NULL;
>>>>
>>>> Here, we're setting maximal without also setting a bit in this commit's
>>>> commit_mask. This is fine because we're not propagating this commit's
>>>> commit_mask to any parents (we're not continuing the walk from this
>>>> commit), but it seems like a code smell. Suggested fix is below.
>>>>
>>>>> +		}
>>>>> +
>>>>>  		if (c_ent->maximal) {
>>>>>  			num_maximal++;
>>>>>  			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
>>>>>  			bb->commits[bb->commits_nr++] = commit;
>>>>>  		}
>>>>
>>>> As far as I can tell, this means that this commit occupies a bit
>>>> position in the commit mask that it doesn't need. Could this go into a
>>>> separate list instead, to be appended to bb->commits at the very end?
>>
>> I don't see any value in having a second list here. That only makes
>> things more complicated.
> 
> It does make things more complicated, but it could help shrink commit
> bitmasks (which seem to be a concern, according to patch 23).
> 
> Suppose num_maximal was 3 and we encountered such a commit (not
> selected, but has an old bitmap). So we increment num_maximal. Then, we
> encounter a selected commit. That commit would then have a bitmask of
> ???01. If we had not incremented num_maximal (which would require a
> second list), then the bitmask would be ???1.

OK, I see the value. The value is bounded, since the number of
these "0" gaps is bounded by the number of selected commits _and_
reduces the possible number of maximal commits.

However, that seems like enough justification to create the second
list.

Thanks,
-Stolee

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v3 00/24] pack-bitmap: bitmap generation improvements
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (23 preceding siblings ...)
  2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
@ 2020-12-08  0:04 ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
                     ` (24 more replies)
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
  25 siblings, 25 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

Here's an updated v3 of the series from me, Stolee, and Peff to improve the
CPU performance of generating reachability bitmaps.

Not a great deal has changed since last time, though this version does
incorporate feedback from Jonathan Tan and Junio (thanks, both, for your
review!). A range-diff is below for convenience, but the major
highlights are:

  - ALLOC_GROW() is now used in more places (this didn't measurably
    affect peak-heap usage, so it's a pure nicety to avoid duplicating
    that logic throughout the ewah code)

  - Some later commits have been reworded to add additional clarity

  - bitmap_diff_nonzero() was replaced with bitmap_is_subset(), and the
    implementation amended to follow Junio's suggestion

  - The final patches have been slightly modified to avoid allocating
    extra bits in the bitmasks for cases where reachability bitmaps have
    already been generated for those commits (suggestion courtesy of
    Jonathan Tan)

I'm hopeful that this will be in good shape for queuing up, since
Jonathan had a chance to review the whole series. Thanks!

Derrick Stolee (9):
  pack-bitmap-write: fill bitmap with commit history
  bitmap: implement bitmap_is_subset()
  commit: implement commit_list_contains()
  t5310: add branch-based checks
  pack-bitmap-write: rename children to reverse_edges
  pack-bitmap-write: build fewer intermediate bitmaps
  pack-bitmap-write: use existing bitmaps
  pack-bitmap-write: relax unique rewalk condition
  pack-bitmap-write: better reuse bitmaps

Jeff King (11):
  pack-bitmap: fix header size check
  pack-bitmap: bounds-check size of cache extension
  t5310: drop size of truncated ewah bitmap
  rev-list: die when --test-bitmap detects a mismatch
  ewah: factor out bitmap growth
  ewah: make bitmap growth less aggressive
  ewah: implement bitmap_or()
  ewah: add bitmap_dup() function
  pack-bitmap-write: reimplement bitmap writing
  pack-bitmap-write: pass ownership of intermediate bitmaps
  pack-bitmap-write: ignore BITMAP_FLAG_REUSE

Taylor Blau (4):
  ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
  pack-bitmap.c: check reads more aggressively when loading
  pack-bitmap: factor out 'bitmap_for_commit()'
  pack-bitmap: factor out 'add_commit_to_bitmap()'

 builtin/pack-objects.c  |   1 -
 commit.c                |  11 +
 commit.h                |   2 +
 ewah/bitmap.c           |  54 ++++-
 ewah/ewah_bitmap.c      |  15 +-
 ewah/ewok.h             |   3 +-
 pack-bitmap-write.c     | 474 ++++++++++++++++++++++++++--------------
 pack-bitmap.c           | 139 ++++++------
 pack-bitmap.h           |   8 +-
 t/t5310-pack-bitmaps.sh | 164 +++++++++++---
 10 files changed, 576 insertions(+), 295 deletions(-)

Range-diff against v2:
 1:  07054ff8ee <  -:  ---------- ewah/ewah_bitmap.c: grow buffer past 1
 -:  ---------- >  1:  0b25ba4ca7 ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
 2:  74a13b4a6e =  2:  b455b248e4 pack-bitmap: fix header size check
 3:  db11116dac =  3:  7322427444 pack-bitmap: bounds-check size of cache extension
 4:  f779e76f82 =  4:  055bc1fe66 t5310: drop size of truncated ewah bitmap
 5:  1a9ac1c4ae =  5:  c99cacea67 rev-list: die when --test-bitmap detects a mismatch
 6:  9bb1ea3b19 =  6:  b79360383e ewah: factor out bitmap growth
 7:  f8426c7e8b <  -:  ---------- ewah: make bitmap growth less aggressive
 -:  ---------- >  7:  4b56f12932 ewah: make bitmap growth less aggressive
 8:  674e31f98e !  8:  34137a7f35 ewah: implement bitmap_or()
    @@ Commit message

         Interestingly, we have a public header declaration going back to
         e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
    -    function was never implemented.
    +    function was never implemented. That was all OK since there were no
    +    users of 'bitmap_or()', but a first caller will be added in a couple of
    +    patches.

         Signed-off-by: Jeff King <peff@peff.net>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>
 9:  a903c949d8 !  9:  fe89f87716 ewah: add bitmap_dup() function
    @@ ewah/bitmap.c: struct bitmap *bitmap_new(void)
     +
      static void bitmap_grow(struct bitmap *self, size_t word_alloc)
      {
    - 	if (word_alloc > self->word_alloc) {
    + 	size_t old_size = self->word_alloc;

      ## ewah/ewok.h ##
     @@ ewah/ewok.h: struct bitmap {
10:  c951206729 ! 10:  91cd8b1a49 pack-bitmap-write: reimplement bitmap writing
    @@ pack-bitmap-write.c: static void compute_xor_offsets(void)
     -		kh_value(writer.bitmaps, hash_pos) = stored;
     -		display_progress(writer.progress, writer.selected_nr - i);
     +	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
    -+		the_repository);
    ++			    the_repository);
     +
     +	bitmap_builder_init(&bb, &writer);
     +	for (i = bb.commits_nr; i > 0; i--) {
    @@ pack-bitmap-write.c: static void compute_xor_offsets(void)
     +		ent->bitmap = NULL;
      	}
     +	bitmap_builder_clear(&bb);
    ++
    ++	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
    ++			    the_repository);

     -	bitmap_free(base);
      	stop_progress(&writer.progress);
11:  466dd3036a = 11:  64598024ec pack-bitmap-write: pass ownership of intermediate bitmaps
12:  8e5607929d ! 12:  93fc437a3c pack-bitmap-write: fill bitmap with commit history
    @@ Metadata
      ## Commit message ##
         pack-bitmap-write: fill bitmap with commit history

    -    The fill_bitmap_commit() method assumes that every parent of the given
    -    commit is already part of the current bitmap. Instead of making that
    -    assumption, let's walk parents until we reach commits already part of
    -    the bitmap. Set the value for that parent immediately after querying to
    -    save time doing double calls to find_object_pos() and to avoid inserting
    -    the parent into the queue multiple times.
    +    The current implementation of bitmap_writer_build() creates a
    +    reachability bitmap for every walked commit. After computing a bitmap
    +    for a commit, those bits are pushed to an in-progress bitmap for its
    +    children.
    +
    +    fill_bitmap_commit() assumes the bits corresponding to objects
    +    reachable from the parents of a commit are already set. This means that
    +    when visiting a new commit, we only have to walk the objects reachable
    +    between it and any of its parents.
    +
    +    A future change to bitmap_writer_build() will relax this condition so
    +    not all parents have their bits set. Prepare for that by having
    +    'fill_bitmap_commit()' walk parents until reaching commits whose bits
    +    are already set. Then, walk the trees for these commits as well.
    +
    +    This has no functional change with the current implementation of
    +    bitmap_writer_build().

         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>
    @@ pack-bitmap-write.c: void bitmap_writer_build(struct packing_data *to_pack)
     +	clear_prio_queue(&queue);
      	bitmap_builder_clear(&bb);

    - 	stop_progress(&writer.progress);
    + 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
13:  4840c64c51 ! 13:  0d5213ba44 bitmap: add bitmap_diff_nonzero()
    @@ Metadata
     Author: Derrick Stolee <dstolee@microsoft.com>

      ## Commit message ##
    -    bitmap: add bitmap_diff_nonzero()
    +    bitmap: implement bitmap_is_subset()

    -    The bitmap_diff_nonzero() checks if the 'self' bitmap contains any bits
    -    that are not on in the 'other' bitmap.
    -
    -    Also, delete the declaration of bitmap_is_subset() as it is not used or
    -    implemented.
    +    The bitmap_is_subset() function checks if the 'self' bitmap contains any
    +    bitmaps that are not on in the 'other' bitmap. Up until this patch, it
    +    had a declaration, but no implementation or callers. A subsequent patch
    +    will want this function, so implement it here.

    +    Helped-by: Junio C Hamano <gitster@pobox.com>
         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>

    @@ ewah/bitmap.c: int bitmap_equals(struct bitmap *self, struct bitmap *other)
      	return 1;
      }

    -+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other)
    ++int bitmap_is_subset(struct bitmap *self, struct bitmap *other)
     +{
    -+	struct bitmap *small;
    -+	size_t i;
    ++	size_t common_size, i;
     +
    -+	if (self->word_alloc < other->word_alloc) {
    -+		small = self;
    -+	} else {
    -+		small = other;
    -+
    -+		for (i = other->word_alloc; i < self->word_alloc; i++) {
    -+			if (self->words[i] != 0)
    ++	if (self->word_alloc < other->word_alloc)
    ++		common_size = self->word_alloc;
    ++	else {
    ++		common_size = other->word_alloc;
    ++		for (i = common_size; i < self->word_alloc; i++) {
    ++			if (self->words[i])
     +				return 1;
     +		}
     +	}
     +
    -+	for (i = 0; i < small->word_alloc; i++) {
    -+		if ((self->words[i] & ~other->words[i]))
    ++	for (i = 0; i < common_size; i++) {
    ++		if (self->words[i] & ~other->words[i])
     +			return 1;
     +	}
    -+
     +	return 0;
     +}
     +
    @@ ewah/ewok.h: int bitmap_get(struct bitmap *self, size_t pos);
      void bitmap_free(struct bitmap *self);
      int bitmap_equals(struct bitmap *self, struct bitmap *other);
     -int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
    -+int bitmap_diff_nonzero(struct bitmap *self, struct bitmap *other);
    ++int bitmap_is_subset(struct bitmap *self, struct bitmap *other);

      struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
      struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
14:  63e846f4e8 = 14:  72e745fed8 commit: implement commit_list_contains()
15:  8b5d239333 = 15:  c2cae4a8d0 t5310: add branch-based checks
16:  60a46091bb = 16:  c0e2b6f5d9 pack-bitmap-write: rename children to reverse_edges
17:  8f7bb2dd2e = 17:  37f9636098 pack-bitmap.c: check reads more aggressively when loading
18:  5262daa330 ! 18:  e520c8fdc4 pack-bitmap-write: build fewer intermediate bitmaps
    @@ pack-bitmap-write.c: static void bitmap_builder_init(struct bitmap_builder *bb,
     +				c_not_p = 1;
     +				p_not_c = 0;
     +			} else {
    -+				c_not_p = bitmap_diff_nonzero(c_ent->commit_mask, p_ent->commit_mask);
    -+				p_not_c = bitmap_diff_nonzero(p_ent->commit_mask, c_ent->commit_mask);
    ++				c_not_p = bitmap_is_subset(c_ent->commit_mask, p_ent->commit_mask);
    ++				p_not_c = bitmap_is_subset(p_ent->commit_mask, c_ent->commit_mask);
     +			}
     +
     +			if (!c_not_p)
    @@ t/t5310-pack-bitmaps.sh: has_any () {
     +#      | / \________________________ |
     +#      |/                           \|
     +# (l2) *                             * (r2)
    -+#       \____________...____________ |
    ++#       \___________________________ |
     +#                                   \|
     +#                                    * (base)
     +#
    @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized his
      '

      test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
    -@@ t/t5310-pack-bitmaps.sh: test_expect_success 'truncated bitmap fails gracefully (ewah)' '
    - 	git rev-list --use-bitmap-index --count --all >expect &&
    - 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
    - 	test_when_finished "rm -f $bitmap" &&
    --	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
    -+	test_copy_bytes 270 <$bitmap >$bitmap.tmp &&
    - 	mv -f $bitmap.tmp $bitmap &&
    - 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
    - 	test_cmp expect actual &&
19:  a206f48614 = 19:  c3975fcf78 pack-bitmap-write: ignore BITMAP_FLAG_REUSE
20:  9928b3c7da = 20:  d5ef2c7f81 pack-bitmap: factor out 'bitmap_for_commit()'
21:  f40a39a48a = 21:  f0500190f0 pack-bitmap: factor out 'add_commit_to_bitmap()'
22:  4bf5e78a54 ! 22:  c6fde2b0c4 pack-bitmap-write: use existing bitmaps
    @@ Commit message
          In fill_bitmap_commit(), we must reorder things somewhat. The priority
         queue walks commits from newest-to-oldest, which means we correctly stop
         walking when reaching a commit with a bitmap. However, if we walk trees
    -    from top to bottom, then we might be parsing trees that are actually
    -    part of a re-used bitmap. To avoid over-walking trees, add them to a
    -    LIFO queue and walk them from bottom-to-top after exploring commits
    -    completely.
    +    interleaved with the commits, then we might be parsing trees that are
    +    actually part of a re-used bitmap. To avoid over-walking trees, add them
    +    to a LIFO queue and walk them after exploring commits completely.

         On git.git, this reduces a second immediate bitmap computation from 2.0s
         to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
    @@ pack-bitmap-write.c: static void fill_bitmap_tree(struct bitmap *bitmap,
      		struct commit_list *p;
      		struct commit *c = prio_queue_get(queue);

    -+		/*
    -+		 * If this commit has an old bitmap, then translate that
    -+		 * bitmap and add its bits to this one. No need to walk
    -+		 * parents or the tree for this commit.
    -+		 */
     +		if (old_bitmap && mapping) {
    -+			struct ewah_bitmap *old;
    -+
    -+			old = bitmap_for_commit(old_bitmap, c);
    ++			struct ewah_bitmap *old = bitmap_for_commit(old_bitmap, c);
    ++			/*
    ++			 * If this commit has an old bitmap, then translate that
    ++			 * bitmap and add its bits to this one. No need to walk
    ++			 * parents or the tree for this commit.
    ++			 */
     +			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
     +				continue;
     +		}
    @@ pack-bitmap-write.c: void bitmap_writer_build(struct packing_data *to_pack)
      	writer.to_pack = to_pack;
     @@ pack-bitmap-write.c: void bitmap_writer_build(struct packing_data *to_pack)
      	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
    - 		the_repository);
    + 			    the_repository);

     +	old_bitmap = prepare_bitmap_git(to_pack->repo);
     +	if (old_bitmap)
    @@ pack-bitmap-write.c: void bitmap_writer_build(struct packing_data *to_pack)
      	bitmap_builder_clear(&bb);
     +	free(mapping);

    - 	stop_progress(&writer.progress);
    -
    + 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
    + 			    the_repository);
23:  1da4fa0fb8 ! 23:  50d2031deb pack-bitmap-write: relax unique rewalk condition
    @@ Commit message
                      | scratch | existing | scratch | existing |
           -----------+---------+----------+---------+-----------
             original |  64.044 |   83.241 |   2.088 |    2.194 |
    -      last patch |  44.811 |   27.828 |   2.289 |    2.358 |
    -      this patch | 100.641 |   35.560 |   2.152 |    2.224 |
    +      last patch |  45.049 |   37.624 |   2.267 |    2.334 |
    +      this patch |  88.478 |   53.218 |   2.157 |    2.224 |

         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>
24:  42399a1c2e <  -:  ---------- pack-bitmap-write: better reuse bitmaps
 -:  ---------- > 24:  6b9950771e pack-bitmap-write: better reuse bitmaps
--
2.29.2.533.g07db1f5344

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v3 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 02/24] pack-bitmap: fix header size check Taylor Blau
                     ` (23 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

'ewah/ewah_bitmap.c:buffer_grow()' is responsible for growing the buffer
used to store the bits of an EWAH bitmap. It is essentially doing the
same task as the 'ALLOC_GROW()' macro, so use that instead.

This simplifies the callers of 'buffer_grow()', who no longer have to
ask for a specific size, but rather specify how much of the buffer they
need. They also no longer need to guard 'buffer_grow()' behind an if
statement, since 'ALLOC_GROW()' (and, by extension, 'buffer_grow()') is
a noop if the buffer is already large enough.

But, the most significant change is that this fixes a bug when calling
buffer_grow() with both 'alloc_size' and 'new_size' set to 1. In this
case, truncating integer math will leave the new size set to 1, causing
the buffer to never grow.

Instead, let alloc_nr() handle this, which asks for '(new_size + 16) * 3
/ 2' instead of 'new_size * 3 / 2'.
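
To spell out that arithmetic for the problematic case (new_size == 1,
using the formula above; just an illustration, not part of the patch):

    old request: new_size * 3 / 2        = 1 * 3 / 2        = 1   (no growth)
    alloc_nr():  (new_size + 16) * 3 / 2 = (1 + 16) * 3 / 2 = 25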

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/ewah_bitmap.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
index d59b1afe3d..2a8c7c5c33 100644
--- a/ewah/ewah_bitmap.c
+++ b/ewah/ewah_bitmap.c
@@ -19,6 +19,7 @@
 #include "git-compat-util.h"
 #include "ewok.h"
 #include "ewok_rlw.h"
+#include "cache.h"
 
 static inline size_t min_size(size_t a, size_t b)
 {
@@ -33,20 +34,13 @@ static inline size_t max_size(size_t a, size_t b)
 static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
 {
 	size_t rlw_offset = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
-
-	if (self->alloc_size >= new_size)
-		return;
-
-	self->alloc_size = new_size;
-	REALLOC_ARRAY(self->buffer, self->alloc_size);
+	ALLOC_GROW(self->buffer, new_size, self->alloc_size);
 	self->rlw = self->buffer + (rlw_offset / sizeof(eword_t));
 }
 
 static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
 {
-	if (self->buffer_size + 1 >= self->alloc_size)
-		buffer_grow(self, self->buffer_size * 3 / 2);
-
+	buffer_grow(self, self->buffer_size + 1);
 	self->buffer[self->buffer_size++] = value;
 }
 
@@ -137,8 +131,7 @@ void ewah_add_dirty_words(
 
 		rlw_set_literal_words(self->rlw, literals + can_add);
 
-		if (self->buffer_size + can_add >= self->alloc_size)
-			buffer_grow(self, (self->buffer_size + can_add) * 3 / 2);
+		buffer_grow(self, self->buffer_size + can_add);
 
 		if (negate) {
 			size_t i;
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 02/24] pack-bitmap: fix header size check
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
                     ` (22 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

When we parse a .bitmap header, we first check that we have enough bytes
to make a valid header. We do that based on sizeof(struct
bitmap_disk_header). However, as of 0f4d6cada8 (pack-bitmap: make bitmap
header handling hash agnostic, 2019-02-19), that struct oversizes its
checksum member to GIT_MAX_RAWSZ. That means we need to adjust for the
difference between that constant and the size of the actual hash we're
using. That commit adjusted the code which moves our pointer forward,
but forgot to update the size check.

This meant we were overly strict about the header size (requiring room
for a 32-byte worst-case hash, when sha1 is only 20 bytes). But in
practice it didn't matter because bitmap files tend to have at least 12
bytes of actual data anyway, so it was unlikely for a valid file to be
caught by this.

Let's fix it by pulling the header size into a separate variable and
using it in both spots. That fixes the bug and simplifies the code to make
it harder to have a mismatch like this in the future. It will also come
in handy in the next patch for more bounds checking.
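
For concreteness, with SHA-1 the new header-size variable works out to
(32 being GIT_MAX_RAWSZ and 20 being the_hash_algo->rawsz, per the
numbers above):

    header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz
                = sizeof(*header) - 32 + 20
                = sizeof(*header) - 12

i.e., the old check based on sizeof(*header) alone demanded 12 bytes
more than a valid SHA-1 header actually needs.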

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4077e731e8..fe5647e72e 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -138,9 +138,10 @@ static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
 static int load_bitmap_header(struct bitmap_index *index)
 {
 	struct bitmap_disk_header *header = (void *)index->map;
+	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
 
-	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
-		return error("Corrupted bitmap index (missing header data)");
+	if (index->map_size < header_size + the_hash_algo->rawsz)
+		return error("Corrupted bitmap index (too small)");
 
 	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
 		return error("Corrupted bitmap index file (wrong header)");
@@ -164,7 +165,7 @@ static int load_bitmap_header(struct bitmap_index *index)
 	}
 
 	index->entry_count = ntohl(header->entry_count);
-	index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
+	index->map_pos += header_size;
 	return 0;
 }
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 03/24] pack-bitmap: bounds-check size of cache extension
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 02/24] pack-bitmap: fix header size check Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
                     ` (21 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

A .bitmap file may have a "name hash cache" extension, which puts a
sequence of uint32_t values (one per object) at the end of the file.
When we see a flag indicating this extension, we blindly subtract the
appropriate number of bytes from our available length. However, if the
.bitmap file is too short, we'll underflow our length variable and wrap
around, thinking we have a very large length. This can lead to reading
out-of-bounds bytes while loading individual ewah bitmaps.
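
A sketch of the failure mode, with made-up numbers (the variable names
here are illustrative, not the ones used in pack-bitmap.c):

    size_t map_size   = 512;                       /* truncated .bitmap file      */
    size_t cache_size = 100000 * sizeof(uint32_t); /* hash cache for 100k objects */
    size_t remaining  = map_size - cache_size;     /* unsigned math wraps around,
                                                      leaving a huge bogus length */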

We can fix this by checking the number of available bytes when we parse
the header. The existing "truncated bitmap" test is now split into two
tests: one where we don't have this extension at all (and hence actually
do try to read a truncated ewah bitmap) and one where we realize
up-front that we can't even fit in the cache structure. We'll check
stderr in each case to make sure we hit the error we're expecting.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c           |  8 ++++++--
 t/t5310-pack-bitmaps.sh | 17 +++++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index fe5647e72e..074d9ac8f2 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -153,14 +153,18 @@ static int load_bitmap_header(struct bitmap_index *index)
 	/* Parse known bitmap format options */
 	{
 		uint32_t flags = ntohs(header->options);
+		size_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
+		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;
 
 		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
 			return error("Unsupported options for bitmap index file "
 				"(Git requires BITMAP_OPT_FULL_DAG)");
 
 		if (flags & BITMAP_OPT_HASH_CACHE) {
-			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
-			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
+			if (cache_size > index_end - index->map - header_size)
+				return error("corrupted bitmap index file (too short to fit hash cache)");
+			index->hashes = (void *)(index_end - cache_size);
+			index_end -= cache_size;
 		}
 	}
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 1d40fcad39..dbe1ffc88a 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -343,7 +343,8 @@ test_expect_success 'pack reuse respects --incremental' '
 	test_must_be_empty actual
 '
 
-test_expect_success 'truncated bitmap fails gracefully' '
+test_expect_success 'truncated bitmap fails gracefully (ewah)' '
+	test_config pack.writebitmaphashcache false &&
 	git repack -ad &&
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
@@ -352,7 +353,19 @@ test_expect_success 'truncated bitmap fails gracefully' '
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-	test_i18ngrep corrupt stderr
+	test_i18ngrep corrupt.ewah.bitmap stderr
+'
+
+test_expect_success 'truncated bitmap fails gracefully (cache)' '
+	git repack -ad &&
+	git rev-list --use-bitmap-index --count --all >expect &&
+	bitmap=$(ls .git/objects/pack/*.bitmap) &&
+	test_when_finished "rm -f $bitmap" &&
+	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	mv -f $bitmap.tmp $bitmap &&
+	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
+	test_cmp expect actual &&
+	test_i18ngrep corrupted.bitmap.index stderr
 '
 
 # have_delta <obj> <expected_base>
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 04/24] t5310: drop size of truncated ewah bitmap
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (2 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
                     ` (20 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We truncate the .bitmap file to 512 bytes and expect to run into
problems reading an individual ewah file. But this length is somewhat
arbitrary, and just happened to work when the test was added in
9d2e330b17 (ewah_read_mmap: bounds-check mmap reads, 2018-06-14).

An upcoming commit will change the size of the history we create in the
test repo, which will cause this test to fail. We can future-proof it a
bit more by reducing the size of the truncated bitmap file.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index dbe1ffc88a..8a2a3b2114 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -349,7 +349,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 05/24] rev-list: die when --test-bitmap detects a mismatch
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (3 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 06/24] ewah: factor out bitmap growth Taylor Blau
                     ` (19 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

You can use "git rev-list --test-bitmap HEAD" to check that bitmaps
produce the same answer we'd get from a regular traversal. But if we
detect an error, we only print "mismatch", and still exit with a
successful exit code.

That makes the uses of --test-bitmap in the test suite (e.g., in t5310)
mostly pointless: even if we saw an error, the tests wouldn't notice.
Let's instead call die(), which will let these tests work as designed,
and alert us if the bitmaps are bogus.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 074d9ac8f2..4431f9f120 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1328,7 +1328,7 @@ void test_bitmap_walk(struct rev_info *revs)
 	if (bitmap_equals(result, tdata.base))
 		fprintf(stderr, "OK!\n");
 	else
-		fprintf(stderr, "Mismatch!\n");
+		die("mismatch in bitmap results");
 
 	free_bitmap_index(bitmap_git);
 }
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 06/24] ewah: factor out bitmap growth
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (4 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 07/24] ewah: make bitmap growth less aggressive Taylor Blau
                     ` (18 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We auto-grow bitmaps when somebody asks to set a bit whose position is
outside of our currently allocated range. Other operations besides
single bit-setting might need to do this, too, so let's pull it into its
own function.

Note that we change the semantics a little: you now ask for the number
of words you'd like to have, not the id of the block you'd like to write
to.
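
A small example of the new calling convention (assuming 64-bit ewords,
so EWAH_BLOCK(x) is x / 64; this mirrors the bitmap_set() hunk below):

    /* to set bit 64, i.e. the first bit of the second word: */
    size_t block = EWAH_BLOCK(64);   /* block id == 1          */
    bitmap_grow(self, block + 1);    /* ...but ask for 2 words */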

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index d8cec585af..7c1ecfa6fd 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,18 +35,22 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
-void bitmap_set(struct bitmap *self, size_t pos)
+static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	size_t block = EWAH_BLOCK(pos);
-
-	if (block >= self->word_alloc) {
+	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = block ? block * 2 : 1;
+		self->word_alloc = word_alloc * 2;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
 	}
+}
 
+void bitmap_set(struct bitmap *self, size_t pos)
+{
+	size_t block = EWAH_BLOCK(pos);
+
+	bitmap_grow(self, block + 1);
 	self->words[block] |= EWAH_MASK(pos);
 }
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 07/24] ewah: make bitmap growth less aggressive
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (5 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 06/24] ewah: factor out bitmap growth Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 08/24] ewah: implement bitmap_or() Taylor Blau
                     ` (17 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

If you ask to set a bit in the Nth word and we haven't yet allocated
that many slots in our array, we'll increase the bitmap size to 2*N.
This means we might frequently end up with bitmaps that are twice the
necessary size (as soon as you ask for the biggest bit, we'll size up to
twice that).

But if we just allocate as many words as were asked for, we may not grow
fast enough. The worst case there is setting bit 0, then 1, etc. Each
time we grow we'd just extend by one more word, giving us linear
reallocations (and quadratic memory copies).

A middle ground is relying on alloc_nr(), which causes us to grow by a
factor of roughly 3/2 instead of 2. That's less aggressive than
doubling, and it may help avoid fragmenting memory. (If we start with N,
then grow twice, our total is N*(3/2)^2 = 9N/4. After growing twice,
that array of size 9N/4 can fit into the space vacated by the original
array and first growth, N+3N/2 = 10N/4 > 9N/4, leading to less
fragmentation in memory).

Our worst case still over-allocates by a factor of roughly 3/2 (you set
bit N-1, then setting bit N causes us to grow by 3/2), but our average
should be much better.
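
For a feel of the numbers (alloc_nr(x) expands to (x + 16) * 3 / 2):

    alloc_nr(64)   == (64 + 16) * 3 / 2   == 120     (~1.9x)
    alloc_nr(1024) == (1024 + 16) * 3 / 2 == 1560    (~1.5x)

so the growth factor approaches 3/2 as the bitmaps get larger.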

This isn't usually that big a deal, but it will matter as we shift the
reachability bitmap generation code to store more bitmaps in memory.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 7c1ecfa6fd..6f9e5c529b 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -37,13 +37,10 @@ struct bitmap *bitmap_new(void)
 
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	if (word_alloc > self->word_alloc) {
-		size_t old_size = self->word_alloc;
-		self->word_alloc = word_alloc * 2;
-		REALLOC_ARRAY(self->words, self->word_alloc);
-		memset(self->words + old_size, 0x0,
-			(self->word_alloc - old_size) * sizeof(eword_t));
-	}
+	size_t old_size = self->word_alloc;
+	ALLOC_GROW(self->words, word_alloc, self->word_alloc);
+	memset(self->words + old_size, 0x0,
+	       (self->word_alloc - old_size) * sizeof(eword_t));
 }
 
 void bitmap_set(struct bitmap *self, size_t pos)
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 08/24] ewah: implement bitmap_or()
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (6 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 07/24] ewah: make bitmap growth less aggressive Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 09/24] ewah: add bitmap_dup() function Taylor Blau
                     ` (16 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We have a function to bitwise-OR an ewah into an uncompressed bitmap,
but not to OR two uncompressed bitmaps. Let's add it.

Interestingly, we have a public header declaration going back to
e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
function was never implemented. That was all OK since there were no
users of 'bitmap_or()', but a first caller will be added in a couple of
patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 6f9e5c529b..0a3502603f 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -122,6 +122,15 @@ void bitmap_and_not(struct bitmap *self, struct bitmap *other)
 		self->words[i] &= ~other->words[i];
 }
 
+void bitmap_or(struct bitmap *self, const struct bitmap *other)
+{
+	size_t i;
+
+	bitmap_grow(self, other->word_alloc);
+	for (i = 0; i < other->word_alloc; i++)
+		self->words[i] |= other->words[i];
+}
+
 void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other)
 {
 	size_t original_size = self->word_alloc;
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 09/24] ewah: add bitmap_dup() function
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (7 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 08/24] ewah: implement bitmap_or() Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:04   ` [PATCH v3 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
                     ` (15 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

There's no easy way to make a copy of a bitmap. Obviously a caller can
iterate over the bits and set them one by one in a new bitmap, but we
can go much faster by copying whole words with memcpy().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 7 +++++++
 ewah/ewok.h   | 1 +
 2 files changed, 8 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 0a3502603f..b5f6376282 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,6 +35,13 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
+struct bitmap *bitmap_dup(const struct bitmap *src)
+{
+	struct bitmap *dst = bitmap_word_alloc(src->word_alloc);
+	COPY_ARRAY(dst->words, src->words, src->word_alloc);
+	return dst;
+}
+
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	size_t old_size = self->word_alloc;
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 011852bef1..1fc555e672 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -173,6 +173,7 @@ struct bitmap {
 
 struct bitmap *bitmap_new(void);
 struct bitmap *bitmap_word_alloc(size_t word_alloc);
+struct bitmap *bitmap_dup(const struct bitmap *src);
 void bitmap_set(struct bitmap *self, size_t pos);
 void bitmap_unset(struct bitmap *self, size_t pos);
 int bitmap_get(struct bitmap *self, size_t pos);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 10/24] pack-bitmap-write: reimplement bitmap writing
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (8 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 09/24] ewah: add bitmap_dup() function Taylor Blau
@ 2020-12-08  0:04   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
                     ` (14 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

The bitmap generation code works by iterating over the set of commits
for which we plan to write bitmaps, and then for each one performing a
traditional traversal over the reachable commits and trees, filling in
the bitmap. Between two traversals, we can often reuse the previous
bitmap result as long as the first commit is an ancestor of the second.
However, our worst case is that we may end up doing "n" complete
traversals to the root in order to create "n" bitmaps.

In a real-world case (the shared-storage repo consisting of all GitHub
forks of chromium/chromium), we perform very poorly: generating bitmaps
takes ~3 hours, whereas we can walk the whole object graph in ~3
minutes.

This commit completely rewrites the algorithm, with the goal of
accessing each object only once. It works roughly like this:

  - generate a list of commits in topo-order using a single traversal

  - invert the edges of the graph (so have parents point at their
    children)

  - make one pass in reverse topo-order, generating a bitmap for each
    commit and passing the result along to child nodes

We generate correct results because each node we visit has already had
all of its ancestors added to the bitmap. And we make only two linear
passes over the commits.
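
As a toy illustration of that propagation (a self-contained sketch, not
the code added below: commits are just small integers, newest first, and
each "reachability bitmap" is a single 64-bit word):

    #include <stdint.h>
    #include <stdio.h>

    #define NR_COMMITS 4

    int main(void)
    {
            /* parents[i] lists the parents of commit i (-1 = none) */
            int parents[NR_COMMITS][2] = {
                    { 1, 2 },   /* 0 is a merge of 1 and 2 */
                    { 3, -1 },  /* 1's parent is 3         */
                    { 3, -1 },  /* 2's parent is 3         */
                    { -1, -1 }, /* 3 is the root           */
            };
            uint64_t bitmap[NR_COMMITS] = { 0 };
            int i, child, p;

            /* reverse topo order: oldest (highest index) first */
            for (i = NR_COMMITS - 1; i >= 0; i--) {
                    /* "fill in" the commit itself; trees/blobs omitted */
                    bitmap[i] |= UINT64_C(1) << i;

                    /* push the finished bitmap along the inverted edges */
                    for (child = 0; child < i; child++)
                            for (p = 0; p < 2; p++)
                                    if (parents[child][p] == i)
                                            bitmap[child] |= bitmap[i];
            }

            for (i = 0; i < NR_COMMITS; i++)
                    printf("commit %d reaches %#llx\n", i,
                           (unsigned long long)bitmap[i]);
            return 0;
    }

Running it prints 0xf for the merge at the tip (it reaches everything)
and 0x8 for the root (only itself); each finished bitmap is ORed into
its children exactly once, just as in the pass described above.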

We also visit each tree usually only once. When filling in a bitmap, we
don't bother to recurse into trees whose bit is already set in the
bitmap (since we know we've already done so when setting their bit).
That means that if commit A references tree T, none of its descendants
will need to open T again. I say "usually", though, because it is
possible for a given tree to be mentioned in unrelated parts of history
(e.g., cherry-picking to a parallel branch).

So we've accomplished our goal, and the resulting algorithm is pretty
simple to understand. But there are some downsides, at least with this
initial implementation:

  - we no longer reuse the results of any on-disk bitmaps when
    generating. So we'd expect to sometimes be slower than the original
    when bitmaps already exist. However, this is something we'll be able
    to add back in later.

  - we use much more memory. Instead of keeping one bitmap in memory at
    a time, we're passing them up through the graph. So our memory use
    should scale with the graph width (times the size of a bitmap).

So how does it perform?

For a clone of linux.git, generating bitmaps from scratch with the old
algorithm took 63s. Using this algorithm it takes 205s. Which is much
worse, but _might_ be acceptable if it behaved linearly as the size
grew. It also increases peak heap usage by ~1G. That's not impossibly
large, but not encouraging.

On the complete fork-network of torvalds/linux, it increases the peak
RAM usage by 40GB. Yikes. (I forgot to record the time it took, but the
memory usage was too much to consider this reasonable anyway).

On the complete fork-network of chromium/chromium, I ran out of memory
before succeeding. Some back-of-the-envelope calculations indicate it
would need 80+GB to complete.

So at this stage, we've managed to make things much worse. But because
of the way this new algorithm is structured, there are a lot of
opportunities for optimization on top. We'll start implementing those in
the follow-on patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 306 +++++++++++++++++++++++++-------------------
 1 file changed, 172 insertions(+), 134 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 5e998bdaa7..bcd059ccd9 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -110,8 +110,6 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
 /**
  * Compute the actual bitmaps
  */
-static struct object **seen_objects;
-static unsigned int seen_objects_nr, seen_objects_alloc;
 
 static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
 {
@@ -127,21 +125,6 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	writer.selected_nr++;
 }
 
-static inline void mark_as_seen(struct object *object)
-{
-	ALLOC_GROW(seen_objects, seen_objects_nr + 1, seen_objects_alloc);
-	seen_objects[seen_objects_nr++] = object;
-}
-
-static inline void reset_all_seen(void)
-{
-	unsigned int i;
-	for (i = 0; i < seen_objects_nr; ++i) {
-		seen_objects[i]->flags &= ~(SEEN | ADDED | SHOWN);
-	}
-	seen_objects_nr = 0;
-}
-
 static uint32_t find_object_pos(const struct object_id *oid)
 {
 	struct object_entry *entry = packlist_find(writer.to_pack, oid);
@@ -154,60 +137,6 @@ static uint32_t find_object_pos(const struct object_id *oid)
 	return oe_in_pack_pos(writer.to_pack, entry);
 }
 
-static void show_object(struct object *object, const char *name, void *data)
-{
-	struct bitmap *base = data;
-	bitmap_set(base, find_object_pos(&object->oid));
-	mark_as_seen(object);
-}
-
-static void show_commit(struct commit *commit, void *data)
-{
-	mark_as_seen((struct object *)commit);
-}
-
-static int
-add_to_include_set(struct bitmap *base, struct commit *commit)
-{
-	khiter_t hash_pos;
-	uint32_t bitmap_pos = find_object_pos(&commit->object.oid);
-
-	if (bitmap_get(base, bitmap_pos))
-		return 0;
-
-	hash_pos = kh_get_oid_map(writer.bitmaps, commit->object.oid);
-	if (hash_pos < kh_end(writer.bitmaps)) {
-		struct bitmapped_commit *bc = kh_value(writer.bitmaps, hash_pos);
-		bitmap_or_ewah(base, bc->bitmap);
-		return 0;
-	}
-
-	bitmap_set(base, bitmap_pos);
-	return 1;
-}
-
-static int
-should_include(struct commit *commit, void *_data)
-{
-	struct bitmap *base = _data;
-
-	if (!add_to_include_set(base, commit)) {
-		struct commit_list *parent = commit->parents;
-
-		mark_as_seen((struct object *)commit);
-
-		while (parent) {
-			parent->item->object.flags |= SEEN;
-			mark_as_seen((struct object *)parent->item);
-			parent = parent->next;
-		}
-
-		return 0;
-	}
-
-	return 1;
-}
-
 static void compute_xor_offsets(void)
 {
 	static const int MAX_XOR_OFFSET_SEARCH = 10;
@@ -248,79 +177,188 @@ static void compute_xor_offsets(void)
 	}
 }
 
-void bitmap_writer_build(struct packing_data *to_pack)
+struct bb_commit {
+	struct commit_list *children;
+	struct bitmap *bitmap;
+	unsigned selected:1;
+	unsigned idx; /* within selected array */
+};
+
+define_commit_slab(bb_data, struct bb_commit);
+
+struct bitmap_builder {
+	struct bb_data data;
+	struct commit **commits;
+	size_t commits_nr, commits_alloc;
+};
+
+static void bitmap_builder_init(struct bitmap_builder *bb,
+				struct bitmap_writer *writer)
 {
-	static const double REUSE_BITMAP_THRESHOLD = 0.2;
-
-	int i, reuse_after, need_reset;
-	struct bitmap *base = bitmap_new();
 	struct rev_info revs;
+	struct commit *commit;
+	unsigned int i;
+
+	memset(bb, 0, sizeof(*bb));
+	init_bb_data(&bb->data);
+
+	reset_revision_walk();
+	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
+	revs.topo_order = 1;
+
+	for (i = 0; i < writer->selected_nr; i++) {
+		struct commit *c = writer->selected[i].commit;
+		struct bb_commit *ent = bb_data_at(&bb->data, c);
+		ent->selected = 1;
+		ent->idx = i;
+		add_pending_object(&revs, &c->object, "");
+	}
+
+	if (prepare_revision_walk(&revs))
+		die("revision walk setup failed");
+
+	while ((commit = get_revision(&revs))) {
+		struct commit_list *p;
+
+		parse_commit_or_die(commit);
+
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = commit;
+
+		for (p = commit->parents; p; p = p->next) {
+			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
+			commit_list_insert(commit, &ent->children);
+		}
+	}
+}
+
+static void bitmap_builder_clear(struct bitmap_builder *bb)
+{
+	clear_bb_data(&bb->data);
+	free(bb->commits);
+	bb->commits_nr = bb->commits_alloc = 0;
+}
+
+static void fill_bitmap_tree(struct bitmap *bitmap,
+			     struct tree *tree)
+{
+	uint32_t pos;
+	struct tree_desc desc;
+	struct name_entry entry;
+
+	/*
+	 * If our bit is already set, then there is nothing to do. Both this
+	 * tree and all of its children will be set.
+	 */
+	pos = find_object_pos(&tree->object.oid);
+	if (bitmap_get(bitmap, pos))
+		return;
+	bitmap_set(bitmap, pos);
+
+	if (parse_tree(tree) < 0)
+		die("unable to load tree object %s",
+		    oid_to_hex(&tree->object.oid));
+	init_tree_desc(&desc, tree->buffer, tree->size);
+
+	while (tree_entry(&desc, &entry)) {
+		switch (object_type(entry.mode)) {
+		case OBJ_TREE:
+			fill_bitmap_tree(bitmap,
+					 lookup_tree(the_repository, &entry.oid));
+			break;
+		case OBJ_BLOB:
+			bitmap_set(bitmap, find_object_pos(&entry.oid));
+			break;
+		default:
+			/* Gitlink, etc; not reachable */
+			break;
+		}
+	}
+
+	free_tree_buffer(tree);
+}
+
+static void fill_bitmap_commit(struct bb_commit *ent,
+			       struct commit *commit)
+{
+	if (!ent->bitmap)
+		ent->bitmap = bitmap_new();
+
+	/*
+	 * mark ourselves, but do not bother with parents; their values
+	 * will already have been propagated to us
+	 */
+	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
+	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+}
+
+static void store_selected(struct bb_commit *ent, struct commit *commit)
+{
+	struct bitmapped_commit *stored = &writer.selected[ent->idx];
+	khiter_t hash_pos;
+	int hash_ret;
+
+	/*
+	 * the "reuse bitmaps" phase may have stored something here, but
+	 * our new algorithm doesn't use it. Drop it.
+	 */
+	if (stored->bitmap)
+		ewah_free(stored->bitmap);
+
+	stored->bitmap = bitmap_to_ewah(ent->bitmap);
+
+	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
+	if (hash_ret == 0)
+		die("Duplicate entry when writing index: %s",
+		    oid_to_hex(&commit->object.oid));
+	kh_value(writer.bitmaps, hash_pos) = stored;
+}
+
+void bitmap_writer_build(struct packing_data *to_pack)
+{
+	struct bitmap_builder bb;
+	size_t i;
+	int nr_stored = 0; /* for progress */
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
 
 	if (writer.show_progress)
 		writer.progress = start_progress("Building bitmaps", writer.selected_nr);
-
-	repo_init_revisions(to_pack->repo, &revs, NULL);
-	revs.tag_objects = 1;
-	revs.tree_objects = 1;
-	revs.blob_objects = 1;
-	revs.no_walk = 0;
-
-	revs.include_check = should_include;
-	reset_revision_walk();
-
-	reuse_after = writer.selected_nr * REUSE_BITMAP_THRESHOLD;
-	need_reset = 0;
-
-	for (i = writer.selected_nr - 1; i >= 0; --i) {
-		struct bitmapped_commit *stored;
-		struct object *object;
-
-		khiter_t hash_pos;
-		int hash_ret;
-
-		stored = &writer.selected[i];
-		object = (struct object *)stored->commit;
-
-		if (stored->bitmap == NULL) {
-			if (i < writer.selected_nr - 1 &&
-			    (need_reset ||
-			     !in_merge_bases(writer.selected[i + 1].commit,
-					     stored->commit))) {
-			    bitmap_reset(base);
-			    reset_all_seen();
-			}
-
-			add_pending_object(&revs, object, "");
-			revs.include_check_data = base;
-
-			if (prepare_revision_walk(&revs))
-				die("revision walk setup failed");
-
-			traverse_commit_list(&revs, show_commit, show_object, base);
-
-			object_array_clear(&revs.pending);
-
-			stored->bitmap = bitmap_to_ewah(base);
-			need_reset = 0;
-		} else
-			need_reset = 1;
-
-		if (i >= reuse_after)
-			stored->flags |= BITMAP_FLAG_REUSE;
-
-		hash_pos = kh_put_oid_map(writer.bitmaps, object->oid, &hash_ret);
-		if (hash_ret == 0)
-			die("Duplicate entry when writing index: %s",
-			    oid_to_hex(&object->oid));
-
-		kh_value(writer.bitmaps, hash_pos) = stored;
-		display_progress(writer.progress, writer.selected_nr - i);
+	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
+			    the_repository);
+
+	bitmap_builder_init(&bb, &writer);
+	for (i = bb.commits_nr; i > 0; i--) {
+		struct commit *commit = bb.commits[i-1];
+		struct bb_commit *ent = bb_data_at(&bb.data, commit);
+		struct commit *child;
+
+		fill_bitmap_commit(ent, commit);
+
+		if (ent->selected) {
+			store_selected(ent, commit);
+			nr_stored++;
+			display_progress(writer.progress, nr_stored);
+		}
+
+		while ((child = pop_commit(&ent->children))) {
+			struct bb_commit *child_ent =
+				bb_data_at(&bb.data, child);
+
+			if (child_ent->bitmap)
+				bitmap_or(child_ent->bitmap, ent->bitmap);
+			else
+				child_ent->bitmap = bitmap_dup(ent->bitmap);
+		}
+		bitmap_free(ent->bitmap);
+		ent->bitmap = NULL;
 	}
+	bitmap_builder_clear(&bb);
+
+	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
+			    the_repository);
 
-	bitmap_free(base);
 	stop_progress(&writer.progress);
 
 	compute_xor_offsets();
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (9 preceding siblings ...)
  2020-12-08  0:04   ` [PATCH v3 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
                     ` (13 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

Our algorithm to generate reachability bitmaps walks through the commit
graph from the bottom up, passing bitmap data from each commit to its
descendants. For a linear stretch of history like:

  A -- B -- C

our sequence of steps is:

  - compute the bitmap for A by walking its trees, etc

  - duplicate A's bitmap as a starting point for B; we can now free A's
    bitmap, since we only needed it as an intermediate result

  - OR in any extra objects that B can reach into its bitmap

  - duplicate B's bitmap as a starting point for C; likewise, free B's
    bitmap

  - OR in objects for C, and so on...

Rather than duplicating bitmaps and immediately freeing the original, we
can just pass ownership from commit to commit. Note that this doesn't
always work:

  - the recipient may be a merge which already has an intermediate
    bitmap from its other ancestor. In that case we have to OR our
    result into it. Note that the first ancestor to reach the merge does
    get to pass ownership, though.

  - we may have multiple children; we can only pass ownership to one of
    them

However, it happens often enough and copying bitmaps is expensive enough
that this provides a noticeable speedup. On a clone of linux.git, this
reduces the time to generate bitmaps from 205s to 70s. This is about the
same amount of time it took to generate bitmaps using our old "many
traversals" algorithm (the previous commit measures the identical
scenario as taking 63s). It unfortunately provides only a very modest
reduction in the peak memory usage, though.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index bcd059ccd9..1eb9615df8 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -333,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
 		struct commit *child;
+		int reused = 0;
 
 		fill_bitmap_commit(ent, commit);
 
@@ -348,10 +349,15 @@ void bitmap_writer_build(struct packing_data *to_pack)
 
 			if (child_ent->bitmap)
 				bitmap_or(child_ent->bitmap, ent->bitmap);
-			else
+			else if (reused)
 				child_ent->bitmap = bitmap_dup(ent->bitmap);
+			else {
+				child_ent->bitmap = ent->bitmap;
+				reused = 1;
+			}
 		}
-		bitmap_free(ent->bitmap);
+		if (!reused)
+			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
 	bitmap_builder_clear(&bb);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (10 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
                     ` (12 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The current implementation of bitmap_writer_build() creates a
reachability bitmap for every walked commit. After computing a bitmap
for a commit, those bits are pushed to an in-progress bitmap for its
children.

fill_bitmap_commit() assumes the bits corresponding to objects
reachable from the parents of a commit are already set. This means that
when visiting a new commit, we only have to walk the objects reachable
between it and any of its parents.

A future change to bitmap_writer_build() will relax this condition so
not all parents have their bits set. Prepare for that by having
'fill_bitmap_commit()' walk parents until reaching commits whose bits
are already set. Then, walk the trees for these commits as well.

This has no functional change with the current implementation of
bitmap_writer_build().

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 1eb9615df8..957639241e 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -12,6 +12,7 @@
 #include "sha1-lookup.h"
 #include "pack-objects.h"
 #include "commit-reach.h"
+#include "prio-queue.h"
 
 struct bitmapped_commit {
 	struct commit *commit;
@@ -279,17 +280,30 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 }
 
 static void fill_bitmap_commit(struct bb_commit *ent,
-			       struct commit *commit)
+			       struct commit *commit,
+			       struct prio_queue *queue)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	/*
-	 * mark ourselves, but do not bother with parents; their values
-	 * will already have been propagated to us
-	 */
 	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
-	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+	prio_queue_put(queue, commit);
+
+	while (queue->nr) {
+		struct commit_list *p;
+		struct commit *c = prio_queue_get(queue);
+
+		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
+		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+
+		for (p = c->parents; p; p = p->next) {
+			int pos = find_object_pos(&p->item->object.oid);
+			if (!bitmap_get(ent->bitmap, pos)) {
+				bitmap_set(ent->bitmap, pos);
+				prio_queue_put(queue, p->item);
+			}
+		}
+	}
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -319,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	struct bitmap_builder bb;
 	size_t i;
 	int nr_stored = 0; /* for progress */
+	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -335,7 +350,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit);
+		fill_bitmap_commit(ent, commit, &queue);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -360,6 +375,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
+	clear_prio_queue(&queue);
 	bitmap_builder_clear(&bb);
 
 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 13/24] bitmap: implement bitmap_is_subset()
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (11 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 14/24] commit: implement commit_list_contains() Taylor Blau
                     ` (11 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_is_subset() function checks if the 'self' bitmap contains any
bits that are not on in the 'other' bitmap. Up until this patch, it
had a declaration, but no implementation or callers. A subsequent patch
will want this function, so implement it here.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 21 +++++++++++++++++++++
 ewah/ewok.h   |  2 +-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index b5f6376282..0d31cdc866 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -195,6 +195,27 @@ int bitmap_equals(struct bitmap *self, struct bitmap *other)
 	return 1;
 }
 
+int bitmap_is_subset(struct bitmap *self, struct bitmap *other)
+{
+	size_t common_size, i;
+
+	if (self->word_alloc < other->word_alloc)
+		common_size = self->word_alloc;
+	else {
+		common_size = other->word_alloc;
+		for (i = common_size; i < self->word_alloc; i++) {
+			if (self->words[i])
+				return 1;
+		}
+	}
+
+	for (i = 0; i < common_size; i++) {
+		if (self->words[i] & ~other->words[i])
+			return 1;
+	}
+	return 0;
+}
+
 void bitmap_reset(struct bitmap *bitmap)
 {
 	memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 1fc555e672..66920965da 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -180,7 +180,7 @@ int bitmap_get(struct bitmap *self, size_t pos);
 void bitmap_reset(struct bitmap *self);
 void bitmap_free(struct bitmap *self);
 int bitmap_equals(struct bitmap *self, struct bitmap *other);
-int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
+int bitmap_is_subset(struct bitmap *self, struct bitmap *other);
 
 struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
 struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 14/24] commit: implement commit_list_contains()
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (12 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 15/24] t5310: add branch-based checks Taylor Blau
                     ` (10 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

It can be helpful to check if a commit_list contains a commit. Use
pointer equality, assuming lookup_commit() was used.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 commit.c | 11 +++++++++++
 commit.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/commit.c b/commit.c
index fe1fa3dc41..9a785bf906 100644
--- a/commit.c
+++ b/commit.c
@@ -544,6 +544,17 @@ struct commit_list *commit_list_insert(struct commit *item, struct commit_list *
 	return new_list;
 }
 
+int commit_list_contains(struct commit *item, struct commit_list *list)
+{
+	while (list) {
+		if (list->item == item)
+			return 1;
+		list = list->next;
+	}
+
+	return 0;
+}
+
 unsigned commit_list_count(const struct commit_list *l)
 {
 	unsigned c = 0;
diff --git a/commit.h b/commit.h
index 5467786c7b..742a6de460 100644
--- a/commit.h
+++ b/commit.h
@@ -167,6 +167,8 @@ int find_commit_subject(const char *commit_buffer, const char **subject);
 
 struct commit_list *commit_list_insert(struct commit *item,
 					struct commit_list **list);
+int commit_list_contains(struct commit *item,
+			 struct commit_list *list);
 struct commit_list **commit_list_append(struct commit *commit,
 					struct commit_list **next);
 unsigned commit_list_count(const struct commit_list *l);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 15/24] t5310: add branch-based checks
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (13 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 14/24] commit: implement commit_list_contains() Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
                     ` (9 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The current rev-list tests that check the bitmap data only work on HEAD
instead of multiple branches. Expand the test cases to handle both
'master' and 'other' branches.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 61 +++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 8a2a3b2114..b1248f1cc8 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -41,63 +41,70 @@ test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
 	git rev-list --test-bitmap HEAD
 '
 
-rev_list_tests() {
-	state=$1
-
-	test_expect_success "counting commits via bitmap ($state)" '
-		git rev-list --count HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD >actual &&
+rev_list_tests_head () {
+	test_expect_success "counting commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch >expect &&
+		git rev-list --use-bitmap-index --count $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting partial commits via bitmap ($state)" '
-		git rev-list --count HEAD~5..HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD~5..HEAD >actual &&
+	test_expect_success "counting partial commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch~5..$branch >expect &&
+		git rev-list --use-bitmap-index --count $branch~5..$branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limit ($state)" '
-		git rev-list --count -n 1 HEAD >expect &&
-		git rev-list --use-bitmap-index --count -n 1 HEAD >actual &&
+	test_expect_success "counting commits with limit ($state, $branch)" '
+		git rev-list --count -n 1 $branch >expect &&
+		git rev-list --use-bitmap-index --count -n 1 $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting non-linear history ($state)" '
+	test_expect_success "counting non-linear history ($state, $branch)" '
 		git rev-list --count other...master >expect &&
 		git rev-list --use-bitmap-index --count other...master >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limiting ($state)" '
-		git rev-list --count HEAD -- 1.t >expect &&
-		git rev-list --use-bitmap-index --count HEAD -- 1.t >actual &&
+	test_expect_success "counting commits with limiting ($state, $branch)" '
+		git rev-list --count $branch -- 1.t >expect &&
+		git rev-list --use-bitmap-index --count $branch -- 1.t >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting objects via bitmap ($state)" '
-		git rev-list --count --objects HEAD >expect &&
-		git rev-list --use-bitmap-index --count --objects HEAD >actual &&
+	test_expect_success "counting objects via bitmap ($state, $branch)" '
+		git rev-list --count --objects $branch >expect &&
+		git rev-list --use-bitmap-index --count --objects $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "enumerate commits ($state)" '
-		git rev-list --use-bitmap-index HEAD >actual &&
-		git rev-list HEAD >expect &&
+	test_expect_success "enumerate commits ($state, $branch)" '
+		git rev-list --use-bitmap-index $branch >actual &&
+		git rev-list $branch >expect &&
 		test_bitmap_traversal --no-confirm-bitmaps expect actual
 	'
 
-	test_expect_success "enumerate --objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD >actual &&
-		git rev-list --objects HEAD >expect &&
+	test_expect_success "enumerate --objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch >actual &&
+		git rev-list --objects $branch >expect &&
 		test_bitmap_traversal expect actual
 	'
 
-	test_expect_success "bitmap --objects handles non-commit objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD tagged-blob >actual &&
+	test_expect_success "bitmap --objects handles non-commit objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch tagged-blob >actual &&
 		grep $blob actual
 	'
 }
 
+rev_list_tests () {
+	state=$1
+
+	for branch in "master" "other"
+	do
+		rev_list_tests_head
+	done
+}
+
 rev_list_tests 'full bitmap'
 
 test_expect_success 'clone from bitmapped repository' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 16/24] pack-bitmap-write: rename children to reverse_edges
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (14 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 15/24] t5310: add branch-based checks Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
                     ` (8 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_builder_init() method walks the reachable commits in
topological order and constructs a "reverse graph" along the way. At the
moment, this reverse graph contains an edge from commit A to commit B if
and only if A is a parent of B. Thus, the name "children" is appropriate
for this reverse graph.

In the next change, we will repurpose the reverse graph to not be
directly-adjacent commits in the commit-graph, but instead a more
abstract relationship. The previous changes have already incorporated
the necessary updates to fill_bitmap_commit() that allow these edges to
not be immediate children.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 957639241e..7e218d02a6 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -179,7 +179,7 @@ static void compute_xor_offsets(void)
 }
 
 struct bb_commit {
-	struct commit_list *children;
+	struct commit_list *reverse_edges;
 	struct bitmap *bitmap;
 	unsigned selected:1;
 	unsigned idx; /* within selected array */
@@ -228,7 +228,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		for (p = commit->parents; p; p = p->next) {
 			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->children);
+			commit_list_insert(commit, &ent->reverse_edges);
 		}
 	}
 }
@@ -358,7 +358,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			display_progress(writer.progress, nr_stored);
 		}
 
-		while ((child = pop_commit(&ent->children))) {
+		while ((child = pop_commit(&ent->reverse_edges))) {
 			struct bb_commit *child_ent =
 				bb_data_at(&bb.data, child);
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 17/24] pack-bitmap.c: check reads more aggressively when loading
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (15 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
                     ` (7 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

Before 'load_bitmap_entries_v1()' reads an actual EWAH bitmap, it should
check that it can safely do so by ensuring that there are at least 6
bytes available to be read (four for the commit's index position, and
then two more for the xor offset and flags, respectively).
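
For reference, the fixed-width part of each on-disk entry that this
6-byte check guards looks like this (the variable-length ewah data that
follows is bounds-checked separately when it is actually read):

    4 bytes   commit index position (network byte order)
    1 byte    xor offset
    1 byte    flags
    N bytes   ewah-compressed bitmap (not covered by this check)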

Likewise, it should check that the commit index it read refers to a
legitimate object in the pack.

The first fix catches a truncation bug that was exposed when testing,
and the second is purely precautionary.

There are some possible future improvements, not pursued here. They are:

  - Computing the correct boundary of the bitmap itself in the caller
    and ensuring that we don't read past it. This may or may not be
    worth it, since in a truncation situation, all bets are off: (is the
    trailer still there and the bitmap entries malformed, or is the
    trailer truncated?). The best we can do is try to read what's there
    as if it's correct data (and protect ourselves when it's obviously
    bogus).

  - Avoid the magic "6" by teaching read_be32() and read_u8() (both of
    which are custom helpers for this function) to check sizes before
    advancing the pointers.

  - Adding more tests in this area. Testing these truncation situations
    is remarkably fragile to even subtle changes in the bitmap
    generation. So, the resulting tests are likely to be quite brittle.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4431f9f120..60c781d100 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -229,11 +229,16 @@ static int load_bitmap_entries_v1(struct bitmap_index *index)
 		uint32_t commit_idx_pos;
 		struct object_id oid;
 
+		if (index->map_size - index->map_pos < 6)
+			return error("corrupt ewah bitmap: truncated header for entry %d", i);
+
 		commit_idx_pos = read_be32(index->map, &index->map_pos);
 		xor_offset = read_u8(index->map, &index->map_pos);
 		flags = read_u8(index->map, &index->map_pos);
 
-		nth_packed_object_id(&oid, index->pack, commit_idx_pos);
+		if (nth_packed_object_id(&oid, index->pack, commit_idx_pos) < 0)
+			return error("corrupt ewah bitmap: commit index %u out of range",
+				     (unsigned)commit_idx_pos);
 
 		bitmap = read_bitmap_1(index);
 		if (!bitmap)
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (16 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
                     ` (6 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
instead focus on selecting each commit that is maximal among commits
with the same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is maximal but not
one of the originally-selected commits, then assign a bit position to C
(by incrementing num_maximal) and set that bit on in its commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C.

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.

The final thing we need to do is to update the reverse edges for P.
These reverse edges represent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and, since C is not maximal, this list has exactly one
element). Add all of these items to P's reverse_edges list. Be careful
to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.
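
To make the parent-update rule concrete, here is a tiny standalone
sketch of it using plain 64-bit masks; the type and function names are
made up, and the real code uses growable bitmaps and also maintains the
reverse_edges lists, which this sketch omits:

  #include <stdint.h>

  struct toy_commit {
          uint64_t commit_mask; /* bit i set: important commit i reaches me */
          int maximal;
  };

  /* Push child C's bits to its parent P, following the rules above. */
  static void push_mask_to_parent(const struct toy_commit *c,
                                  struct toy_commit *p)
  {
          int c_not_p = (c->commit_mask & ~p->commit_mask) != 0;
          int p_not_c = (p->commit_mask & ~c->commit_mask) != 0;

          if (!c_not_p)
                  return; /* some other child already gave P these bits */

          p->commit_mask |= c->commit_mask;

          /*
           * P stays (tentatively) maximal only if it had bits that C
           * lacks; otherwise a child covers P's mask exactly, so P is
           * demoted and would also have its reverse edges replaced.
           */
          p->maximal = p_not_c;
  }

In the real code the masks are arbitrary-width bitmaps, so the same
comparisons are done with bitmap_is_subset() and bitmap_or().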

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).
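
As a rough mock-up of this phase (toy types and names, not the actual
pack-bitmap-write.c code), the reverse-graph walk amounts to visiting
the maximal commits ancestors-first and OR-ing each finished bitmap
into its reverse-edge targets:

  #include <stdint.h>
  #include <stddef.h>

  struct toy_node {
          uint64_t bits;            /* toy stand-in for the object bitmap */
          size_t nr_edges;
          struct toy_node **edges;  /* reverse edges: commits that reach me */
  };

  /* nodes[] is ordered so a commit comes before anything that reaches it. */
  static void fill_bitmaps(struct toy_node **nodes, size_t nr)
  {
          size_t i, j;

          for (i = 0; i < nr; i++) {
                  struct toy_node *n = nodes[i];

                  /*
                   * ... here the real code walks commits and trees to
                   * add any objects not already covered by the bits
                   * pushed up from previously visited commits ...
                   */

                  /* push the finished bits to each reverse-edge target */
                  for (j = 0; j < n->nr_edges; j++)
                          n->edges[j]->bits |= n->bits;
          }
  }

The real loop in bitmap_writer_build() works the same way, except that
it also avoids some copies by passing ownership of a finished bitmap to
one of its reverse-edge targets (see the earlier "pass ownership of
intermediate bitmaps" patch).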

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is the
existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     \/
     Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N is assigned
its own bit in the bitmask and hence becomes uniquely maximal for its
bitmask. Finally, Q also becomes maximal and thus we do not need to
walk the commits reachable from it multiple times. The final bitmasks
for these commits are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:11111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any issues with existing bitmaps skewing the numbers.

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we've not only recovered the time lost to switching to the
reverse-edge algorithm, but we also come out ahead of "multi" in all
cases. Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 85 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 9 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 7e218d02a6..0af93193d8 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)
 
 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };
 
@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
 
-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}
 
 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_is_subset(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_is_subset(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index b1248f1cc8..4c928221be 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,11 +20,87 @@ has_any () {
 	grep -Ff "$1" "$2"
 }
 
+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \___________________________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge master~2 -m "merge-left" &&
+
+	git checkout -b merge-right master~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-master master &&
+	git merge merge-left merge-right -m "octopus-master" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout master &&
+	git merge octo-master -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-master &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout master &&
+
 	bitmaptip=$(git rev-parse master) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -32,9 +108,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '
 
 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (17 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
                     ` (5 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

The on-disk bitmap format has a flag to mark a bitmap to be "reused".
This is a rather curious feature, and works like this:

  - a run of pack-objects would decide to mark the last 80% of the
    bitmaps it generates with the reuse flag

  - the next time we generate bitmaps, we'd see those reuse flags from
    the last run, and mark those commits as special:

      - we'd be more likely to select those commits to get bitmaps in
        the new output

      - when generating the bitmap for a selected commit, we'd reuse the
        old bitmap as-is (rearranging the bits to match the new pack, of
        course)

However, neither of these behaviors particularly makes sense.

Just because a commit happened to be bitmapped last time does not make
it a good candidate for having a bitmap this time. In particular, we may
choose bitmaps based on how recent they are in history, or whether a ref
tip points to them, and those things will change. We're better off
re-considering fresh which commits are good candidates.

Reusing the existing bitmap _is_ a reasonable thing to do to save
computation. But only reusing exact bitmaps is a weak form of this. If
we have an old bitmap for A and now want a new bitmap for its child, we
should be able to compute that by looking only at the trees and commits
that are new to the child. But this code would consider only exact
reuse (which is perhaps why it was eager to select those commits in the
first place).

Furthermore, the recent switch to the reverse-edge algorithm for
generating bitmaps dropped this optimization entirely (and yet still
performs better).

So let's do a few cleanups:

 - drop the whole "reusing bitmaps" phase of generating bitmaps. It's
   not helping anything, and is mostly unused code (or worse, code that
   is using CPU but not doing anything useful)

 - drop the use of the on-disk reuse flag to select commits to bitmap

 - stop setting the on-disk reuse flag in bitmaps we generate (since
   nothing respects it anymore)

We will keep a few innards of the reuse code, which will help us
implement a more capable version of the "reuse" optimization:

 - simplify rebuild_existing_bitmaps() into a function that only builds
   the mapping of bits between the old and new orders, but doesn't
   actually convert any bitmaps

 - make rebuild_bitmap() public; we'll call it lazily to convert bitmaps
   as we traverse (using the mapping created above)
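
As a rough illustration of that mapping idea (toy names, and plain byte
arrays instead of EWAH bitmaps; the real helpers are
create_bitmap_mapping() and rebuild_bitmap()), translating a bitmap
between the old and new pack orders boils down to:

  #include <stdint.h>
  #include <stddef.h>

  /*
   * reposition[old_pos] holds 1 + the object's position in the new
   * pack, or 0 if the object is not present there. Translate each set
   * bit of the old bitmap into the new bit order, failing if any
   * object has gone missing.
   */
  static int translate_bitmap(const uint32_t *reposition,
                              const uint8_t *old_bits, size_t old_nr,
                              uint8_t *new_bits)
  {
          size_t old_pos;

          for (old_pos = 0; old_pos < old_nr; old_pos++) {
                  if (!old_bits[old_pos])
                          continue;
                  if (!reposition[old_pos])
                          return -1; /* object vanished from the new pack */
                  new_bits[reposition[old_pos] - 1] = 1;
          }
          return 0;
  }

The real rebuild_bitmap() walks the EWAH-compressed words of the source
bitmap rather than a flat byte array, but the translation step is the
same.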

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 builtin/pack-objects.c |  1 -
 pack-bitmap-write.c    | 50 +++++-------------------------------------
 pack-bitmap.c          | 46 +++++---------------------------------
 pack-bitmap.h          |  6 ++++-
 4 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 5617c01b5a..2a00358f34 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1104,7 +1104,6 @@ static void write_pack_file(void)
 				stop_progress(&progress_state);
 
 				bitmap_writer_show_progress(progress);
-				bitmap_writer_reuse_bitmaps(&to_pack);
 				bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
 				bitmap_writer_build(&to_pack);
 				bitmap_writer_finish(written_list, nr_written,
diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 0af93193d8..333058854d 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -30,7 +30,6 @@ struct bitmap_writer {
 	struct ewah_bitmap *tags;
 
 	kh_oid_map_t *bitmaps;
-	kh_oid_map_t *reused;
 	struct packing_data *to_pack;
 
 	struct bitmapped_commit *selected;
@@ -112,7 +111,7 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
  * Compute the actual bitmaps
  */
 
-static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
+static inline void push_bitmapped_commit(struct commit *commit)
 {
 	if (writer.selected_nr >= writer.selected_alloc) {
 		writer.selected_alloc = (writer.selected_alloc + 32) * 2;
@@ -120,7 +119,7 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	}
 
 	writer.selected[writer.selected_nr].commit = commit;
-	writer.selected[writer.selected_nr].bitmap = reused;
+	writer.selected[writer.selected_nr].bitmap = NULL;
 	writer.selected[writer.selected_nr].flags = 0;
 
 	writer.selected_nr++;
@@ -372,13 +371,6 @@ static void store_selected(struct bb_commit *ent, struct commit *commit)
 	khiter_t hash_pos;
 	int hash_ret;
 
-	/*
-	 * the "reuse bitmaps" phase may have stored something here, but
-	 * our new algorithm doesn't use it. Drop it.
-	 */
-	if (stored->bitmap)
-		ewah_free(stored->bitmap);
-
 	stored->bitmap = bitmap_to_ewah(ent->bitmap);
 
 	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
@@ -480,35 +472,6 @@ static int date_compare(const void *_a, const void *_b)
 	return (long)b->date - (long)a->date;
 }
 
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack)
-{
-	struct bitmap_index *bitmap_git;
-	if (!(bitmap_git = prepare_bitmap_git(to_pack->repo)))
-		return;
-
-	writer.reused = kh_init_oid_map();
-	rebuild_existing_bitmaps(bitmap_git, to_pack, writer.reused,
-				 writer.show_progress);
-	/*
-	 * NEEDSWORK: rebuild_existing_bitmaps() makes writer.reused reference
-	 * some bitmaps in bitmap_git, so we can't free the latter.
-	 */
-}
-
-static struct ewah_bitmap *find_reused_bitmap(const struct object_id *oid)
-{
-	khiter_t hash_pos;
-
-	if (!writer.reused)
-		return NULL;
-
-	hash_pos = kh_get_oid_map(writer.reused, *oid);
-	if (hash_pos >= kh_end(writer.reused))
-		return NULL;
-
-	return kh_value(writer.reused, hash_pos);
-}
-
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 				  unsigned int indexed_commits_nr,
 				  int max_bitmaps)
@@ -522,12 +485,11 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 	if (indexed_commits_nr < 100) {
 		for (i = 0; i < indexed_commits_nr; ++i)
-			push_bitmapped_commit(indexed_commits[i], NULL);
+			push_bitmapped_commit(indexed_commits[i]);
 		return;
 	}
 
 	for (;;) {
-		struct ewah_bitmap *reused_bitmap = NULL;
 		struct commit *chosen = NULL;
 
 		next = next_commit_index(i);
@@ -542,15 +504,13 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 		if (next == 0) {
 			chosen = indexed_commits[i];
-			reused_bitmap = find_reused_bitmap(&chosen->object.oid);
 		} else {
 			chosen = indexed_commits[i + next];
 
 			for (j = 0; j <= next; ++j) {
 				struct commit *cm = indexed_commits[i + j];
 
-				reused_bitmap = find_reused_bitmap(&cm->object.oid);
-				if (reused_bitmap || (cm->object.flags & NEEDS_BITMAP) != 0) {
+				if ((cm->object.flags & NEEDS_BITMAP) != 0) {
 					chosen = cm;
 					break;
 				}
@@ -560,7 +520,7 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 			}
 		}
 
-		push_bitmapped_commit(chosen, reused_bitmap);
+		push_bitmapped_commit(chosen);
 
 		i += next + 1;
 		display_progress(writer.progress, i);
diff --git a/pack-bitmap.c b/pack-bitmap.c
index 60c781d100..d1368b69bb 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1338,9 +1338,9 @@ void test_bitmap_walk(struct rev_info *revs)
 	free_bitmap_index(bitmap_git);
 }
 
-static int rebuild_bitmap(uint32_t *reposition,
-			  struct ewah_bitmap *source,
-			  struct bitmap *dest)
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest)
 {
 	uint32_t pos = 0;
 	struct ewah_iterator it;
@@ -1369,19 +1369,11 @@ static int rebuild_bitmap(uint32_t *reposition,
 	return 0;
 }
 
-int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
-			     struct packing_data *mapping,
-			     kh_oid_map_t *reused_bitmaps,
-			     int show_progress)
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping)
 {
 	uint32_t i, num_objects;
 	uint32_t *reposition;
-	struct bitmap *rebuild;
-	struct stored_bitmap *stored;
-	struct progress *progress = NULL;
-
-	khiter_t hash_pos;
-	int hash_ret;
 
 	num_objects = bitmap_git->pack->num_objects;
 	reposition = xcalloc(num_objects, sizeof(uint32_t));
@@ -1399,33 +1391,7 @@ int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
 			reposition[i] = oe_in_pack_pos(mapping, oe) + 1;
 	}
 
-	rebuild = bitmap_new();
-	i = 0;
-
-	if (show_progress)
-		progress = start_progress("Reusing bitmaps", 0);
-
-	kh_foreach_value(bitmap_git->bitmaps, stored, {
-		if (stored->flags & BITMAP_FLAG_REUSE) {
-			if (!rebuild_bitmap(reposition,
-					    lookup_stored_bitmap(stored),
-					    rebuild)) {
-				hash_pos = kh_put_oid_map(reused_bitmaps,
-							  stored->oid,
-							  &hash_ret);
-				kh_value(reused_bitmaps, hash_pos) =
-					bitmap_to_ewah(rebuild);
-			}
-			bitmap_reset(rebuild);
-			display_progress(progress, ++i);
-		}
-	});
-
-	stop_progress(&progress);
-
-	free(reposition);
-	bitmap_free(rebuild);
-	return 0;
+	return reposition;
 }
 
 void free_bitmap_index(struct bitmap_index *b)
diff --git a/pack-bitmap.h b/pack-bitmap.h
index 1203120c43..afa4115136 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -73,7 +73,11 @@ void bitmap_writer_set_checksum(unsigned char *sha1);
 void bitmap_writer_build_type_index(struct packing_data *to_pack,
 				    struct pack_idx_entry **index,
 				    uint32_t index_nr);
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack);
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping);
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 20/24] pack-bitmap: factor out 'bitmap_for_commit()'
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (18 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
                     ` (4 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

A couple of callers within pack-bitmap.c duplicate logic to lookup a
given object id in the bitmaps khash. Factor this out into a new
function, 'bitmap_for_commit()' to reduce some code duplication.

Make this new function non-static, since it will be used in later
commits from outside of pack-bitmap.c.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 33 +++++++++++++++++++--------------
 pack-bitmap.h |  2 ++
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index d1368b69bb..5efb8af121 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -380,6 +380,16 @@ struct include_data {
 	struct bitmap *seen;
 };
 
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit)
+{
+	khiter_t hash_pos = kh_get_oid_map(bitmap_git->bitmaps,
+					   commit->object.oid);
+	if (hash_pos >= kh_end(bitmap_git->bitmaps))
+		return NULL;
+	return lookup_stored_bitmap(kh_value(bitmap_git->bitmaps, hash_pos));
+}
+
 static inline int bitmap_position_extended(struct bitmap_index *bitmap_git,
 					   const struct object_id *oid)
 {
@@ -465,10 +475,10 @@ static void show_commit(struct commit *commit, void *data)
 
 static int add_to_include_set(struct bitmap_index *bitmap_git,
 			      struct include_data *data,
-			      const struct object_id *oid,
+			      struct commit *commit,
 			      int bitmap_pos)
 {
-	khiter_t hash_pos;
+	struct ewah_bitmap *partial;
 
 	if (data->seen && bitmap_get(data->seen, bitmap_pos))
 		return 0;
@@ -476,10 +486,9 @@ static int add_to_include_set(struct bitmap_index *bitmap_git,
 	if (bitmap_get(data->base, bitmap_pos))
 		return 0;
 
-	hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
-	if (hash_pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
-		bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
+	partial = bitmap_for_commit(bitmap_git, commit);
+	if (partial) {
+		bitmap_or_ewah(data->base, partial);
 		return 0;
 	}
 
@@ -498,8 +507,7 @@ static int should_include(struct commit *commit, void *_data)
 						  (struct object *)commit,
 						  NULL);
 
-	if (!add_to_include_set(data->bitmap_git, data, &commit->object.oid,
-				bitmap_pos)) {
+	if (!add_to_include_set(data->bitmap_git, data, commit, bitmap_pos)) {
 		struct commit_list *parent = commit->parents;
 
 		while (parent) {
@@ -1282,10 +1290,10 @@ void test_bitmap_walk(struct rev_info *revs)
 {
 	struct object *root;
 	struct bitmap *result = NULL;
-	khiter_t pos;
 	size_t result_popcnt;
 	struct bitmap_test_data tdata;
 	struct bitmap_index *bitmap_git;
+	struct ewah_bitmap *bm;
 
 	if (!(bitmap_git = prepare_bitmap_git(revs->repo)))
 		die("failed to load bitmap indexes");
@@ -1297,12 +1305,9 @@ void test_bitmap_walk(struct rev_info *revs)
 		bitmap_git->version, bitmap_git->entry_count);
 
 	root = revs->pending.objects[0].item;
-	pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
-
-	if (pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-		struct ewah_bitmap *bm = lookup_stored_bitmap(st);
+	bm = bitmap_for_commit(bitmap_git, (struct commit *)root);
 
+	if (bm) {
 		fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
 			oid_to_hex(&root->oid), (int)bm->bit_size, ewah_checksum(bm));
 
diff --git a/pack-bitmap.h b/pack-bitmap.h
index afa4115136..25dfcf5615 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -78,6 +78,8 @@ uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
 int rebuild_bitmap(const uint32_t *reposition,
 		   struct ewah_bitmap *source,
 		   struct bitmap *dest);
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()'
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (19 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
                     ` (3 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

'find_objects()' currently needs to interact with the bitmaps khash
pretty closely. To make 'find_objects()' read a little more
straightforwardly, move some of the khash-level details into a new
function that describes what it does: 'add_commit_to_bitmap()'.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 5efb8af121..d88745fb02 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -521,6 +521,23 @@ static int should_include(struct commit *commit, void *_data)
 	return 1;
 }
 
+static int add_commit_to_bitmap(struct bitmap_index *bitmap_git,
+				struct bitmap **base,
+				struct commit *commit)
+{
+	struct ewah_bitmap *or_with = bitmap_for_commit(bitmap_git, commit);
+
+	if (!or_with)
+		return 0;
+
+	if (*base == NULL)
+		*base = ewah_to_bitmap(or_with);
+	else
+		bitmap_or_ewah(*base, or_with);
+
+	return 1;
+}
+
 static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 				   struct rev_info *revs,
 				   struct object_list *roots,
@@ -544,21 +561,10 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 		struct object *object = roots->item;
 		roots = roots->next;
 
-		if (object->type == OBJ_COMMIT) {
-			khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
-
-			if (pos < kh_end(bitmap_git->bitmaps)) {
-				struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-				struct ewah_bitmap *or_with = lookup_stored_bitmap(st);
-
-				if (base == NULL)
-					base = ewah_to_bitmap(or_with);
-				else
-					bitmap_or_ewah(base, or_with);
-
-				object->flags |= SEEN;
-				continue;
-			}
+		if (object->type == OBJ_COMMIT &&
+		    add_commit_to_bitmap(bitmap_git, &base, (struct commit *)object)) {
+			object->flags |= SEEN;
+			continue;
 		}
 
 		object_list_insert(object, &not_mapped);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 22/24] pack-bitmap-write: use existing bitmaps
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (20 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
                     ` (2 subsequent siblings)
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

When constructing new bitmaps, we perform a commit and tree walk in
fill_bitmap_commit() and fill_bitmap_tree(). This walk would benefit
from using existing bitmaps when available. We must track the existing
bitmaps and translate them into the new object order, but this is
generally faster than parsing trees.

In fill_bitmap_commit(), we must reorder things somewhat. The priority
queue walks commits from newest-to-oldest, which means we correctly stop
walking when reaching a commit with a bitmap. However, if we walk trees
interleaved with the commits, then we might be parsing trees that are
actually part of a re-used bitmap. To avoid over-walking trees, add them
to a LIFO queue and walk them after exploring commits completely.

On git.git, this reduces a second immediate bitmap computation from 2.0s
to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
network, we go from 227s to 198s.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 333058854d..76c8236f94 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -340,20 +340,37 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 
 static void fill_bitmap_commit(struct bb_commit *ent,
 			       struct commit *commit,
-			       struct prio_queue *queue)
+			       struct prio_queue *queue,
+			       struct prio_queue *tree_queue,
+			       struct bitmap_index *old_bitmap,
+			       const uint32_t *mapping)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
 	prio_queue_put(queue, commit);
 
 	while (queue->nr) {
 		struct commit_list *p;
 		struct commit *c = prio_queue_get(queue);
 
+		if (old_bitmap && mapping) {
+			struct ewah_bitmap *old = bitmap_for_commit(old_bitmap, c);
+			/*
+			 * If this commit has an old bitmap, then translate that
+			 * bitmap and add its bits to this one. No need to walk
+			 * parents or the tree for this commit.
+			 */
+			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
+				continue;
+		}
+
+		/*
+		 * Mark ourselves and queue our tree. The commit
+		 * walk ensures we cover all parents.
+		 */
 		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
-		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+		prio_queue_put(tree_queue, get_commit_tree(c));
 
 		for (p = c->parents; p; p = p->next) {
 			int pos = find_object_pos(&p->item->object.oid);
@@ -363,6 +380,9 @@ static void fill_bitmap_commit(struct bb_commit *ent,
 			}
 		}
 	}
+
+	while (tree_queue->nr)
+		fill_bitmap_tree(ent->bitmap, prio_queue_get(tree_queue));
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -386,6 +406,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	size_t i;
 	int nr_stored = 0; /* for progress */
 	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
+	struct prio_queue tree_queue = { NULL };
+	struct bitmap_index *old_bitmap;
+	uint32_t *mapping;
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -395,6 +418,12 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
 			    the_repository);
 
+	old_bitmap = prepare_bitmap_git(to_pack->repo);
+	if (old_bitmap)
+		mapping = create_bitmap_mapping(old_bitmap, to_pack);
+	else
+		mapping = NULL;
+
 	bitmap_builder_init(&bb, &writer);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
@@ -402,7 +431,8 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit, &queue);
+		fill_bitmap_commit(ent, commit, &queue, &tree_queue,
+				   old_bitmap, mapping);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -428,7 +458,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		ent->bitmap = NULL;
 	}
 	clear_prio_queue(&queue);
+	clear_prio_queue(&tree_queue);
 	bitmap_builder_clear(&bb);
+	free(mapping);
 
 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
 			    the_repository);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 23/24] pack-bitmap-write: relax unique rewalk condition
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (21 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08  0:05   ` [PATCH v3 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
  2020-12-08 20:56   ` [PATCH v3 00/24] pack-bitmap: bitmap generation improvements Junio C Hamano
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The previous commits improved the bitmap computation process for very
long, linear histories with many refs by removing quadratic growth in
how many objects were walked. The strategy of computing "intermediate
commits" using bitmasks for which refs can reach those commits
partitioned the poset of reachable objects so each part could be walked
exactly once. This was effective for linear histories.

However, there was a (significant) drawback: wide histories with many
refs had an explosion of memory costs to compute the commit bitmasks
during the exploration that discovers these intermediate commits. Since
these wide histories rarely revisit the same objects, walking objects
multiple times was not expensive for them before. But now, the commit
walk *before computing bitmaps* is incredibly expensive.

In an effort to discover a happy medium, this change reduces the walk
for intermediate commits to only the first-parent history. This focuses
the walk on how the histories converge, which still has significant
reduction in repeat object walks. It is still possible to create
quadratic behavior in this version, but it is probably less likely in
realistic data shapes.

Here is some data taken on a fresh clone of the kernel:

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
    original |  64.044 |   83.241 |   2.088 |    2.194 |
  last patch |  45.049 |   37.624 |   2.267 |    2.334 |
  this patch |  88.478 |   53.218 |   2.157 |    2.224 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 14 +++++---------
 t/t5310-pack-bitmaps.sh | 27 ++++++++++++++-------------
 2 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 76c8236f94..d2af4a974f 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -199,7 +199,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i, num_maximal;
+	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -207,6 +207,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	reset_revision_walk();
 	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
 	revs.topo_order = 1;
+	revs.first_parent_only = 1;
 
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
@@ -221,13 +222,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		add_pending_object(&revs, &c->object, "");
 	}
-	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
-		struct commit_list *p;
+		struct commit_list *p = commit->parents;
 		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
@@ -235,16 +235,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 		c_ent = bb_data_at(&bb->data, commit);
 
 		if (c_ent->maximal) {
-			if (!c_ent->selected) {
-				bitmap_set(c_ent->commit_mask, num_maximal);
-				num_maximal++;
-			}
-
+			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
-		for (p = commit->parents; p; p = p->next) {
+		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 4c928221be..332af446a8 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -43,23 +43,24 @@ has_any () {
 #                                   \|
 #                                    * (base)
 #
+# We only push bits down the first-parent history, which
+# makes some of these commits unimportant!
+#
 # The important part for the maximal commit algorithm is how
 # the bitmasks are extended. Assuming starting bit positions
-# for master (bit 0) and other (bit 1), and some flexibility
-# in the order that merge bases are visited, the bitmasks at
-# the end should be:
+# for master (bit 0) and other (bit 1), the bitmasks at the
+# end should be:
 #
 #      master: 1       (maximal, selected)
 #       other: 01      (maximal, selected)
-# octo-master: 1
-#  octo-other: 01
-# merge-right: 111     (maximal)
-#        (l1): 111
-#        (r1): 111
-#  merge-left: 1101    (maximal)
-#        (l2): 11111   (maximal)
-#        (r2): 111101  (maximal)
-#      (base): 1111111 (maximal)
+#      (base): 11 (maximal)
+#
+# This complicated history was important for a previous
+# version of the walk that guarantees never walking a
+# commit multiple times. That goal might be important
+# again, so preserve this complicated case. For now, this
+# test will guarantee that the bitmaps are computed
+# correctly, even with the repeat calculations.
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 10 &&
@@ -113,7 +114,7 @@ test_expect_success 'full repack creates bitmaps' '
 	ls .git/objects/pack/ | grep bitmap >output &&
 	test_line_count = 1 output &&
 	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
-	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"107\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v3 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (22 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
@ 2020-12-08  0:05   ` Taylor Blau
  2020-12-08 20:56   ` [PATCH v3 00/24] pack-bitmap: bitmap generation improvements Junio C Hamano
  24 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08  0:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

If the old bitmap file contains a bitmap for a given commit, then that
commit does not need help from intermediate commits in its history to
compute its final bitmap. Eject that commit from the walk and insert it
into a separate list of reusable commits that are eventually stored in
the list of commits for computing bitmaps.

This helps the repeat bitmap computation task, even if the selected
commits shift drastically. This helps when a previously-bitmapped commit
exists in the first-parent history of a newly-selected commit. Since we
stop the walk at these commits and we use a first-parent walk, it is
harder to walk "around" these bitmapped commits. It's not impossible,
but we can greatly reduce the computation time for many selected
commits.

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
  last patch |  88.478 |   53.218 |   2.157 |    2.224 |
  this patch |  86.681 |   16.164 |   2.157 |    2.222 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 40 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index d2af4a974f..cc5ead9990 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -195,10 +195,13 @@ struct bitmap_builder {
 };
 
 static void bitmap_builder_init(struct bitmap_builder *bb,
-				struct bitmap_writer *writer)
+				struct bitmap_writer *writer,
+				struct bitmap_index *old_bitmap)
 {
 	struct rev_info revs;
 	struct commit *commit;
+	struct commit_list *reusable = NULL;
+	struct commit_list *r;
 	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
@@ -234,6 +237,31 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		c_ent = bb_data_at(&bb->data, commit);
 
+		/*
+		 * If there is no commit_mask, there is no reason to iterate
+		 * over this commit; it is not selected (if it were, it would
+		 * not have a blank commit mask) and all its children have
+		 * existing bitmaps (see the comment starting with "This commit
+		 * has an existing bitmap" below), so it does not contribute
+		 * anything to the final bitmap file or its descendants.
+		 */
+		if (!c_ent->commit_mask)
+			continue;
+
+		if (old_bitmap && bitmap_for_commit(old_bitmap, commit)) {
+			/*
+			 * This commit has an existing bitmap, so we can
+			 * get its bits immediately without an object
+			 * walk. That is, it is reusable as-is and there is no
+			 * need to continue walking beyond it.
+			 *
+			 * Mark it as such and add it to bb->commits separately
+			 * to avoid allocating a position in the commit mask.
+			 */
+			commit_list_insert(commit, &reusable);
+			goto next;
+		}
+
 		if (c_ent->maximal) {
 			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
@@ -278,14 +306,22 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 			}
 		}
 
+next:
 		bitmap_free(c_ent->commit_mask);
 		c_ent->commit_mask = NULL;
 	}
 
+	for (r = reusable; r; r = r->next) {
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = r->item;
+	}
+
 	trace2_data_intmax("pack-bitmap-write", the_repository,
 			   "num_selected_commits", writer->selected_nr);
 	trace2_data_intmax("pack-bitmap-write", the_repository,
 			   "num_maximal_commits", num_maximal);
+
+	free_commit_list(reusable);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
@@ -420,7 +456,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	else
 		mapping = NULL;
 
-	bitmap_builder_init(&bb, &writer);
+	bitmap_builder_init(&bb, &writer, old_bitmap);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
-- 
2.29.2.533.g07db1f5344

^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH v3 00/24] pack-bitmap: bitmap generation improvements
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
                     ` (23 preceding siblings ...)
  2020-12-08  0:05   ` [PATCH v3 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
@ 2020-12-08 20:56   ` Junio C Hamano
  2020-12-08 21:03     ` Taylor Blau
  24 siblings, 1 reply; 174+ messages in thread
From: Junio C Hamano @ 2020-12-08 20:56 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, peff, jonathantanmy, dstolee

Taylor Blau <me@ttaylorr.com> writes:

> Here's an updated v3 of mine, Stolee, and Peff's series to improve the
> CPU performance of generating reachability bitmaps.

Has the "avoid having to assume the default branch name is 'master',
by naming the initial branch we create our history to use in testing
'second'" fix-up by Dscho, which has been queued in 'seen' on top of
the previous round of this topic, incorporated to this round?  

I think [4/24] and [15/24] can be adjusted by adding this piece from
Dscho to the set-up procedure and ...

@@ -64,6 +64,7 @@ has_any () {
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 10 &&
+	git branch -M second &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
 
... fixing the remainder of the test script by adjusting for the
fallout from the 'master' that is now called 'second'.

Thanks.


^ permalink raw reply	[flat|nested] 174+ messages in thread

* Re: [PATCH v3 00/24] pack-bitmap: bitmap generation improvements
  2020-12-08 20:56   ` [PATCH v3 00/24] pack-bitmap: bitmap generation improvements Junio C Hamano
@ 2020-12-08 21:03     ` Taylor Blau
  2020-12-08 22:03       ` Junio C Hamano
  0 siblings, 1 reply; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 21:03 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Taylor Blau, git, peff, jonathantanmy, dstolee

On Tue, Dec 08, 2020 at 12:56:05PM -0800, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
>
> > Here's an updated v3 of mine, Stolee, and Peff's series to improve the
> > CPU performance of generating reachability bitmaps.
>
> Has the "avoid having to assume the default branch name is 'master',
> by naming the initial branch we create our history to use in testing
> 'second'" fix-up by Dscho, which has been queued in 'seen' on top of
> the previous round of this topic, incorporated to this round?

Unfortunately, no. I wrote you an email a little earlier today, but it's
possible that our emails may have crossed (vger seems to be rather slow
today...).

> I think [4/24] and [15/24] can be adjusted by adding this piece from
> Dscho to the set-up procedure and ...
>
> @@ -64,6 +64,7 @@ has_any () {
>
>  test_expect_success 'setup repo with moderate-sized history' '
>  	test_commit_bulk --id=file 10 &&
> +	git branch -M second &&
>  	git checkout -b other HEAD~5 &&
>  	test_commit_bulk --id=side 10 &&
>
> ... fixing the remainder of the test script by adjusting for the
> fallout from the 'master' that is now called 'second'.

That seems reasonable. Another approach would be to leave these patches
untouched and apply Dscho's fixup on the end, but I'm not sure which
you'd prefer.

If the latter, then I think you have everything you need. If the former,
would you like a re-submission of this series? Either is fine with me.

> Thanks.

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v4 00/24] pack-bitmap: bitmap generation improvements
  2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
                   ` (24 preceding siblings ...)
  2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
@ 2020-12-08 22:03 ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
                     ` (23 more replies)
  25 siblings, 24 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

Here's v4 as requested from [1, 2], but I think we can safely call this
"the improved v3", since all we're doing here is removing new instances
of "master" as part of the ongoing default branch transition.

A range-diff that shows this is included below, but it counts v4's 4/24
as a complete replacement of v3's, so careful readers are encouraged to
look at the inter-diff there.

I also snuck a typo-fix into 23/24 (which has existed unnoticed since
v1) changing "rewalk" to "revwalk".

Sorry for all of the shuffling around; hopefully this one does the
trick.

[1]: https://lore.kernel.org/git/xmqqmtyo6mqi.fsf@gitster.c.googlers.com/
[2]: https://lore.kernel.org/git/pull.809.git.1607260623935.gitgitgadget@gmail.com/

Derrick Stolee (9):
  pack-bitmap-write: fill bitmap with commit history
  bitmap: implement bitmap_is_subset()
  commit: implement commit_list_contains()
  t5310: add branch-based checks
  pack-bitmap-write: rename children to reverse_edges
  pack-bitmap-write: build fewer intermediate bitmaps
  pack-bitmap-write: use existing bitmaps
  pack-bitmap-write: relax unique revwalk condition
  pack-bitmap-write: better reuse bitmaps

Jeff King (11):
  pack-bitmap: fix header size check
  pack-bitmap: bounds-check size of cache extension
  t5310: drop size of truncated ewah bitmap
  rev-list: die when --test-bitmap detects a mismatch
  ewah: factor out bitmap growth
  ewah: make bitmap growth less aggressive
  ewah: implement bitmap_or()
  ewah: add bitmap_dup() function
  pack-bitmap-write: reimplement bitmap writing
  pack-bitmap-write: pass ownership of intermediate bitmaps
  pack-bitmap-write: ignore BITMAP_FLAG_REUSE

Taylor Blau (4):
  ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
  pack-bitmap.c: check reads more aggressively when loading
  pack-bitmap: factor out 'bitmap_for_commit()'
  pack-bitmap: factor out 'add_commit_to_bitmap()'

 builtin/pack-objects.c  |   1 -
 commit.c                |  11 +
 commit.h                |   2 +
 ewah/bitmap.c           |  54 ++++-
 ewah/ewah_bitmap.c      |  15 +-
 ewah/ewok.h             |   3 +-
 pack-bitmap-write.c     | 474 ++++++++++++++++++++++++++--------------
 pack-bitmap.c           | 139 ++++++------
 pack-bitmap.h           |   8 +-
 t/t5310-pack-bitmaps.sh | 177 +++++++++++----
 10 files changed, 583 insertions(+), 301 deletions(-)

Range-diff against v3:
 1:  0b25ba4ca7 =  1:  e72f85f82f ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
 2:  b455b248e4 =  2:  b24395e4b0 pack-bitmap: fix header size check
 3:  7322427444 =  3:  97533dba27 pack-bitmap: bounds-check size of cache extension
 4:  055bc1fe66 <  -:  ---------- t5310: drop size of truncated ewah bitmap
 -:  ---------- >  4:  2e7454d7b9 t5310: drop size of truncated ewah bitmap
 5:  c99cacea67 =  5:  3cb4156372 rev-list: die when --test-bitmap detects a mismatch
 6:  b79360383e =  6:  570bf22425 ewah: factor out bitmap growth
 7:  4b56f12932 =  7:  48a1949ee6 ewah: make bitmap growth less aggressive
 8:  34137a7f35 =  8:  04bf0de474 ewah: implement bitmap_or()
 9:  fe89f87716 =  9:  c8bd4ed5fa ewah: add bitmap_dup() function
10:  91cd8b1a49 = 10:  bbeb87a95d pack-bitmap-write: reimplement bitmap writing
11:  64598024ec = 11:  f87c11700b pack-bitmap-write: pass ownership of intermediate bitmaps
12:  93fc437a3c = 12:  c466dda576 pack-bitmap-write: fill bitmap with commit history
13:  0d5213ba44 = 13:  0cfa932b71 bitmap: implement bitmap_is_subset()
14:  72e745fed8 = 14:  033fb2ed55 commit: implement commit_list_contains()
15:  c2cae4a8d0 ! 15:  76071f9f4e t5310: add branch-based checks
    @@ Commit message
         'master' and 'other' branches.

         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
    +    Helped-by: Junio C Hamano <gitster@pobox.com>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>

      ## t/t5310-pack-bitmaps.sh ##
    @@ t/t5310-pack-bitmaps.sh: test_expect_success 'rev-list --test-bitmap verifies bi

     -	test_expect_success "counting non-linear history ($state)" '
     +	test_expect_success "counting non-linear history ($state, $branch)" '
    - 		git rev-list --count other...master >expect &&
    - 		git rev-list --use-bitmap-index --count other...master >actual &&
    + 		git rev-list --count other...second >expect &&
    + 		git rev-list --use-bitmap-index --count other...second >actual &&
      		test_cmp expect actual
      	'

    @@ t/t5310-pack-bitmaps.sh: test_expect_success 'rev-list --test-bitmap verifies bi
     +rev_list_tests () {
     +	state=$1
     +
    -+	for branch in "master" "other"
    ++	for branch in "second" "other"
     +	do
     +		rev_list_tests_head
     +	done
16:  c0e2b6f5d9 = 16:  d8c6f0f0bc pack-bitmap-write: rename children to reverse_edges
17:  37f9636098 = 17:  2e08243706 pack-bitmap.c: check reads more aggressively when loading
18:  e520c8fdc4 ! 18:  b4c5d2c3df pack-bitmap-write: build fewer intermediate bitmaps
    @@ Commit message

         Helped-by: Jeff King <peff@peff.net>
         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
    +    Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>

      ## pack-bitmap-write.c ##
    @@ t/t5310-pack-bitmaps.sh: has_any () {
      test_expect_success 'setup repo with moderate-sized history' '
     -	test_commit_bulk --id=file 100 &&
     +	test_commit_bulk --id=file 10 &&
    + 	git branch -M second &&
      	git checkout -b other HEAD~5 &&
      	test_commit_bulk --id=side 10 &&
     +
    @@ t/t5310-pack-bitmaps.sh: has_any () {
     +	# ambiguous merge-bases
     +
     +	git checkout -b merge-left other~2 &&
    -+	git merge master~2 -m "merge-left" &&
    ++	git merge second~2 -m "merge-left" &&
     +
    -+	git checkout -b merge-right master~1 &&
    ++	git checkout -b merge-right second~1 &&
     +	git merge other~1 -m "merge-right" &&
     +
    -+	git checkout -b octo-master master &&
    -+	git merge merge-left merge-right -m "octopus-master" &&
    ++	git checkout -b octo-second second &&
    ++	git merge merge-left merge-right -m "octopus-second" &&
     +
     +	git checkout -b octo-other other &&
     +	git merge merge-left merge-right -m "octopus-other" &&
    @@ t/t5310-pack-bitmaps.sh: has_any () {
     +	git checkout other &&
     +	git merge octo-other -m "pull octopus" &&
     +
    - 	git checkout master &&
    -+	git merge octo-master -m "pull octopus" &&
    + 	git checkout second &&
    ++	git merge octo-second -m "pull octopus" &&
     +
     +	# Remove these branches so they are not selected
     +	# as bitmap tips
     +	git branch -D merge-left &&
     +	git branch -D merge-right &&
     +	git branch -D octo-other &&
    -+	git branch -D octo-master &&
    ++	git branch -D octo-second &&
     +
     +	# add padding to make these merges less interesting
     +	# and avoid having them selected for bitmaps
     +	test_commit_bulk --id=file 100 &&
     +	git checkout other &&
     +	test_commit_bulk --id=side 100 &&
    -+	git checkout master &&
    ++	git checkout second &&
     +
    - 	bitmaptip=$(git rev-parse master) &&
    + 	bitmaptip=$(git rev-parse second) &&
      	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
      	git tag tagged-blob $blob &&
     @@ t/t5310-pack-bitmaps.sh: test_expect_success 'setup repo with moderate-sized history' '
19:  c3975fcf78 = 19:  d973cf240d pack-bitmap-write: ignore BITMAP_FLAG_REUSE
20:  d5ef2c7f81 = 20:  4d7a4184ac pack-bitmap: factor out 'bitmap_for_commit()'
21:  f0500190f0 = 21:  bd3a16088b pack-bitmap: factor out 'add_commit_to_bitmap()'
22:  c6fde2b0c4 = 22:  e0d989b98f pack-bitmap-write: use existing bitmaps
23:  50d2031deb ! 23:  8f9fdb0f43 pack-bitmap-write: relax unique rewalk condition
    @@ Metadata
     Author: Derrick Stolee <dstolee@microsoft.com>

      ## Commit message ##
    -    pack-bitmap-write: relax unique rewalk condition
    +    pack-bitmap-write: relax unique revwalk condition

         The previous commits improved the bitmap computation process for very
         long, linear histories with many refs by removing quadratic growth in
    @@ Commit message
           this patch |  88.478 |   53.218 |   2.157 |    2.224 |

         Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
    +    Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
         Signed-off-by: Taylor Blau <me@ttaylorr.com>

      ## pack-bitmap-write.c ##
    @@ pack-bitmap-write.c: static void bitmap_builder_init(struct bitmap_builder *bb,


      ## t/t5310-pack-bitmaps.sh ##
    +@@ t/t5310-pack-bitmaps.sh: has_any () {
    + # To ensure the logic for "maximal commits" is exercised, make
    + # the repository a bit more complicated.
    + #
    +-#    other                         master
    ++#    other                         second
    + #      *                             *
    + # (99 commits)                  (99 commits)
    + #      *                             *
    + #      |\                           /|
    +-#      | * octo-other  octo-master * |
    ++#      | * octo-other  octo-second * |
    + #      |/|\_________  ____________/|\|
    + #      | \          \/  __________/  |
    + #      |  | ________/\ /             |
     @@ t/t5310-pack-bitmaps.sh: has_any () {
      #                                   \|
      #                                    * (base)
    @@ t/t5310-pack-bitmaps.sh: has_any () {
     -# for master (bit 0) and other (bit 1), and some flexibility
     -# in the order that merge bases are visited, the bitmasks at
     -# the end should be:
    -+# for master (bit 0) and other (bit 1), the bitmasks at the
    ++# for second (bit 0) and other (bit 1), the bitmasks at the
     +# end should be:
      #
    - #      master: 1       (maximal, selected)
    +-#      master: 1       (maximal, selected)
    ++#      second: 1       (maximal, selected)
      #       other: 01      (maximal, selected)
     -# octo-master: 1
     -#  octo-other: 01
24:  6b9950771e = 24:  720b6e0dc7 pack-bitmap-write: better reuse bitmaps
--
2.29.2.533.g07db1f5344

^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v4 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW()
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 02/24] pack-bitmap: fix header size check Taylor Blau
                     ` (22 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

'ewah/ewah_bitmap.c:buffer_grow()' is responsible for growing the buffer
used to store the bits of an EWAH bitmap. It is essentially doing the
same task as the 'ALLOC_GROW()' macro, so use that instead.

This simplifies the callers of 'buffer_grow()', who no longer have to
ask for a specific size, but rather specify how much of the buffer they
need. They also no longer need to guard 'buffer_grow()' behind an if
statement, since 'ALLOC_GROW()' (and, by extension, 'buffer_grow()') is
a noop if the buffer is already large enough.

But, the most significant change is that this fixes a bug when calling
buffer_grow() with both 'alloc_size' and 'new_size' set to 1. In this
case, truncating integer math will leave the new size set to 1, causing
the buffer to never grow.

Instead, let alloc_nr() handle this, which asks for '(new_size + 16) * 3
/ 2' instead of 'new_size * 3 / 2'.
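
For illustration, the arithmetic in miniature (a sketch, not part of the
patch): with the old sizing, growing a one-word buffer computed

	new_size * 3 / 2  =  1 * 3 / 2  =  1    (integer division truncates)

so the allocation never moved forward, while ALLOC_GROW() sizes through
alloc_nr(), giving

	alloc_nr(1)  =  (1 + 16) * 3 / 2  =  25

which always makes progress.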

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/ewah_bitmap.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/ewah/ewah_bitmap.c b/ewah/ewah_bitmap.c
index d59b1afe3d..2a8c7c5c33 100644
--- a/ewah/ewah_bitmap.c
+++ b/ewah/ewah_bitmap.c
@@ -19,6 +19,7 @@
 #include "git-compat-util.h"
 #include "ewok.h"
 #include "ewok_rlw.h"
+#include "cache.h"
 
 static inline size_t min_size(size_t a, size_t b)
 {
@@ -33,20 +34,13 @@ static inline size_t max_size(size_t a, size_t b)
 static inline void buffer_grow(struct ewah_bitmap *self, size_t new_size)
 {
 	size_t rlw_offset = (uint8_t *)self->rlw - (uint8_t *)self->buffer;
-
-	if (self->alloc_size >= new_size)
-		return;
-
-	self->alloc_size = new_size;
-	REALLOC_ARRAY(self->buffer, self->alloc_size);
+	ALLOC_GROW(self->buffer, new_size, self->alloc_size);
 	self->rlw = self->buffer + (rlw_offset / sizeof(eword_t));
 }
 
 static inline void buffer_push(struct ewah_bitmap *self, eword_t value)
 {
-	if (self->buffer_size + 1 >= self->alloc_size)
-		buffer_grow(self, self->buffer_size * 3 / 2);
-
+	buffer_grow(self, self->buffer_size + 1);
 	self->buffer[self->buffer_size++] = value;
 }
 
@@ -137,8 +131,7 @@ void ewah_add_dirty_words(
 
 		rlw_set_literal_words(self->rlw, literals + can_add);
 
-		if (self->buffer_size + can_add >= self->alloc_size)
-			buffer_grow(self, (self->buffer_size + can_add) * 3 / 2);
+		buffer_grow(self, self->buffer_size + can_add);
 
 		if (negate) {
 			size_t i;
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 02/24] pack-bitmap: fix header size check
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
                     ` (21 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

When we parse a .bitmap header, we first check that we have enough bytes
to make a valid header. We do that based on sizeof(struct
bitmap_disk_header). However, as of 0f4d6cada8 (pack-bitmap: make bitmap
header handling hash agnostic, 2019-02-19), that struct oversizes its
checksum member to GIT_MAX_RAWSZ. That means we need to adjust for the
difference between that constant and the size of the actual hash we're
using. That commit adjusted the code which moves our pointer forward,
but forgot to update the size check.

This meant we were overly strict about the header size (requiring room
for a 32-byte worst-case hash, when sha1 is only 20 bytes). But in
practice it didn't matter because bitmap files tend to have at least 12
bytes of actual data anyway, so it was unlikely for a valid file to be
caught by this.

Let's fix it by pulling the header size into a separate variable and
using it in both spots. That fixes the bug and simplifies the code to make
it harder to have a mismatch like this in the future. It will also come
in handy in the next patch for more bounds checking.
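
As a concrete sketch for a SHA-1 repository (using the 32-byte worst-case
and 20-byte SHA-1 sizes mentioned above; illustrative, mirroring the hunk
below rather than adding to it):

	size_t header_size = sizeof(struct bitmap_disk_header)
			     - GIT_MAX_RAWSZ          /* 32-byte worst case */
			     + the_hash_algo->rawsz;  /* 20 bytes for SHA-1 */
	/* i.e. 12 bytes smaller than the old sizeof(*header)-based check */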

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4077e731e8..fe5647e72e 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -138,9 +138,10 @@ static struct ewah_bitmap *read_bitmap_1(struct bitmap_index *index)
 static int load_bitmap_header(struct bitmap_index *index)
 {
 	struct bitmap_disk_header *header = (void *)index->map;
+	size_t header_size = sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
 
-	if (index->map_size < sizeof(*header) + the_hash_algo->rawsz)
-		return error("Corrupted bitmap index (missing header data)");
+	if (index->map_size < header_size + the_hash_algo->rawsz)
+		return error("Corrupted bitmap index (too small)");
 
 	if (memcmp(header->magic, BITMAP_IDX_SIGNATURE, sizeof(BITMAP_IDX_SIGNATURE)) != 0)
 		return error("Corrupted bitmap index file (wrong header)");
@@ -164,7 +165,7 @@ static int load_bitmap_header(struct bitmap_index *index)
 	}
 
 	index->entry_count = ntohl(header->entry_count);
-	index->map_pos += sizeof(*header) - GIT_MAX_RAWSZ + the_hash_algo->rawsz;
+	index->map_pos += header_size;
 	return 0;
 }
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 03/24] pack-bitmap: bounds-check size of cache extension
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 02/24] pack-bitmap: fix header size check Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
                     ` (20 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

A .bitmap file may have a "name hash cache" extension, which puts a
sequence of uint32_t values (one per object) at the end of the file.
When we see a flag indicating this extension, we blindly subtract the
appropriate number of bytes from our available length. However, if the
.bitmap file is too short, we'll underflow our length variable and wrap
around, thinking we have a very large length. This can lead to reading
out-of-bounds bytes while loading individual ewah bitmaps.

We can fix this by checking the number of available bytes when we parse
the header. The existing "truncated bitmap" test is now split into two
tests: one where we don't have this extension at all (and hence actually
do try to read a truncated ewah bitmap) and one where we realize
up-front that we can't even fit in the cache structure. We'll check
stderr in each case to make sure we hit the error we're expecting.
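
The failure mode is ordinary unsigned wrap-around; a minimal sketch with
made-up numbers (not the old code itself):

	size_t avail = 100;                      /* bytes left in a short map */
	size_t cache = 1000 * sizeof(uint32_t);  /* name-hash cache size */
	avail -= cache;  /* wraps to a huge value; later reads run off the map */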

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c           |  8 ++++++--
 t/t5310-pack-bitmaps.sh | 17 +++++++++++++++--
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index fe5647e72e..074d9ac8f2 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -153,14 +153,18 @@ static int load_bitmap_header(struct bitmap_index *index)
 	/* Parse known bitmap format options */
 	{
 		uint32_t flags = ntohs(header->options);
+		size_t cache_size = st_mult(index->pack->num_objects, sizeof(uint32_t));
+		unsigned char *index_end = index->map + index->map_size - the_hash_algo->rawsz;
 
 		if ((flags & BITMAP_OPT_FULL_DAG) == 0)
 			return error("Unsupported options for bitmap index file "
 				"(Git requires BITMAP_OPT_FULL_DAG)");
 
 		if (flags & BITMAP_OPT_HASH_CACHE) {
-			unsigned char *end = index->map + index->map_size - the_hash_algo->rawsz;
-			index->hashes = ((uint32_t *)end) - index->pack->num_objects;
+			if (cache_size > index_end - index->map - header_size)
+				return error("corrupted bitmap index file (too short to fit hash cache)");
+			index->hashes = (void *)(index_end - cache_size);
+			index_end -= cache_size;
 		}
 	}
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 1d40fcad39..dbe1ffc88a 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -343,7 +343,8 @@ test_expect_success 'pack reuse respects --incremental' '
 	test_must_be_empty actual
 '
 
-test_expect_success 'truncated bitmap fails gracefully' '
+test_expect_success 'truncated bitmap fails gracefully (ewah)' '
+	test_config pack.writebitmaphashcache false &&
 	git repack -ad &&
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
@@ -352,7 +353,19 @@ test_expect_success 'truncated bitmap fails gracefully' '
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-	test_i18ngrep corrupt stderr
+	test_i18ngrep corrupt.ewah.bitmap stderr
+'
+
+test_expect_success 'truncated bitmap fails gracefully (cache)' '
+	git repack -ad &&
+	git rev-list --use-bitmap-index --count --all >expect &&
+	bitmap=$(ls .git/objects/pack/*.bitmap) &&
+	test_when_finished "rm -f $bitmap" &&
+	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	mv -f $bitmap.tmp $bitmap &&
+	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
+	test_cmp expect actual &&
+	test_i18ngrep corrupted.bitmap.index stderr
 '
 
 # have_delta <obj> <expected_base>
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 04/24] t5310: drop size of truncated ewah bitmap
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (2 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
                     ` (19 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We truncate the .bitmap file to 512 bytes and expect to run into
problems reading an individual ewah file. But this length is somewhat
arbitrary, and just happened to work when the test was added in
9d2e330b17 (ewah_read_mmap: bounds-check mmap reads, 2018-06-14).

An upcoming commit will change the size of the history we create in the
test repo, which will cause this test to fail. We can future-proof it a
bit more by reducing the size of the truncated bitmap file.

Signed-off-by: Jeff King <peff@peff.net>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index dbe1ffc88a..bf094cfe42 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -22,10 +22,11 @@ has_any () {
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 100 &&
+	git branch -M second &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
-	git checkout master &&
-	bitmaptip=$(git rev-parse master) &&
+	git checkout second &&
+	bitmaptip=$(git rev-parse second) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
 	git config repack.writebitmaps true
@@ -63,8 +64,8 @@ rev_list_tests() {
 	'
 
 	test_expect_success "counting non-linear history ($state)" '
-		git rev-list --count other...master >expect &&
-		git rev-list --use-bitmap-index --count other...master >actual &&
+		git rev-list --count other...second >expect &&
+		git rev-list --use-bitmap-index --count other...second >actual &&
 		test_cmp expect actual
 	'
 
@@ -128,7 +129,7 @@ test_expect_success 'setup further non-bitmapped commits' '
 rev_list_tests 'partial bitmap'
 
 test_expect_success 'fetch (partial bitmap)' '
-	git --git-dir=clone.git fetch origin master:master &&
+	git --git-dir=clone.git fetch origin second:second &&
 	git rev-parse HEAD >expect &&
 	git --git-dir=clone.git rev-parse HEAD >actual &&
 	test_cmp expect actual
@@ -230,7 +231,7 @@ test_expect_success 'full repack, reusing previous bitmaps' '
 '
 
 test_expect_success 'fetch (full bitmap)' '
-	git --git-dir=clone.git fetch origin master:master &&
+	git --git-dir=clone.git fetch origin second:second &&
 	git rev-parse HEAD >expect &&
 	git --git-dir=clone.git rev-parse HEAD >actual &&
 	test_cmp expect actual
@@ -349,7 +350,7 @@ test_expect_success 'truncated bitmap fails gracefully (ewah)' '
 	git rev-list --use-bitmap-index --count --all >expect &&
 	bitmap=$(ls .git/objects/pack/*.bitmap) &&
 	test_when_finished "rm -f $bitmap" &&
-	test_copy_bytes 512 <$bitmap >$bitmap.tmp &&
+	test_copy_bytes 256 <$bitmap >$bitmap.tmp &&
 	mv -f $bitmap.tmp $bitmap &&
 	git rev-list --use-bitmap-index --count --all >actual 2>stderr &&
 	test_cmp expect actual &&
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 05/24] rev-list: die when --test-bitmap detects a mismatch
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (3 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 06/24] ewah: factor out bitmap growth Taylor Blau
                     ` (18 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

You can use "git rev-list --test-bitmap HEAD" to check that bitmaps
produce the same answer we'd get from a regular traversal. But if we
detect an error, we only print "mismatch", and still exit with a
successful error code.

That makes the uses of --test-bitmap in the test suite (e.g., in t5310)
mostly pointless: even if we saw an error, the tests wouldn't notice.
Let's instead call die(), which will let these tests work as designed,
and alert us if the bitmaps are bogus.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 074d9ac8f2..4431f9f120 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1328,7 +1328,7 @@ void test_bitmap_walk(struct rev_info *revs)
 	if (bitmap_equals(result, tdata.base))
 		fprintf(stderr, "OK!\n");
 	else
-		fprintf(stderr, "Mismatch!\n");
+		die("mismatch in bitmap results");
 
 	free_bitmap_index(bitmap_git);
 }
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 06/24] ewah: factor out bitmap growth
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (4 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 07/24] ewah: make bitmap growth less aggressive Taylor Blau
                     ` (17 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We auto-grow bitmaps when somebody asks to set a bit whose position is
outside of our currently allocated range. Other operations besides
single bit-setting might need to do this, too, so let's pull it into its
own function.

Note that we change the semantics a little: you now ask for the number
of words you'd like to have, not the id of the block you'd like to write
to.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index d8cec585af..7c1ecfa6fd 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,18 +35,22 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
-void bitmap_set(struct bitmap *self, size_t pos)
+static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	size_t block = EWAH_BLOCK(pos);
-
-	if (block >= self->word_alloc) {
+	if (word_alloc > self->word_alloc) {
 		size_t old_size = self->word_alloc;
-		self->word_alloc = block ? block * 2 : 1;
+		self->word_alloc = word_alloc * 2;
 		REALLOC_ARRAY(self->words, self->word_alloc);
 		memset(self->words + old_size, 0x0,
 			(self->word_alloc - old_size) * sizeof(eword_t));
 	}
+}
 
+void bitmap_set(struct bitmap *self, size_t pos)
+{
+	size_t block = EWAH_BLOCK(pos);
+
+	bitmap_grow(self, block + 1);
 	self->words[block] |= EWAH_MASK(pos);
 }
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 07/24] ewah: make bitmap growth less aggressive
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (5 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 06/24] ewah: factor out bitmap growth Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 08/24] ewah: implement bitmap_or() Taylor Blau
                     ` (16 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

If you ask to set a bit in the Nth word and we haven't yet allocated
that many slots in our array, we'll increase the bitmap size to 2*N.
This means we might frequently end up with bitmaps that are twice the
necessary size (as soon as you ask for the biggest bit, we'll size up to
twice that).

But if we just allocate as many words as were asked for, we may not grow
fast enough. The worst case there is setting bit 0, then 1, etc. Each
time we grow we'd just extend by one more word, giving us linear
reallocations (and quadratic memory copies).

A middle ground is relying on alloc_nr(), which causes us to grow by a
factor of roughly 3/2 instead of 2. That's less aggressive than
doubling, and it may help avoid fragmenting memory. (If we start with N,
then grow twice, our total is N*(3/2)^2 = 9N/4. After growing twice,
that array of size 9N/4 can fit into the space vacated by the original
array and first growth, N+3N/2 = 10N/4 > 9N/4, leading to less
fragmentation in memory).
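
As a quick numeric check of that parenthetical (idealizing the factor as
exactly 3/2): starting from N = 16 words, two growths give 16 -> 24 -> 36,
and the space vacated by the first two arrays is 16 + 24 = 40 >= 36, so
the third allocation can fit in the hole they left behind.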

Our worst case is still 3/2N wasted bits (you set bit N-1, then setting
bit N causes us to grow by 3/2), but our average should be much better.

This isn't usually that big a deal, but it will matter as we shift the
reachability bitmap generation code to store more bitmaps in memory.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 7c1ecfa6fd..6f9e5c529b 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -37,13 +37,10 @@ struct bitmap *bitmap_new(void)
 
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
-	if (word_alloc > self->word_alloc) {
-		size_t old_size = self->word_alloc;
-		self->word_alloc = word_alloc * 2;
-		REALLOC_ARRAY(self->words, self->word_alloc);
-		memset(self->words + old_size, 0x0,
-			(self->word_alloc - old_size) * sizeof(eword_t));
-	}
+	size_t old_size = self->word_alloc;
+	ALLOC_GROW(self->words, word_alloc, self->word_alloc);
+	memset(self->words + old_size, 0x0,
+	       (self->word_alloc - old_size) * sizeof(eword_t));
 }
 
 void bitmap_set(struct bitmap *self, size_t pos)
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 08/24] ewah: implement bitmap_or()
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (6 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 07/24] ewah: make bitmap growth less aggressive Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 09/24] ewah: add bitmap_dup() function Taylor Blau
                     ` (15 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

We have a function to bitwise-OR an ewah into an uncompressed bitmap,
but not to OR two uncompressed bitmaps. Let's add it.

Interestingly, we have a public header declaration going back to
e1273106f6 (ewah: compressed bitmap implementation, 2013-11-14), but the
function was never implemented. That was all OK since there were no
users of 'bitmap_or()', but a first caller will be added in a couple of
patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 6f9e5c529b..0a3502603f 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -122,6 +122,15 @@ void bitmap_and_not(struct bitmap *self, struct bitmap *other)
 		self->words[i] &= ~other->words[i];
 }
 
+void bitmap_or(struct bitmap *self, const struct bitmap *other)
+{
+	size_t i;
+
+	bitmap_grow(self, other->word_alloc);
+	for (i = 0; i < other->word_alloc; i++)
+		self->words[i] |= other->words[i];
+}
+
 void bitmap_or_ewah(struct bitmap *self, struct ewah_bitmap *other)
 {
 	size_t original_size = self->word_alloc;
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* Re: [PATCH v3 00/24] pack-bitmap: bitmap generation improvements
  2020-12-08 21:03     ` Taylor Blau
@ 2020-12-08 22:03       ` Junio C Hamano
  0 siblings, 0 replies; 174+ messages in thread
From: Junio C Hamano @ 2020-12-08 22:03 UTC (permalink / raw)
  To: Taylor Blau; +Cc: git, peff, jonathantanmy, dstolee

Taylor Blau <me@ttaylorr.com> writes:

> That seems reasonable. Another approach would be to leave these patches
> untouched and apply Dscho's fixup on the end, but I'm not sure which
> you'd prefer.

I'd prefer not to see known breakages that are found while the
topic is not yet in 'next' left in the topic, and would rather fix
them at the source before the topic gets merged.


^ permalink raw reply	[flat|nested] 174+ messages in thread

* [PATCH v4 09/24] ewah: add bitmap_dup() function
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (7 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 08/24] ewah: implement bitmap_or() Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
                     ` (14 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

There's no easy way to make a copy of a bitmap. Obviously a caller can
iterate over the bits and set them one by one in a new bitmap, but we
can go much faster by copying whole words with memcpy().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 7 +++++++
 ewah/ewok.h   | 1 +
 2 files changed, 8 insertions(+)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index 0a3502603f..b5f6376282 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -35,6 +35,13 @@ struct bitmap *bitmap_new(void)
 	return bitmap_word_alloc(32);
 }
 
+struct bitmap *bitmap_dup(const struct bitmap *src)
+{
+	struct bitmap *dst = bitmap_word_alloc(src->word_alloc);
+	COPY_ARRAY(dst->words, src->words, src->word_alloc);
+	return dst;
+}
+
 static void bitmap_grow(struct bitmap *self, size_t word_alloc)
 {
 	size_t old_size = self->word_alloc;
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 011852bef1..1fc555e672 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -173,6 +173,7 @@ struct bitmap {
 
 struct bitmap *bitmap_new(void);
 struct bitmap *bitmap_word_alloc(size_t word_alloc);
+struct bitmap *bitmap_dup(const struct bitmap *src);
 void bitmap_set(struct bitmap *self, size_t pos);
 void bitmap_unset(struct bitmap *self, size_t pos);
 int bitmap_get(struct bitmap *self, size_t pos);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 10/24] pack-bitmap-write: reimplement bitmap writing
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (8 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 09/24] ewah: add bitmap_dup() function Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:03   ` [PATCH v4 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
                     ` (13 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

The bitmap generation code works by iterating over the set of commits
for which we plan to write bitmaps, and then for each one performing a
traditional traversal over the reachable commits and trees, filling in
the bitmap. Between two traversals, we can often reuse the previous
bitmap result as long as the first commit is an ancestor of the second.
However, our worst case is that we may end up doing "n" complete
traversals to the root in order to create "n" bitmaps.

In a real-world case (the shared-storage repo consisting of all GitHub
forks of chromium/chromium), we perform very poorly: generating bitmaps
takes ~3 hours, whereas we can walk the whole object graph in ~3
minutes.

This commit completely rewrites the algorithm, with the goal of
accessing each object only once. It works roughly like this:

  - generate a list of commits in topo-order using a single traversal

  - invert the edges of the graph (so have parents point at their
    children)

  - make one pass in reverse topo-order, generating a bitmap for each
    commit and passing the result along to child nodes

We generate correct results because each node we visit has already had
all of its ancestors added to the bitmap. And we make only two linear
passes over the commits.
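
As a self-contained toy of the propagation idea (not git code: it uses a
plain unsigned int per commit as a miniature bitmap and a hard-coded
four-commit graph, with commits stored children-before-parents):

	#include <stdio.h>
	#define N 4

	int main(void)
	{
		/* parent[i][j] >= 0 names a parent of commit i */
		int parent[N][2] = {
			{ 1, 2 },   /* 0 merges 1 and 2 */
			{ 3, -1 },  /* 1's parent is 3  */
			{ 3, -1 },  /* 2's parent is 3  */
			{ -1, -1 }, /* 3 is the root    */
		};
		unsigned bitmap[N] = { 0 };
		int i, j, k;

		/* reverse topo-order: roots first, so each commit has its
		 * parents' finished bitmaps ORed in before it is visited */
		for (i = N - 1; i >= 0; i--) {
			bitmap[i] |= 1u << i;    /* mark the commit itself */
			for (j = 0; j < i; j++)  /* push to its children   */
				for (k = 0; k < 2; k++)
					if (parent[j][k] == i)
						bitmap[j] |= bitmap[i];
		}
		for (i = 0; i < N; i++)
			printf("commit %d reaches %#x\n", i, bitmap[i]);
		return 0;
	}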

We also visit each tree usually only once. When filling in a bitmap, we
don't bother to recurse into trees whose bit is already set in the
bitmap (since we know we've already done so when setting their bit).
That means that if commit A references tree T, none of its descendants
will need to open T again. I say "usually", though, because it is
possible for a given tree to be mentioned in unrelated parts of history
(e.g., cherry-picking to a parallel branch).

So we've accomplished our goal, and the resulting algorithm is pretty
simple to understand. But there are some downsides, at least with this
initial implementation:

  - we no longer reuse the results of any on-disk bitmaps when
    generating. So we'd expect to sometimes be slower than the original
    when bitmaps already exist. However, this is something we'll be able
    to add back in later.

  - we use much more memory. Instead of keeping one bitmap in memory at
    a time, we're passing them up through the graph. So our memory use
    should scale with the graph width (times the size of a bitmap).

So how does it perform?

For a clone of linux.git, generating bitmaps from scratch with the old
algorithm took 63s. Using this algorithm it takes 205s. Which is much
worse, but _might_ be acceptable if it behaved linearly as the size
grew. It also increases peak heap usage by ~1G. That's not impossibly
large, but not encouraging.

On the complete fork-network of torvalds/linux, it increases the peak
RAM usage by 40GB. Yikes. (I forgot to record the time it took, but the
memory usage was too much to consider this reasonable anyway).

On the complete fork-network of chromium/chromium, I ran out of memory
before succeeding. Some back-of-the-envelope calculations indicate it
would need 80+GB to complete.

So at this stage, we've managed to make things much worse. But because
of the way this new algorithm is structured, there are a lot of
opportunities for optimization on top. We'll start implementing those in
the follow-on patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 306 +++++++++++++++++++++++++-------------------
 1 file changed, 172 insertions(+), 134 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 5e998bdaa7..bcd059ccd9 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -110,8 +110,6 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
 /**
  * Compute the actual bitmaps
  */
-static struct object **seen_objects;
-static unsigned int seen_objects_nr, seen_objects_alloc;
 
 static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
 {
@@ -127,21 +125,6 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	writer.selected_nr++;
 }
 
-static inline void mark_as_seen(struct object *object)
-{
-	ALLOC_GROW(seen_objects, seen_objects_nr + 1, seen_objects_alloc);
-	seen_objects[seen_objects_nr++] = object;
-}
-
-static inline void reset_all_seen(void)
-{
-	unsigned int i;
-	for (i = 0; i < seen_objects_nr; ++i) {
-		seen_objects[i]->flags &= ~(SEEN | ADDED | SHOWN);
-	}
-	seen_objects_nr = 0;
-}
-
 static uint32_t find_object_pos(const struct object_id *oid)
 {
 	struct object_entry *entry = packlist_find(writer.to_pack, oid);
@@ -154,60 +137,6 @@ static uint32_t find_object_pos(const struct object_id *oid)
 	return oe_in_pack_pos(writer.to_pack, entry);
 }
 
-static void show_object(struct object *object, const char *name, void *data)
-{
-	struct bitmap *base = data;
-	bitmap_set(base, find_object_pos(&object->oid));
-	mark_as_seen(object);
-}
-
-static void show_commit(struct commit *commit, void *data)
-{
-	mark_as_seen((struct object *)commit);
-}
-
-static int
-add_to_include_set(struct bitmap *base, struct commit *commit)
-{
-	khiter_t hash_pos;
-	uint32_t bitmap_pos = find_object_pos(&commit->object.oid);
-
-	if (bitmap_get(base, bitmap_pos))
-		return 0;
-
-	hash_pos = kh_get_oid_map(writer.bitmaps, commit->object.oid);
-	if (hash_pos < kh_end(writer.bitmaps)) {
-		struct bitmapped_commit *bc = kh_value(writer.bitmaps, hash_pos);
-		bitmap_or_ewah(base, bc->bitmap);
-		return 0;
-	}
-
-	bitmap_set(base, bitmap_pos);
-	return 1;
-}
-
-static int
-should_include(struct commit *commit, void *_data)
-{
-	struct bitmap *base = _data;
-
-	if (!add_to_include_set(base, commit)) {
-		struct commit_list *parent = commit->parents;
-
-		mark_as_seen((struct object *)commit);
-
-		while (parent) {
-			parent->item->object.flags |= SEEN;
-			mark_as_seen((struct object *)parent->item);
-			parent = parent->next;
-		}
-
-		return 0;
-	}
-
-	return 1;
-}
-
 static void compute_xor_offsets(void)
 {
 	static const int MAX_XOR_OFFSET_SEARCH = 10;
@@ -248,79 +177,188 @@ static void compute_xor_offsets(void)
 	}
 }
 
-void bitmap_writer_build(struct packing_data *to_pack)
+struct bb_commit {
+	struct commit_list *children;
+	struct bitmap *bitmap;
+	unsigned selected:1;
+	unsigned idx; /* within selected array */
+};
+
+define_commit_slab(bb_data, struct bb_commit);
+
+struct bitmap_builder {
+	struct bb_data data;
+	struct commit **commits;
+	size_t commits_nr, commits_alloc;
+};
+
+static void bitmap_builder_init(struct bitmap_builder *bb,
+				struct bitmap_writer *writer)
 {
-	static const double REUSE_BITMAP_THRESHOLD = 0.2;
-
-	int i, reuse_after, need_reset;
-	struct bitmap *base = bitmap_new();
 	struct rev_info revs;
+	struct commit *commit;
+	unsigned int i;
+
+	memset(bb, 0, sizeof(*bb));
+	init_bb_data(&bb->data);
+
+	reset_revision_walk();
+	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
+	revs.topo_order = 1;
+
+	for (i = 0; i < writer->selected_nr; i++) {
+		struct commit *c = writer->selected[i].commit;
+		struct bb_commit *ent = bb_data_at(&bb->data, c);
+		ent->selected = 1;
+		ent->idx = i;
+		add_pending_object(&revs, &c->object, "");
+	}
+
+	if (prepare_revision_walk(&revs))
+		die("revision walk setup failed");
+
+	while ((commit = get_revision(&revs))) {
+		struct commit_list *p;
+
+		parse_commit_or_die(commit);
+
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = commit;
+
+		for (p = commit->parents; p; p = p->next) {
+			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
+			commit_list_insert(commit, &ent->children);
+		}
+	}
+}
+
+static void bitmap_builder_clear(struct bitmap_builder *bb)
+{
+	clear_bb_data(&bb->data);
+	free(bb->commits);
+	bb->commits_nr = bb->commits_alloc = 0;
+}
+
+static void fill_bitmap_tree(struct bitmap *bitmap,
+			     struct tree *tree)
+{
+	uint32_t pos;
+	struct tree_desc desc;
+	struct name_entry entry;
+
+	/*
+	 * If our bit is already set, then there is nothing to do. Both this
+	 * tree and all of its children will be set.
+	 */
+	pos = find_object_pos(&tree->object.oid);
+	if (bitmap_get(bitmap, pos))
+		return;
+	bitmap_set(bitmap, pos);
+
+	if (parse_tree(tree) < 0)
+		die("unable to load tree object %s",
+		    oid_to_hex(&tree->object.oid));
+	init_tree_desc(&desc, tree->buffer, tree->size);
+
+	while (tree_entry(&desc, &entry)) {
+		switch (object_type(entry.mode)) {
+		case OBJ_TREE:
+			fill_bitmap_tree(bitmap,
+					 lookup_tree(the_repository, &entry.oid));
+			break;
+		case OBJ_BLOB:
+			bitmap_set(bitmap, find_object_pos(&entry.oid));
+			break;
+		default:
+			/* Gitlink, etc; not reachable */
+			break;
+		}
+	}
+
+	free_tree_buffer(tree);
+}
+
+static void fill_bitmap_commit(struct bb_commit *ent,
+			       struct commit *commit)
+{
+	if (!ent->bitmap)
+		ent->bitmap = bitmap_new();
+
+	/*
+	 * mark ourselves, but do not bother with parents; their values
+	 * will already have been propagated to us
+	 */
+	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
+	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+}
+
+static void store_selected(struct bb_commit *ent, struct commit *commit)
+{
+	struct bitmapped_commit *stored = &writer.selected[ent->idx];
+	khiter_t hash_pos;
+	int hash_ret;
+
+	/*
+	 * the "reuse bitmaps" phase may have stored something here, but
+	 * our new algorithm doesn't use it. Drop it.
+	 */
+	if (stored->bitmap)
+		ewah_free(stored->bitmap);
+
+	stored->bitmap = bitmap_to_ewah(ent->bitmap);
+
+	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
+	if (hash_ret == 0)
+		die("Duplicate entry when writing index: %s",
+		    oid_to_hex(&commit->object.oid));
+	kh_value(writer.bitmaps, hash_pos) = stored;
+}
+
+void bitmap_writer_build(struct packing_data *to_pack)
+{
+	struct bitmap_builder bb;
+	size_t i;
+	int nr_stored = 0; /* for progress */
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
 
 	if (writer.show_progress)
 		writer.progress = start_progress("Building bitmaps", writer.selected_nr);
-
-	repo_init_revisions(to_pack->repo, &revs, NULL);
-	revs.tag_objects = 1;
-	revs.tree_objects = 1;
-	revs.blob_objects = 1;
-	revs.no_walk = 0;
-
-	revs.include_check = should_include;
-	reset_revision_walk();
-
-	reuse_after = writer.selected_nr * REUSE_BITMAP_THRESHOLD;
-	need_reset = 0;
-
-	for (i = writer.selected_nr - 1; i >= 0; --i) {
-		struct bitmapped_commit *stored;
-		struct object *object;
-
-		khiter_t hash_pos;
-		int hash_ret;
-
-		stored = &writer.selected[i];
-		object = (struct object *)stored->commit;
-
-		if (stored->bitmap == NULL) {
-			if (i < writer.selected_nr - 1 &&
-			    (need_reset ||
-			     !in_merge_bases(writer.selected[i + 1].commit,
-					     stored->commit))) {
-			    bitmap_reset(base);
-			    reset_all_seen();
-			}
-
-			add_pending_object(&revs, object, "");
-			revs.include_check_data = base;
-
-			if (prepare_revision_walk(&revs))
-				die("revision walk setup failed");
-
-			traverse_commit_list(&revs, show_commit, show_object, base);
-
-			object_array_clear(&revs.pending);
-
-			stored->bitmap = bitmap_to_ewah(base);
-			need_reset = 0;
-		} else
-			need_reset = 1;
-
-		if (i >= reuse_after)
-			stored->flags |= BITMAP_FLAG_REUSE;
-
-		hash_pos = kh_put_oid_map(writer.bitmaps, object->oid, &hash_ret);
-		if (hash_ret == 0)
-			die("Duplicate entry when writing index: %s",
-			    oid_to_hex(&object->oid));
-
-		kh_value(writer.bitmaps, hash_pos) = stored;
-		display_progress(writer.progress, writer.selected_nr - i);
+	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
+			    the_repository);
+
+	bitmap_builder_init(&bb, &writer);
+	for (i = bb.commits_nr; i > 0; i--) {
+		struct commit *commit = bb.commits[i-1];
+		struct bb_commit *ent = bb_data_at(&bb.data, commit);
+		struct commit *child;
+
+		fill_bitmap_commit(ent, commit);
+
+		if (ent->selected) {
+			store_selected(ent, commit);
+			nr_stored++;
+			display_progress(writer.progress, nr_stored);
+		}
+
+		while ((child = pop_commit(&ent->children))) {
+			struct bb_commit *child_ent =
+				bb_data_at(&bb.data, child);
+
+			if (child_ent->bitmap)
+				bitmap_or(child_ent->bitmap, ent->bitmap);
+			else
+				child_ent->bitmap = bitmap_dup(ent->bitmap);
+		}
+		bitmap_free(ent->bitmap);
+		ent->bitmap = NULL;
 	}
+	bitmap_builder_clear(&bb);
+
+	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
+			    the_repository);
 
-	bitmap_free(base);
 	stop_progress(&writer.progress);
 
 	compute_xor_offsets();
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (9 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
@ 2020-12-08 22:03   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
                     ` (12 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:03 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

Our algorithm to generate reachability bitmaps walks through the commit
graph from the bottom up, passing bitmap data from each commit to its
descendants. For a linear stretch of history like:

  A -- B -- C

our sequence of steps is:

  - compute the bitmap for A by walking its trees, etc

  - duplicate A's bitmap as a starting point for B; we can now free A's
    bitmap, since we only needed it as an intermediate result

  - OR in any extra objects that B can reach into its bitmap

  - duplicate B's bitmap as a starting point for C; likewise, free B's
    bitmap

  - OR in objects for C, and so on...

Rather than duplicating bitmaps and immediately freeing the original, we
can just pass ownership from commit to commit. Note that this doesn't
always work:

  - the recipient may be a merge which already has an intermediate
    bitmap from its other ancestor. In that case we have to OR our
    result into it. Note that the first ancestor to reach the merge does
    get to pass ownership, though.

  - we may have multiple children; we can only pass ownership to one of
    them

However, it happens often enough and copying bitmaps is expensive enough
that this provides a noticeable speedup. On a clone of linux.git, this
reduces the time to generate bitmaps from 205s to 70s. This is about the
same amount of time it took to generate bitmaps using our old "many
traversals" algorithm (the previous commit measures the identical
scenario as taking 63s). It unfortunately provides only a very modest
reduction in the peak memory usage, though.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index bcd059ccd9..1eb9615df8 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -333,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
 		struct commit *child;
+		int reused = 0;
 
 		fill_bitmap_commit(ent, commit);
 
@@ -348,10 +349,15 @@ void bitmap_writer_build(struct packing_data *to_pack)
 
 			if (child_ent->bitmap)
 				bitmap_or(child_ent->bitmap, ent->bitmap);
-			else
+			else if (reused)
 				child_ent->bitmap = bitmap_dup(ent->bitmap);
+			else {
+				child_ent->bitmap = ent->bitmap;
+				reused = 1;
+			}
 		}
-		bitmap_free(ent->bitmap);
+		if (!reused)
+			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
 	bitmap_builder_clear(&bb);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 12/24] pack-bitmap-write: fill bitmap with commit history
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (10 preceding siblings ...)
  2020-12-08 22:03   ` [PATCH v4 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
                     ` (11 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The current implementation of bitmap_writer_build() creates a
reachability bitmap for every walked commit. After computing a bitmap
for a commit, those bits are pushed to an in-progress bitmap for its
children.

fill_bitmap_commit() assumes the bits corresponding to objects
reachable from the parents of a commit are already set. This means that
when visiting a new commit, we only have to walk the objects reachable
between it and any of its parents.

A future change to bitmap_writer_build() will relax this condition so
not all parents have their bits set. Prepare for that by having
'fill_bitmap_commit()' walk parents until reaching commits whose bits
are already set. Then, walk the trees for these commits as well.

This has no functional change with the current implementation of
bitmap_writer_build().

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 1eb9615df8..957639241e 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -12,6 +12,7 @@
 #include "sha1-lookup.h"
 #include "pack-objects.h"
 #include "commit-reach.h"
+#include "prio-queue.h"
 
 struct bitmapped_commit {
 	struct commit *commit;
@@ -279,17 +280,30 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 }
 
 static void fill_bitmap_commit(struct bb_commit *ent,
-			       struct commit *commit)
+			       struct commit *commit,
+			       struct prio_queue *queue)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	/*
-	 * mark ourselves, but do not bother with parents; their values
-	 * will already have been propagated to us
-	 */
 	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
-	fill_bitmap_tree(ent->bitmap, get_commit_tree(commit));
+	prio_queue_put(queue, commit);
+
+	while (queue->nr) {
+		struct commit_list *p;
+		struct commit *c = prio_queue_get(queue);
+
+		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
+		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+
+		for (p = c->parents; p; p = p->next) {
+			int pos = find_object_pos(&p->item->object.oid);
+			if (!bitmap_get(ent->bitmap, pos)) {
+				bitmap_set(ent->bitmap, pos);
+				prio_queue_put(queue, p->item);
+			}
+		}
+	}
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -319,6 +333,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	struct bitmap_builder bb;
 	size_t i;
 	int nr_stored = 0; /* for progress */
+	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -335,7 +350,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit);
+		fill_bitmap_commit(ent, commit, &queue);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -360,6 +375,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			bitmap_free(ent->bitmap);
 		ent->bitmap = NULL;
 	}
+	clear_prio_queue(&queue);
 	bitmap_builder_clear(&bb);
 
 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 13/24] bitmap: implement bitmap_is_subset()
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (11 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 14/24] commit: implement commit_list_contains() Taylor Blau
                     ` (10 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_is_subset() function checks if the 'self' bitmap contains any
bits that are not on in the 'other' bitmap. Up until this patch, it
had a declaration, but no implementation or callers. A subsequent patch
will want this function, so implement it here.
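
For illustration only (not part of this patch), a hypothetical caller
with two in-memory bitmaps could use the helper like so, relying on the
return convention described above (non-zero means 'self' has a bit that
'other' lacks):

    struct bitmap *need = bitmap_new();
    struct bitmap *have = bitmap_new();

    bitmap_set(need, 1);
    bitmap_set(have, 1);
    bitmap_set(have, 5);

    /*
     * Returns 0 here: every bit that is on in 'need' is also on in
     * 'have'. A non-zero return would mean 'need' has at least one
     * bit that 'have' does not.
     */
    if (bitmap_is_subset(need, have))
        BUG("did not expect 'need' to contain extra bits");

    bitmap_free(need);
    bitmap_free(have);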

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 ewah/bitmap.c | 21 +++++++++++++++++++++
 ewah/ewok.h   |  2 +-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/ewah/bitmap.c b/ewah/bitmap.c
index b5f6376282..0d31cdc866 100644
--- a/ewah/bitmap.c
+++ b/ewah/bitmap.c
@@ -195,6 +195,27 @@ int bitmap_equals(struct bitmap *self, struct bitmap *other)
 	return 1;
 }
 
+int bitmap_is_subset(struct bitmap *self, struct bitmap *other)
+{
+	size_t common_size, i;
+
+	if (self->word_alloc < other->word_alloc)
+		common_size = self->word_alloc;
+	else {
+		common_size = other->word_alloc;
+		for (i = common_size; i < self->word_alloc; i++) {
+			if (self->words[i])
+				return 1;
+		}
+	}
+
+	for (i = 0; i < common_size; i++) {
+		if (self->words[i] & ~other->words[i])
+			return 1;
+	}
+	return 0;
+}
+
 void bitmap_reset(struct bitmap *bitmap)
 {
 	memset(bitmap->words, 0x0, bitmap->word_alloc * sizeof(eword_t));
diff --git a/ewah/ewok.h b/ewah/ewok.h
index 1fc555e672..66920965da 100644
--- a/ewah/ewok.h
+++ b/ewah/ewok.h
@@ -180,7 +180,7 @@ int bitmap_get(struct bitmap *self, size_t pos);
 void bitmap_reset(struct bitmap *self);
 void bitmap_free(struct bitmap *self);
 int bitmap_equals(struct bitmap *self, struct bitmap *other);
-int bitmap_is_subset(struct bitmap *self, struct bitmap *super);
+int bitmap_is_subset(struct bitmap *self, struct bitmap *other);
 
 struct ewah_bitmap * bitmap_to_ewah(struct bitmap *bitmap);
 struct bitmap *ewah_to_bitmap(struct ewah_bitmap *ewah);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 14/24] commit: implement commit_list_contains()
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (12 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 15/24] t5310: add branch-based checks Taylor Blau
                     ` (9 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

It can be helpful to check if a commit_list contains a commit. Use
pointer equality, assuming lookup_commit() was used.
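
As an illustrative sketch (the object ids 'oid_a' and 'oid_b' are
hypothetical and not part of this patch):

    struct commit *a = lookup_commit(the_repository, &oid_a);
    struct commit *b = lookup_commit(the_repository, &oid_b);
    struct commit_list *list = NULL;

    commit_list_insert(a, &list);

    /* pointer equality: true for 'a', false for the unrelated 'b' */
    if (!commit_list_contains(a, list) || commit_list_contains(b, list))
        BUG("commit_list_contains() gave an unexpected answer");

    free_commit_list(list);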

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 commit.c | 11 +++++++++++
 commit.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/commit.c b/commit.c
index fe1fa3dc41..9a785bf906 100644
--- a/commit.c
+++ b/commit.c
@@ -544,6 +544,17 @@ struct commit_list *commit_list_insert(struct commit *item, struct commit_list *
 	return new_list;
 }
 
+int commit_list_contains(struct commit *item, struct commit_list *list)
+{
+	while (list) {
+		if (list->item == item)
+			return 1;
+		list = list->next;
+	}
+
+	return 0;
+}
+
 unsigned commit_list_count(const struct commit_list *l)
 {
 	unsigned c = 0;
diff --git a/commit.h b/commit.h
index 5467786c7b..742a6de460 100644
--- a/commit.h
+++ b/commit.h
@@ -167,6 +167,8 @@ int find_commit_subject(const char *commit_buffer, const char **subject);
 
 struct commit_list *commit_list_insert(struct commit *item,
 					struct commit_list **list);
+int commit_list_contains(struct commit *item,
+			 struct commit_list *list);
 struct commit_list **commit_list_append(struct commit *commit,
 					struct commit_list **next);
 unsigned commit_list_count(const struct commit_list *l);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 15/24] t5310: add branch-based checks
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (13 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 14/24] commit: implement commit_list_contains() Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
                     ` (8 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The current rev-list tests that check the bitmap data only work on HEAD
instead of multiple branches. Expand the test cases to handle both
'second' and 'other' branches.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 t/t5310-pack-bitmaps.sh | 61 +++++++++++++++++++++++------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index bf094cfe42..8bf02336d9 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -42,63 +42,70 @@ test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
 	git rev-list --test-bitmap HEAD
 '
 
-rev_list_tests() {
-	state=$1
-
-	test_expect_success "counting commits via bitmap ($state)" '
-		git rev-list --count HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD >actual &&
+rev_list_tests_head () {
+	test_expect_success "counting commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch >expect &&
+		git rev-list --use-bitmap-index --count $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting partial commits via bitmap ($state)" '
-		git rev-list --count HEAD~5..HEAD >expect &&
-		git rev-list --use-bitmap-index --count HEAD~5..HEAD >actual &&
+	test_expect_success "counting partial commits via bitmap ($state, $branch)" '
+		git rev-list --count $branch~5..$branch >expect &&
+		git rev-list --use-bitmap-index --count $branch~5..$branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limit ($state)" '
-		git rev-list --count -n 1 HEAD >expect &&
-		git rev-list --use-bitmap-index --count -n 1 HEAD >actual &&
+	test_expect_success "counting commits with limit ($state, $branch)" '
+		git rev-list --count -n 1 $branch >expect &&
+		git rev-list --use-bitmap-index --count -n 1 $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting non-linear history ($state)" '
+	test_expect_success "counting non-linear history ($state, $branch)" '
 		git rev-list --count other...second >expect &&
 		git rev-list --use-bitmap-index --count other...second >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting commits with limiting ($state)" '
-		git rev-list --count HEAD -- 1.t >expect &&
-		git rev-list --use-bitmap-index --count HEAD -- 1.t >actual &&
+	test_expect_success "counting commits with limiting ($state, $branch)" '
+		git rev-list --count $branch -- 1.t >expect &&
+		git rev-list --use-bitmap-index --count $branch -- 1.t >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "counting objects via bitmap ($state)" '
-		git rev-list --count --objects HEAD >expect &&
-		git rev-list --use-bitmap-index --count --objects HEAD >actual &&
+	test_expect_success "counting objects via bitmap ($state, $branch)" '
+		git rev-list --count --objects $branch >expect &&
+		git rev-list --use-bitmap-index --count --objects $branch >actual &&
 		test_cmp expect actual
 	'
 
-	test_expect_success "enumerate commits ($state)" '
-		git rev-list --use-bitmap-index HEAD >actual &&
-		git rev-list HEAD >expect &&
+	test_expect_success "enumerate commits ($state, $branch)" '
+		git rev-list --use-bitmap-index $branch >actual &&
+		git rev-list $branch >expect &&
 		test_bitmap_traversal --no-confirm-bitmaps expect actual
 	'
 
-	test_expect_success "enumerate --objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD >actual &&
-		git rev-list --objects HEAD >expect &&
+	test_expect_success "enumerate --objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch >actual &&
+		git rev-list --objects $branch >expect &&
 		test_bitmap_traversal expect actual
 	'
 
-	test_expect_success "bitmap --objects handles non-commit objects ($state)" '
-		git rev-list --objects --use-bitmap-index HEAD tagged-blob >actual &&
+	test_expect_success "bitmap --objects handles non-commit objects ($state, $branch)" '
+		git rev-list --objects --use-bitmap-index $branch tagged-blob >actual &&
 		grep $blob actual
 	'
 }
 
+rev_list_tests () {
+	state=$1
+
+	for branch in "second" "other"
+	do
+		rev_list_tests_head
+	done
+}
+
 rev_list_tests 'full bitmap'
 
 test_expect_success 'clone from bitmapped repository' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 16/24] pack-bitmap-write: rename children to reverse_edges
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (14 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 15/24] t5310: add branch-based checks Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
                     ` (7 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_builder_init() method walks the reachable commits in
topological order and constructs a "reverse graph" along the way. At the
moment, this reverse graph contains an edge from commit A to commit B if
and only if A is a parent of B. Thus, the name "children" is appropriate
for this reverse graph.

In the next change, we will repurpose the reverse graph so that its
edges no longer connect directly-adjacent commits in the commit-graph,
but instead represent a more abstract relationship. The previous
changes have already incorporated
the necessary updates to fill_bitmap_commit() that allow these edges to
not be immediate children.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 957639241e..7e218d02a6 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -179,7 +179,7 @@ static void compute_xor_offsets(void)
 }
 
 struct bb_commit {
-	struct commit_list *children;
+	struct commit_list *reverse_edges;
 	struct bitmap *bitmap;
 	unsigned selected:1;
 	unsigned idx; /* within selected array */
@@ -228,7 +228,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		for (p = commit->parents; p; p = p->next) {
 			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->children);
+			commit_list_insert(commit, &ent->reverse_edges);
 		}
 	}
 }
@@ -358,7 +358,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 			display_progress(writer.progress, nr_stored);
 		}
 
-		while ((child = pop_commit(&ent->children))) {
+		while ((child = pop_commit(&ent->reverse_edges))) {
 			struct bb_commit *child_ent =
 				bb_data_at(&bb.data, child);
 
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 17/24] pack-bitmap.c: check reads more aggressively when loading
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (15 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
                     ` (6 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

Before 'load_bitmap_entries_v1()' reads an actual EWAH bitmap, it should
check that it can safely do so by ensuring that there are at least 6
bytes available to be read (four for the commit's index position, and
then two more for the xor offset and flags, respectively).
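
For reference, the fixed-width prefix that this check covers looks
roughly like this (a sketch based on the description above; the
EWAH-compressed bitmap data follows these fields):

    /*
     * Each bitmap entry begins with a 6-byte prefix:
     *
     *   4 bytes: commit position in the pack index (network order,
     *            read via read_be32())
     *   1 byte:  xor offset (read via read_u8())
     *   1 byte:  flags      (read via read_u8())
     */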

Likewise, it should check that the commit index it read refers to a
legitimate object in the pack.

The first fix catches a truncation bug that was exposed when testing,
and the second is purely precautionary.

There are some possible future improvements, not pursued here. They are:

  - Computing the correct boundary of the bitmap itself in the caller
    and ensuring that we don't read past it. This may or may not be
    worth it, since in a truncation situation, all bets are off: (is the
    trailer still there and the bitmap entries malformed, or is the
    trailer truncated?). The best we can do is try to read what's there
    as if it's correct data (and protect ourselves when it's obviously
    bogus).

  - Avoid the magic "6" by teaching read_be32() and read_u8() (both of
    which are custom helpers for this function) to check sizes before
    advancing the pointers.

  - Adding more tests in this area. Testing these truncation situations
    is remarkably fragile to even subtle changes in the bitmap
    generation. So, the resulting tests are likely to be quite brittle.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 4431f9f120..60c781d100 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -229,11 +229,16 @@ static int load_bitmap_entries_v1(struct bitmap_index *index)
 		uint32_t commit_idx_pos;
 		struct object_id oid;
 
+		if (index->map_size - index->map_pos < 6)
+			return error("corrupt ewah bitmap: truncated header for entry %d", i);
+
 		commit_idx_pos = read_be32(index->map, &index->map_pos);
 		xor_offset = read_u8(index->map, &index->map_pos);
 		flags = read_u8(index->map, &index->map_pos);
 
-		nth_packed_object_id(&oid, index->pack, commit_idx_pos);
+		if (nth_packed_object_id(&oid, index->pack, commit_idx_pos) < 0)
+			return error("corrupt ewah bitmap: commit index %u out of range",
+				     (unsigned)commit_idx_pos);
 
 		bitmap = read_bitmap_1(index);
 		if (!bitmap)
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 18/24] pack-bitmap-write: build fewer intermediate bitmaps
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (16 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:04   ` [PATCH v4 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
                     ` (5 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The bitmap_writer_build() method calls bitmap_builder_init() to
construct a list of commits reachable from the selected commits along
with a "reverse graph". This reverse graph has edges pointing from a
commit to other commits that can reach that commit. After computing a
reachability bitmap for a commit, the values in that bitmap are then
copied to the reachability bitmaps across the edges in the reverse
graph.

We can now relax the role of the reverse graph to greatly reduce the
number of intermediate reachability bitmaps we compute during this
reverse walk. The end result is that we walk objects the same number of
times as before when constructing the reachability bitmaps, but we also
spend much less time copying bits between bitmaps and have much lower
memory pressure in the process.

The core idea is to select a set of "important" commits based on
interactions among the sets of commits reachable from each selected commit.

The first technical concept is to create a new 'commit_mask' member in the
bb_commit struct. Note that the selected commits are provided in an
ordered array. The first thing to do is to mark the ith bit in the
commit_mask for the ith selected commit. As we walk the commit-graph, we
copy the bits in a commit's commit_mask to its parents. At the end of
the walk, the ith bit in the commit_mask for a commit C stores a boolean
representing "The ith selected commit can reach C."

As we walk, we will discover non-selected commits that are important. We
will get into this later, but those important commits must also receive
bit positions, growing the width of the bitmasks as we walk. At the true
end of the walk, the ith bit means "the ith _important_ commit can reach
C."

MAXIMAL COMMITS
---------------

We use a new 'maximal' bit in the bb_commit struct to represent whether
a commit is important or not. The term "maximal" comes from the
partially-ordered set of commits in the commit-graph where C >= P if P
is a parent of C, and then extending the relationship transitively.
Instead of taking the maximal commits across the entire commit-graph, we
instead focus on selecting each commit that is maximal among commits
with the same bits on in their commit_mask. This definition is
important, so let's consider an example.

Suppose we have three selected commits A, B, and C. These are assigned
bitmasks 100, 010, and 001 to start. Each of these can be marked as
maximal immediately because they each will be the uniquely maximal
commit that contains their own bit. Keep in mind that these commits
may have different bitmasks after the walk; for example, if B can reach
C but A cannot, then the final bitmask for C is 011. Even in these
cases, C would still be a maximal commit among all commits with the
third bit on in their masks.

Now define sets X, Y, and Z to be the sets of commits reachable from A,
B, and C, respectively. The intersections of these sets correspond to
different bitmasks:

 * 100: X - (Y union Z)
 * 010: Y - (X union Z)
 * 001: Z - (X union Y)
 * 110: (X intersect Y) - Z
 * 101: (X intersect Z) - Y
 * 011: (Y intersect Z) - X
 * 111: X intersect Y intersect Z

This can be visualized with the following Hasse diagram:

	100    010    001
         | \  /   \  / |
         |  \/     \/  |
         |  /\     /\  |
         | /  \   /  \ |
        110    101    011
          \___  |  ___/
              \ | /
               111

Some of these bitmasks may not be represented, depending on the topology
of the commit-graph. In fact, we are counting on it, since the number of
possible bitmasks is exponential in the number of selected commits, but
is also limited by the total number of commits. In practice, very few
bitmasks are possible because most commits converge on a common "trunk"
in the commit history.

With this three-bit example, we wish to find commits that are maximal
for each bitmask. How can we identify this as we are walking?

As we walk, we visit a commit C. Since we are walking the commits in
topo-order, we know that C is visited after all of its children are
visited. Thus, when we get C from the revision walk we inspect the
'maximal' property of its bb_data and use that to determine if C is truly
important. Its commit_mask is also nearly final. If C is not one of the
originally-selected commits, then assign a bit position to C (by
incrementing num_maximal) and set that bit on in commit_mask. See
"MULTIPLE MAXIMAL COMMITS" below for more detail on this.

Now that the commit C is known to be maximal or not, consider each
parent P of C. Compute two new values:

 * c_not_p : true if and only if the commit_mask for C contains a bit
             that is not contained in the commit_mask for P.

 * p_not_c : true if and only if the commit_mask for P contains a bit
             that is not contained in the commit_mask for C.

If c_not_p is false, then P already has all of the bits that C would
provide to its commit_mask. In this case, move on to other parents as C
has nothing to contribute to P's state that was not already provided by
other children of P.

We continue with the case that c_not_p is true. This means there are
bits in C's commit_mask to copy to P's commit_mask, so use bitmap_or()
to add those bits.

If p_not_c is also true, then set the maximal bit for P to one. This means
that if no other commit has P as a parent, then P is definitely maximal.
This is because no child had the same bitmask. It is important to think
about the maximal bit for P at this point as a temporary state: "P is
maximal based on current information."

In contrast, if p_not_c is false, then set the maximal bit for P to
zero. Further, clear all reverse_edges for P since any edges that were
previously assigned to P are no longer important. P will gain all
reverse edges based on C.

The final thing we need to do is to update the reverse edges for P.
These reverse edges represent "which closest maximal commits
contributed bits to my commit_mask?" Since C contributed bits to P's
commit_mask in this case, C must add to the reverse edges of P.

If C is maximal, then C is a 'closest' maximal commit that contributed
bits to P. Add C to P's reverse_edges list.

Otherwise, C has a list of maximal commits that contributed bits to its
bitmask (and this list is exactly one element). Add all of these items
to P's reverse_edges list. Be careful to ignore duplicates here.

After inspecting all parents P for a commit C, we can clear the
commit_mask for C. This reduces the memory load to be limited to the
"width" of the commit graph.

Consider our ABC/XYZ example from earlier and let's inspect the state of
the commits for an interesting bitmask, say 011. Suppose that D is the
only maximal commit with this bitmask (in the first three bits). All
other commits with bitmask 011 have D as the only entry in their
reverse_edges list. D's reverse_edges list contains B and C.

COMPUTING REACHABILITY BITMAPS
------------------------------

Now that we have our definition, let's zoom out and consider what
happens with our new reverse graph when computing reachability bitmaps.
We walk the reverse graph in reverse-topo-order, so we visit commits
with largest commit_masks first. After we compute the reachability
bitmap for a commit C, we push the bits in that bitmap to each commit D
in the reverse edge list for C. Then, when we finally visit D we already
have the bits for everything reachable from maximal commits that D can
reach and we only need to walk the objects in the set-difference.

In our ABC/XYZ example, when we finally walk for the commit A we only
need to walk commits with bitmask equal to A's bitmask. If that bitmask
is 100, then we are only walking commits in X - (Y union Z) because the
bitmap already contains the bits for objects reachable from (X intersect
Y) union (X intersect Z) (i.e. the bits from the reachability bitmaps
for the maximal commits with bitmasks 110 and 101).

The behavior is intended to walk each commit (and the trees that commit
introduces) at most once while allocating and copying fewer reachability
bitmaps. There is one caveat: what happens when there are multiple
maximal commits with the same bitmask, with respect to the initial set
of selected commits?

MULTIPLE MAXIMAL COMMITS
------------------------

Earlier, we mentioned that when we discover a new maximal commit, we
assign a new bit position to that commit and set that bit position to
one for that commit. This is absolutely important for interesting
commit-graphs such as git/git and torvalds/linux. The reason is due to
the existence of "butterflies" in the commit-graph partial order.

Here is an example of four commits forming a butterfly:

   I    J
   |\  /|
   | \/ |
   | /\ |
   |/  \|
   M    N
    \  /
     \/
      Q

Here, I and J both have parents M and N. In general, these do not need
to be exact parent relationships, but reachability relationships. The
most important part is that M and N cannot reach each other, so they are
independent in the partial order. If I had commit_mask 10 and J had
commit_mask 01, then M and N would both be assigned commit_mask 11 and
be maximal commits with the bitmask 11. Then, what happens when M and N
can both reach a commit Q? If Q is also assigned the bitmask 11, then it
is not maximal but is reachable from both M and N.

While this is not necessarily a deal-breaker for our abstract definition
of finding maximal commits according to a given bitmask, we have a few
issues that can come up in our larger picture of constructing
reachability bitmaps.

In particular, if we do not also consider Q to be a "maximal" commit,
then we will walk commits reachable from Q twice: once when computing
the reachability bitmap for M and another time when computing the
reachability bitmap for N. This becomes much worse if the topology
continues this pattern with multiple butterflies.

The solution has already been mentioned: each of M and N are assigned
their own bits to the bitmask and hence they become uniquely maximal for
their bitmasks. Finally, Q also becomes maximal and thus we do not need
to walk its commits multiple times. The final bitmasks for these commits
are as follows:

  I:10       J:01
   |\        /|
   | \ _____/ |
   | /\____   |
   |/      \  |
   M:111    N:1101
        \  /
       Q:11111

Further, Q's reverse edge list is { M, N }, while M and N both have
reverse edge list { I, J }.

PERFORMANCE MEASUREMENTS
------------------------

Now that we've spent a LOT of time on the theory of this algorithm,
let's show that this is actually worth all that effort.

To test the performance, use GIT_TRACE2_PERF=1 when running
'git repack -abd' in a repository with no existing reachability bitmaps.
This avoids any issues with existing bitmaps skewing the numbers.

Inspect the "building_bitmaps_total" region in the trace2 output to
focus on the portion of work that is affected by this change. Here are
the performance comparisons for a few repositories. The timings are for
the following versions of Git: "multi" is the timing from before any
reverse graph is constructed, where we might perform multiple
traversals. "reverse" is for the previous change where the reverse graph
has every reachable commit.  Finally "maximal" is the version introduced
here where the reverse graph only contains the maximal commits.

      Repository: git/git
           multi: 2.628 sec
         reverse: 2.344 sec
         maximal: 2.047 sec

      Repository: torvalds/linux
           multi: 64.7 sec
         reverse: 205.3 sec
         maximal: 44.7 sec

So we've not only recovered any time lost to switching to the
reverse-edge algorithm, but we come out ahead of "multi" in all cases.
Likewise, peak heap has gone back to something reasonable:

      Repository: torvalds/linux
           multi: 2.087 GB
         reverse: 3.141 GB
         maximal: 2.288 GB

While I do not have access to full fork networks on GitHub, Peff has run
this algorithm on the chromium/chromium fork network and reported a
change from 3 hours to ~233 seconds. That network is particularly
beneficial for this approach because it has a long, linear history along
with many tags. The "multi" approach was obviously quadratic and the new
approach is linear.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 72 +++++++++++++++++++++++++++++++---
 t/t5310-pack-bitmaps.sh | 85 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 148 insertions(+), 9 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 7e218d02a6..0af93193d8 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -180,8 +180,10 @@ static void compute_xor_offsets(void)
 
 struct bb_commit {
 	struct commit_list *reverse_edges;
+	struct bitmap *commit_mask;
 	struct bitmap *bitmap;
-	unsigned selected:1;
+	unsigned selected:1,
+		 maximal:1;
 	unsigned idx; /* within selected array */
 };
 
@@ -198,7 +200,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i;
+	unsigned int i, num_maximal;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -210,27 +212,85 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
 		struct bb_commit *ent = bb_data_at(&bb->data, c);
+
 		ent->selected = 1;
+		ent->maximal = 1;
 		ent->idx = i;
+
+		ent->commit_mask = bitmap_new();
+		bitmap_set(ent->commit_mask, i);
+
 		add_pending_object(&revs, &c->object, "");
 	}
+	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
 		struct commit_list *p;
+		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
 
-		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
-		bb->commits[bb->commits_nr++] = commit;
+		c_ent = bb_data_at(&bb->data, commit);
+
+		if (c_ent->maximal) {
+			if (!c_ent->selected) {
+				bitmap_set(c_ent->commit_mask, num_maximal);
+				num_maximal++;
+			}
+
+			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+			bb->commits[bb->commits_nr++] = commit;
+		}
 
 		for (p = commit->parents; p; p = p->next) {
-			struct bb_commit *ent = bb_data_at(&bb->data, p->item);
-			commit_list_insert(commit, &ent->reverse_edges);
+			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
+			int c_not_p, p_not_c;
+
+			if (!p_ent->commit_mask) {
+				p_ent->commit_mask = bitmap_new();
+				c_not_p = 1;
+				p_not_c = 0;
+			} else {
+				c_not_p = bitmap_is_subset(c_ent->commit_mask, p_ent->commit_mask);
+				p_not_c = bitmap_is_subset(p_ent->commit_mask, c_ent->commit_mask);
+			}
+
+			if (!c_not_p)
+				continue;
+
+			bitmap_or(p_ent->commit_mask, c_ent->commit_mask);
+
+			if (p_not_c)
+				p_ent->maximal = 1;
+			else {
+				p_ent->maximal = 0;
+				free_commit_list(p_ent->reverse_edges);
+				p_ent->reverse_edges = NULL;
+			}
+
+			if (c_ent->maximal) {
+				commit_list_insert(commit, &p_ent->reverse_edges);
+			} else {
+				struct commit_list *cc = c_ent->reverse_edges;
+
+				for (; cc; cc = cc->next) {
+					if (!commit_list_contains(cc->item, p_ent->reverse_edges))
+						commit_list_insert(cc->item, &p_ent->reverse_edges);
+				}
+			}
 		}
+
+		bitmap_free(c_ent->commit_mask);
+		c_ent->commit_mask = NULL;
 	}
+
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_selected_commits", writer->selected_nr);
+	trace2_data_intmax("pack-bitmap-write", the_repository,
+			   "num_maximal_commits", num_maximal);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 8bf02336d9..6815fb6a4e 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -20,12 +20,88 @@ has_any () {
 	grep -Ff "$1" "$2"
 }
 
+# To ensure the logic for "maximal commits" is exercised, make
+# the repository a bit more complicated.
+#
+#    other                         master
+#      *                             *
+# (99 commits)                  (99 commits)
+#      *                             *
+#      |\                           /|
+#      | * octo-other  octo-master * |
+#      |/|\_________  ____________/|\|
+#      | \          \/  __________/  |
+#      |  | ________/\ /             |
+#      *  |/          * merge-right  *
+#      | _|__________/ \____________ |
+#      |/ |                         \|
+# (l1) *  * merge-left               * (r1)
+#      | / \________________________ |
+#      |/                           \|
+# (l2) *                             * (r2)
+#       \___________________________ |
+#                                   \|
+#                                    * (base)
+#
+# The important part for the maximal commit algorithm is how
+# the bitmasks are extended. Assuming starting bit positions
+# for master (bit 0) and other (bit 1), and some flexibility
+# in the order that merge bases are visited, the bitmasks at
+# the end should be:
+#
+#      master: 1       (maximal, selected)
+#       other: 01      (maximal, selected)
+# octo-master: 1
+#  octo-other: 01
+# merge-right: 111     (maximal)
+#        (l1): 111
+#        (r1): 111
+#  merge-left: 1101    (maximal)
+#        (l2): 11111   (maximal)
+#        (r2): 111101  (maximal)
+#      (base): 1111111 (maximal)
+
 test_expect_success 'setup repo with moderate-sized history' '
-	test_commit_bulk --id=file 100 &&
+	test_commit_bulk --id=file 10 &&
 	git branch -M second &&
 	git checkout -b other HEAD~5 &&
 	test_commit_bulk --id=side 10 &&
+
+	# add complicated history setup, including merges and
+	# ambiguous merge-bases
+
+	git checkout -b merge-left other~2 &&
+	git merge second~2 -m "merge-left" &&
+
+	git checkout -b merge-right second~1 &&
+	git merge other~1 -m "merge-right" &&
+
+	git checkout -b octo-second second &&
+	git merge merge-left merge-right -m "octopus-second" &&
+
+	git checkout -b octo-other other &&
+	git merge merge-left merge-right -m "octopus-other" &&
+
+	git checkout other &&
+	git merge octo-other -m "pull octopus" &&
+
 	git checkout second &&
+	git merge octo-second -m "pull octopus" &&
+
+	# Remove these branches so they are not selected
+	# as bitmap tips
+	git branch -D merge-left &&
+	git branch -D merge-right &&
+	git branch -D octo-other &&
+	git branch -D octo-second &&
+
+	# add padding to make these merges less interesting
+	# and avoid having them selected for bitmaps
+	test_commit_bulk --id=file 100 &&
+	git checkout other &&
+	test_commit_bulk --id=side 100 &&
+	git checkout second &&
+
 	bitmaptip=$(git rev-parse second) &&
 	blob=$(echo tagged-blob | git hash-object -w --stdin) &&
 	git tag tagged-blob $blob &&
@@ -33,9 +109,12 @@ test_expect_success 'setup repo with moderate-sized history' '
 '
 
 test_expect_success 'full repack creates bitmaps' '
-	git repack -ad &&
+	GIT_TRACE2_EVENT_NESTING=4 GIT_TRACE2_EVENT="$(pwd)/trace" \
+		git repack -ad &&
 	ls .git/objects/pack/ | grep bitmap >output &&
-	test_line_count = 1 output
+	test_line_count = 1 output &&
+	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (17 preceding siblings ...)
  2020-12-08 22:04   ` [PATCH v4 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
@ 2020-12-08 22:04   ` Taylor Blau
  2020-12-08 22:05     ` Taylor Blau
                     ` (4 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Jeff King <peff@peff.net>

The on-disk bitmap format has a flag to mark a bitmap to be "reused".
This is a rather curious feature, and works like this:

  - a run of pack-objects would decide to mark the last 80% of the
    bitmaps it generates with the reuse flag

  - the next time we generate bitmaps, we'd see those reuse flags from
    the last run, and mark those commits as special:

      - we'd be more likely to select those commits to get bitmaps in
        the new output

      - when generating the bitmap for a selected commit, we'd reuse the
        old bitmap as-is (rearranging the bits to match the new pack, of
        course)

However, neither of these behaviors particularly makes sense.

Just because a commit happened to be bitmapped last time does not make
it a good candidate for having a bitmap this time. In particular, we may
choose bitmaps based on how recent they are in history, or whether a ref
tip points to them, and those things will change. We're better off
re-considering fresh which commits are good candidates.

Reusing the existing bitmap _is_ a reasonable thing to do to save
computation. But only reusing exact bitmaps is a weak form of this. If
we have an old bitmap for A and now want a new bitmap for its child, we
should be able to compute that only by looking at trees and commits
that are new to the child. But this code would consider only exact
reuse (which is perhaps why it was eager to select those commits in the
first place).

Furthermore, the recent switch to the reverse-edge algorithm for
generating bitmaps dropped this optimization entirely (and yet still
performs better).

So let's do a few cleanups:

 - drop the whole "reusing bitmaps" phase of generating bitmaps. It's
   not helping anything, and is mostly unused code (or worse, code that
   is using CPU but not doing anything useful)

 - drop the use of the on-disk reuse flag to select commits to bitmap

 - stop setting the on-disk reuse flag in bitmaps we generate (since
   nothing respects it anymore)

We will keep a few innards of the reuse code, which will help us
implement a more capable version of the "reuse" optimization (see the
sketch after this list):

 - simplify rebuild_existing_bitmaps() into a function that only builds
   the mapping of bits between the old and new orders, but doesn't
   actually convert any bitmaps

 - make rebuild_bitmap() public; we'll call it lazily to convert bitmaps
   as we traverse (using the mapping created above)
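
Taken together, those kept pieces are intended to combine roughly like
the sketch below (illustrative only; 'old_bitmap_git', 'old', 'to_pack'
and 'reuse_ok' are hypothetical names for state that a later patch will
wire up):

    uint32_t *mapping = create_bitmap_mapping(old_bitmap_git, to_pack);
    struct bitmap *dest = bitmap_new();

    /*
     * 'old' is an ewah bitmap read from the previous pack;
     * rebuild_bitmap() returns 0 on success, after which 'dest' holds
     * the same objects renumbered into the new pack order.
     */
    if (old && !rebuild_bitmap(mapping, old, dest))
        reuse_ok = 1;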

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 builtin/pack-objects.c |  1 -
 pack-bitmap-write.c    | 50 +++++-------------------------------------
 pack-bitmap.c          | 46 +++++---------------------------------
 pack-bitmap.h          |  6 ++++-
 4 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 5617c01b5a..2a00358f34 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1104,7 +1104,6 @@ static void write_pack_file(void)
 				stop_progress(&progress_state);
 
 				bitmap_writer_show_progress(progress);
-				bitmap_writer_reuse_bitmaps(&to_pack);
 				bitmap_writer_select_commits(indexed_commits, indexed_commits_nr, -1);
 				bitmap_writer_build(&to_pack);
 				bitmap_writer_finish(written_list, nr_written,
diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 0af93193d8..333058854d 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -30,7 +30,6 @@ struct bitmap_writer {
 	struct ewah_bitmap *tags;
 
 	kh_oid_map_t *bitmaps;
-	kh_oid_map_t *reused;
 	struct packing_data *to_pack;
 
 	struct bitmapped_commit *selected;
@@ -112,7 +111,7 @@ void bitmap_writer_build_type_index(struct packing_data *to_pack,
  * Compute the actual bitmaps
  */
 
-static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitmap *reused)
+static inline void push_bitmapped_commit(struct commit *commit)
 {
 	if (writer.selected_nr >= writer.selected_alloc) {
 		writer.selected_alloc = (writer.selected_alloc + 32) * 2;
@@ -120,7 +119,7 @@ static inline void push_bitmapped_commit(struct commit *commit, struct ewah_bitm
 	}
 
 	writer.selected[writer.selected_nr].commit = commit;
-	writer.selected[writer.selected_nr].bitmap = reused;
+	writer.selected[writer.selected_nr].bitmap = NULL;
 	writer.selected[writer.selected_nr].flags = 0;
 
 	writer.selected_nr++;
@@ -372,13 +371,6 @@ static void store_selected(struct bb_commit *ent, struct commit *commit)
 	khiter_t hash_pos;
 	int hash_ret;
 
-	/*
-	 * the "reuse bitmaps" phase may have stored something here, but
-	 * our new algorithm doesn't use it. Drop it.
-	 */
-	if (stored->bitmap)
-		ewah_free(stored->bitmap);
-
 	stored->bitmap = bitmap_to_ewah(ent->bitmap);
 
 	hash_pos = kh_put_oid_map(writer.bitmaps, commit->object.oid, &hash_ret);
@@ -480,35 +472,6 @@ static int date_compare(const void *_a, const void *_b)
 	return (long)b->date - (long)a->date;
 }
 
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack)
-{
-	struct bitmap_index *bitmap_git;
-	if (!(bitmap_git = prepare_bitmap_git(to_pack->repo)))
-		return;
-
-	writer.reused = kh_init_oid_map();
-	rebuild_existing_bitmaps(bitmap_git, to_pack, writer.reused,
-				 writer.show_progress);
-	/*
-	 * NEEDSWORK: rebuild_existing_bitmaps() makes writer.reused reference
-	 * some bitmaps in bitmap_git, so we can't free the latter.
-	 */
-}
-
-static struct ewah_bitmap *find_reused_bitmap(const struct object_id *oid)
-{
-	khiter_t hash_pos;
-
-	if (!writer.reused)
-		return NULL;
-
-	hash_pos = kh_get_oid_map(writer.reused, *oid);
-	if (hash_pos >= kh_end(writer.reused))
-		return NULL;
-
-	return kh_value(writer.reused, hash_pos);
-}
-
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 				  unsigned int indexed_commits_nr,
 				  int max_bitmaps)
@@ -522,12 +485,11 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 	if (indexed_commits_nr < 100) {
 		for (i = 0; i < indexed_commits_nr; ++i)
-			push_bitmapped_commit(indexed_commits[i], NULL);
+			push_bitmapped_commit(indexed_commits[i]);
 		return;
 	}
 
 	for (;;) {
-		struct ewah_bitmap *reused_bitmap = NULL;
 		struct commit *chosen = NULL;
 
 		next = next_commit_index(i);
@@ -542,15 +504,13 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 
 		if (next == 0) {
 			chosen = indexed_commits[i];
-			reused_bitmap = find_reused_bitmap(&chosen->object.oid);
 		} else {
 			chosen = indexed_commits[i + next];
 
 			for (j = 0; j <= next; ++j) {
 				struct commit *cm = indexed_commits[i + j];
 
-				reused_bitmap = find_reused_bitmap(&cm->object.oid);
-				if (reused_bitmap || (cm->object.flags & NEEDS_BITMAP) != 0) {
+				if ((cm->object.flags & NEEDS_BITMAP) != 0) {
 					chosen = cm;
 					break;
 				}
@@ -560,7 +520,7 @@ void bitmap_writer_select_commits(struct commit **indexed_commits,
 			}
 		}
 
-		push_bitmapped_commit(chosen, reused_bitmap);
+		push_bitmapped_commit(chosen);
 
 		i += next + 1;
 		display_progress(writer.progress, i);
diff --git a/pack-bitmap.c b/pack-bitmap.c
index 60c781d100..d1368b69bb 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -1338,9 +1338,9 @@ void test_bitmap_walk(struct rev_info *revs)
 	free_bitmap_index(bitmap_git);
 }
 
-static int rebuild_bitmap(uint32_t *reposition,
-			  struct ewah_bitmap *source,
-			  struct bitmap *dest)
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest)
 {
 	uint32_t pos = 0;
 	struct ewah_iterator it;
@@ -1369,19 +1369,11 @@ static int rebuild_bitmap(uint32_t *reposition,
 	return 0;
 }
 
-int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
-			     struct packing_data *mapping,
-			     kh_oid_map_t *reused_bitmaps,
-			     int show_progress)
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping)
 {
 	uint32_t i, num_objects;
 	uint32_t *reposition;
-	struct bitmap *rebuild;
-	struct stored_bitmap *stored;
-	struct progress *progress = NULL;
-
-	khiter_t hash_pos;
-	int hash_ret;
 
 	num_objects = bitmap_git->pack->num_objects;
 	reposition = xcalloc(num_objects, sizeof(uint32_t));
@@ -1399,33 +1391,7 @@ int rebuild_existing_bitmaps(struct bitmap_index *bitmap_git,
 			reposition[i] = oe_in_pack_pos(mapping, oe) + 1;
 	}
 
-	rebuild = bitmap_new();
-	i = 0;
-
-	if (show_progress)
-		progress = start_progress("Reusing bitmaps", 0);
-
-	kh_foreach_value(bitmap_git->bitmaps, stored, {
-		if (stored->flags & BITMAP_FLAG_REUSE) {
-			if (!rebuild_bitmap(reposition,
-					    lookup_stored_bitmap(stored),
-					    rebuild)) {
-				hash_pos = kh_put_oid_map(reused_bitmaps,
-							  stored->oid,
-							  &hash_ret);
-				kh_value(reused_bitmaps, hash_pos) =
-					bitmap_to_ewah(rebuild);
-			}
-			bitmap_reset(rebuild);
-			display_progress(progress, ++i);
-		}
-	});
-
-	stop_progress(&progress);
-
-	free(reposition);
-	bitmap_free(rebuild);
-	return 0;
+	return reposition;
 }
 
 void free_bitmap_index(struct bitmap_index *b)
diff --git a/pack-bitmap.h b/pack-bitmap.h
index 1203120c43..afa4115136 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -73,7 +73,11 @@ void bitmap_writer_set_checksum(unsigned char *sha1);
 void bitmap_writer_build_type_index(struct packing_data *to_pack,
 				    struct pack_idx_entry **index,
 				    uint32_t index_nr);
-void bitmap_writer_reuse_bitmaps(struct packing_data *to_pack);
+uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
+				struct packing_data *mapping);
+int rebuild_bitmap(const uint32_t *reposition,
+		   struct ewah_bitmap *source,
+		   struct bitmap *dest);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 20/24] pack-bitmap: factor out 'bitmap_for_commit()'
@ 2020-12-08 22:05     ` Taylor Blau
  0 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:04 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

A couple of callers within pack-bitmap.c duplicate logic to lookup a
given object id in the bitmaps khash. Factor this out into a new
function, 'bitmap_for_commit()', to reduce some code duplication.

Make this new function non-static, since it will be used in later
commits from outside of pack-bitmap.c.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 33 +++++++++++++++++++--------------
 pack-bitmap.h |  2 ++
 2 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index d1368b69bb..5efb8af121 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -380,6 +380,16 @@ struct include_data {
 	struct bitmap *seen;
 };
 
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit)
+{
+	khiter_t hash_pos = kh_get_oid_map(bitmap_git->bitmaps,
+					   commit->object.oid);
+	if (hash_pos >= kh_end(bitmap_git->bitmaps))
+		return NULL;
+	return lookup_stored_bitmap(kh_value(bitmap_git->bitmaps, hash_pos));
+}
+
 static inline int bitmap_position_extended(struct bitmap_index *bitmap_git,
 					   const struct object_id *oid)
 {
@@ -465,10 +475,10 @@ static void show_commit(struct commit *commit, void *data)
 
 static int add_to_include_set(struct bitmap_index *bitmap_git,
 			      struct include_data *data,
-			      const struct object_id *oid,
+			      struct commit *commit,
 			      int bitmap_pos)
 {
-	khiter_t hash_pos;
+	struct ewah_bitmap *partial;
 
 	if (data->seen && bitmap_get(data->seen, bitmap_pos))
 		return 0;
@@ -476,10 +486,9 @@ static int add_to_include_set(struct bitmap_index *bitmap_git,
 	if (bitmap_get(data->base, bitmap_pos))
 		return 0;
 
-	hash_pos = kh_get_oid_map(bitmap_git->bitmaps, *oid);
-	if (hash_pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, hash_pos);
-		bitmap_or_ewah(data->base, lookup_stored_bitmap(st));
+	partial = bitmap_for_commit(bitmap_git, commit);
+	if (partial) {
+		bitmap_or_ewah(data->base, partial);
 		return 0;
 	}
 
@@ -498,8 +507,7 @@ static int should_include(struct commit *commit, void *_data)
 						  (struct object *)commit,
 						  NULL);
 
-	if (!add_to_include_set(data->bitmap_git, data, &commit->object.oid,
-				bitmap_pos)) {
+	if (!add_to_include_set(data->bitmap_git, data, commit, bitmap_pos)) {
 		struct commit_list *parent = commit->parents;
 
 		while (parent) {
@@ -1282,10 +1290,10 @@ void test_bitmap_walk(struct rev_info *revs)
 {
 	struct object *root;
 	struct bitmap *result = NULL;
-	khiter_t pos;
 	size_t result_popcnt;
 	struct bitmap_test_data tdata;
 	struct bitmap_index *bitmap_git;
+	struct ewah_bitmap *bm;
 
 	if (!(bitmap_git = prepare_bitmap_git(revs->repo)))
 		die("failed to load bitmap indexes");
@@ -1297,12 +1305,9 @@ void test_bitmap_walk(struct rev_info *revs)
 		bitmap_git->version, bitmap_git->entry_count);
 
 	root = revs->pending.objects[0].item;
-	pos = kh_get_oid_map(bitmap_git->bitmaps, root->oid);
-
-	if (pos < kh_end(bitmap_git->bitmaps)) {
-		struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-		struct ewah_bitmap *bm = lookup_stored_bitmap(st);
+	bm = bitmap_for_commit(bitmap_git, (struct commit *)root);
 
+	if (bm) {
 		fprintf(stderr, "Found bitmap for %s. %d bits / %08x checksum\n",
 			oid_to_hex(&root->oid), (int)bm->bit_size, ewah_checksum(bm));
 
diff --git a/pack-bitmap.h b/pack-bitmap.h
index afa4115136..25dfcf5615 100644
--- a/pack-bitmap.h
+++ b/pack-bitmap.h
@@ -78,6 +78,8 @@ uint32_t *create_bitmap_mapping(struct bitmap_index *bitmap_git,
 int rebuild_bitmap(const uint32_t *reposition,
 		   struct ewah_bitmap *source,
 		   struct bitmap *dest);
+struct ewah_bitmap *bitmap_for_commit(struct bitmap_index *bitmap_git,
+				      struct commit *commit);
 void bitmap_writer_select_commits(struct commit **indexed_commits,
 		unsigned int indexed_commits_nr, int max_bitmaps);
 void bitmap_writer_build(struct packing_data *to_pack);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()'
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (19 preceding siblings ...)
  2020-12-08 22:05     ` Taylor Blau
@ 2020-12-08 22:05   ` Taylor Blau
  2020-12-08 22:05   ` [PATCH v4 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
                     ` (2 subsequent siblings)
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

'find_objects()' currently needs to interact with the bitmaps khash
pretty closely. To make 'find_objects()' read a little more
straightforwardly, move some of the khash-level details into a new
function that describes what it does: 'add_commit_to_bitmap()'.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/pack-bitmap.c b/pack-bitmap.c
index 5efb8af121..d88745fb02 100644
--- a/pack-bitmap.c
+++ b/pack-bitmap.c
@@ -521,6 +521,23 @@ static int should_include(struct commit *commit, void *_data)
 	return 1;
 }
 
+static int add_commit_to_bitmap(struct bitmap_index *bitmap_git,
+				struct bitmap **base,
+				struct commit *commit)
+{
+	struct ewah_bitmap *or_with = bitmap_for_commit(bitmap_git, commit);
+
+	if (!or_with)
+		return 0;
+
+	if (*base == NULL)
+		*base = ewah_to_bitmap(or_with);
+	else
+		bitmap_or_ewah(*base, or_with);
+
+	return 1;
+}
+
 static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 				   struct rev_info *revs,
 				   struct object_list *roots,
@@ -544,21 +561,10 @@ static struct bitmap *find_objects(struct bitmap_index *bitmap_git,
 		struct object *object = roots->item;
 		roots = roots->next;
 
-		if (object->type == OBJ_COMMIT) {
-			khiter_t pos = kh_get_oid_map(bitmap_git->bitmaps, object->oid);
-
-			if (pos < kh_end(bitmap_git->bitmaps)) {
-				struct stored_bitmap *st = kh_value(bitmap_git->bitmaps, pos);
-				struct ewah_bitmap *or_with = lookup_stored_bitmap(st);
-
-				if (base == NULL)
-					base = ewah_to_bitmap(or_with);
-				else
-					bitmap_or_ewah(base, or_with);
-
-				object->flags |= SEEN;
-				continue;
-			}
+		if (object->type == OBJ_COMMIT &&
+		    add_commit_to_bitmap(bitmap_git, &base, (struct commit *)object)) {
+			object->flags |= SEEN;
+			continue;
 		}
 
 		object_list_insert(object, &not_mapped);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 22/24] pack-bitmap-write: use existing bitmaps
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (20 preceding siblings ...)
  2020-12-08 22:05   ` [PATCH v4 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
@ 2020-12-08 22:05   ` Taylor Blau
  2020-12-08 22:05   ` [PATCH v4 23/24] pack-bitmap-write: relax unique revwalk condition Taylor Blau
  2020-12-08 22:05   ` [PATCH v4 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

When constructing new bitmaps, we perform a commit and tree walk in
fill_bitmap_commit() and fill_bitmap_tree(). This walk would benefit
from using existing bitmaps when available. We must track the existing
bitmaps and translate them into the new object order, but this is
generally faster than parsing trees.

In fill_bitmap_commit(), we must reorder things somewhat. The priority
queue walks commits from newest-to-oldest, which means we correctly stop
walking when reaching a commit with a bitmap. However, if we walk trees
interleaved with the commits, then we might be parsing trees that are
actually part of a re-used bitmap. To avoid over-walking trees, add them
to a LIFO queue and walk them after exploring commits completely.
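
Sketched outside of git (plain C, made-up names), the ordering looks
like this: each commit only pushes its tree onto a LIFO stack, and the
stack is drained once the commit loop has finished, so a commit that
hits a reusable bitmap never queues any tree work at all.

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        /* stand-ins for commits, already in newest-to-oldest order */
        int commits[N] = { 50, 40, 30, 20, 10 };
        int tree_stack[N];
        int top = 0;

        for (int i = 0; i < N; i++) {
            if (commits[i] == 30) {
                /* pretend this one has a reusable bitmap */
                printf("reuse bitmap of %d\n", commits[i]);
                continue;
            }
            printf("visit commit %d\n", commits[i]);
            tree_stack[top++] = commits[i]; /* defer the tree walk */
        }

        while (top > 0) /* only now walk the deferred trees, LIFO */
            printf("walk tree of %d\n", tree_stack[--top]);

        return 0;
    }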

On git.git, this reduces a second immediate bitmap computation from 2.0s
to 1.0s. On linux.git, we go from 32s to 22s. On chromium's fork
network, we go from 227s to 198s.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 333058854d..76c8236f94 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -340,20 +340,37 @@ static void fill_bitmap_tree(struct bitmap *bitmap,
 
 static void fill_bitmap_commit(struct bb_commit *ent,
 			       struct commit *commit,
-			       struct prio_queue *queue)
+			       struct prio_queue *queue,
+			       struct prio_queue *tree_queue,
+			       struct bitmap_index *old_bitmap,
+			       const uint32_t *mapping)
 {
 	if (!ent->bitmap)
 		ent->bitmap = bitmap_new();
 
-	bitmap_set(ent->bitmap, find_object_pos(&commit->object.oid));
 	prio_queue_put(queue, commit);
 
 	while (queue->nr) {
 		struct commit_list *p;
 		struct commit *c = prio_queue_get(queue);
 
+		if (old_bitmap && mapping) {
+			struct ewah_bitmap *old = bitmap_for_commit(old_bitmap, c);
+			/*
+			 * If this commit has an old bitmap, then translate that
+			 * bitmap and add its bits to this one. No need to walk
+			 * parents or the tree for this commit.
+			 */
+			if (old && !rebuild_bitmap(mapping, old, ent->bitmap))
+				continue;
+		}
+
+		/*
+		 * Mark ourselves and queue our tree. The commit
+		 * walk ensures we cover all parents.
+		 */
 		bitmap_set(ent->bitmap, find_object_pos(&c->object.oid));
-		fill_bitmap_tree(ent->bitmap, get_commit_tree(c));
+		prio_queue_put(tree_queue, get_commit_tree(c));
 
 		for (p = c->parents; p; p = p->next) {
 			int pos = find_object_pos(&p->item->object.oid);
@@ -363,6 +380,9 @@ static void fill_bitmap_commit(struct bb_commit *ent,
 			}
 		}
 	}
+
+	while (tree_queue->nr)
+		fill_bitmap_tree(ent->bitmap, prio_queue_get(tree_queue));
 }
 
 static void store_selected(struct bb_commit *ent, struct commit *commit)
@@ -386,6 +406,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	size_t i;
 	int nr_stored = 0; /* for progress */
 	struct prio_queue queue = { compare_commits_by_gen_then_commit_date };
+	struct prio_queue tree_queue = { NULL };
+	struct bitmap_index *old_bitmap;
+	uint32_t *mapping;
 
 	writer.bitmaps = kh_init_oid_map();
 	writer.to_pack = to_pack;
@@ -395,6 +418,12 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	trace2_region_enter("pack-bitmap-write", "building_bitmaps_total",
 			    the_repository);
 
+	old_bitmap = prepare_bitmap_git(to_pack->repo);
+	if (old_bitmap)
+		mapping = create_bitmap_mapping(old_bitmap, to_pack);
+	else
+		mapping = NULL;
+
 	bitmap_builder_init(&bb, &writer);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
@@ -402,7 +431,8 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		struct commit *child;
 		int reused = 0;
 
-		fill_bitmap_commit(ent, commit, &queue);
+		fill_bitmap_commit(ent, commit, &queue, &tree_queue,
+				   old_bitmap, mapping);
 
 		if (ent->selected) {
 			store_selected(ent, commit);
@@ -428,7 +458,9 @@ void bitmap_writer_build(struct packing_data *to_pack)
 		ent->bitmap = NULL;
 	}
 	clear_prio_queue(&queue);
+	clear_prio_queue(&tree_queue);
 	bitmap_builder_clear(&bb);
+	free(mapping);
 
 	trace2_region_leave("pack-bitmap-write", "building_bitmaps_total",
 			    the_repository);
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 23/24] pack-bitmap-write: relax unique revwalk condition
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (21 preceding siblings ...)
  2020-12-08 22:05   ` [PATCH v4 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
@ 2020-12-08 22:05   ` Taylor Blau
  2020-12-08 22:05   ` [PATCH v4 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

The previous commits improved the bitmap computation process for very
long, linear histories with many refs by removing quadratic growth in
how many objects were walked. The strategy of computing "intermediate
commits" using bitmasks for which refs can reach those commits
partitioned the poset of reachable objects so each part could be walked
exactly once. This was effective for linear histories.

However, there was a (significant) drawback: wide histories with many
refs had an explosion of memory costs to compute the commit bitmasks
during the exploration that discovers these intermediate commits. Since
these wide histories are unlikely to walk the same objects repeatedly,
the cost of allowing repeated object walks was low before. But now, the
commit walk *before computing bitmaps* is incredibly expensive.

In an effort to discover a happy medium, this change reduces the walk
for intermediate commits to only the first-parent history. This focuses
the walk on how the histories converge, which still has significant
reduction in repeat object walks. It is still possible to create
quadratic behavior in this version, but it is probably less likely in
realistic data shapes.
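
As a toy illustration of what a first-parent-only walk means (plain C;
'toy_commit' is made up here and is not git's 'struct commit'): the
walk follows a single chain from the tip instead of fanning out across
every parent of every merge.

    #include <stdio.h>

    struct toy_commit {
        const char *name;
        struct toy_commit *first_parent;  /* NULL at the root */
        struct toy_commit *second_parent; /* set for merges only */
    };

    int main(void)
    {
        struct toy_commit base  = { "base",  NULL,   NULL };
        struct toy_commit side  = { "side",  &base,  NULL };
        struct toy_commit trunk = { "trunk", &base,  NULL };
        struct toy_commit merge = { "merge", &trunk, &side };

        /* first-parent only: merge -> trunk -> base; "side" is skipped */
        for (struct toy_commit *c = &merge; c; c = c->first_parent)
            printf("%s\n", c->name);

        return 0;
    }

The user-facing analogue of this walk is 'git rev-list --first-parent'.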

Here is some data taken on a fresh clone of the kernel:

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
    original |  64.044 |   83.241 |   2.088 |    2.194 |
  last patch |  45.049 |   37.624 |   2.267 |    2.334 |
  this patch |  88.478 |   53.218 |   2.157 |    2.224 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c     | 14 +++++---------
 t/t5310-pack-bitmaps.sh | 33 +++++++++++++++++----------------
 2 files changed, 22 insertions(+), 25 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index 76c8236f94..d2af4a974f 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -199,7 +199,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 {
 	struct rev_info revs;
 	struct commit *commit;
-	unsigned int i, num_maximal;
+	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
 	init_bb_data(&bb->data);
@@ -207,6 +207,7 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 	reset_revision_walk();
 	repo_init_revisions(writer->to_pack->repo, &revs, NULL);
 	revs.topo_order = 1;
+	revs.first_parent_only = 1;
 
 	for (i = 0; i < writer->selected_nr; i++) {
 		struct commit *c = writer->selected[i].commit;
@@ -221,13 +222,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		add_pending_object(&revs, &c->object, "");
 	}
-	num_maximal = writer->selected_nr;
 
 	if (prepare_revision_walk(&revs))
 		die("revision walk setup failed");
 
 	while ((commit = get_revision(&revs))) {
-		struct commit_list *p;
+		struct commit_list *p = commit->parents;
 		struct bb_commit *c_ent;
 
 		parse_commit_or_die(commit);
@@ -235,16 +235,12 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 		c_ent = bb_data_at(&bb->data, commit);
 
 		if (c_ent->maximal) {
-			if (!c_ent->selected) {
-				bitmap_set(c_ent->commit_mask, num_maximal);
-				num_maximal++;
-			}
-
+			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
 			bb->commits[bb->commits_nr++] = commit;
 		}
 
-		for (p = commit->parents; p; p = p->next) {
+		if (p) {
 			struct bb_commit *p_ent = bb_data_at(&bb->data, p->item);
 			int c_not_p, p_not_c;
 
diff --git a/t/t5310-pack-bitmaps.sh b/t/t5310-pack-bitmaps.sh
index 6815fb6a4e..3a2c9d2d8e 100755
--- a/t/t5310-pack-bitmaps.sh
+++ b/t/t5310-pack-bitmaps.sh
@@ -23,12 +23,12 @@ has_any () {
 # To ensure the logic for "maximal commits" is exercised, make
 # the repository a bit more complicated.
 #
-#    other                         master
+#    other                         second
 #      *                             *
 # (99 commits)                  (99 commits)
 #      *                             *
 #      |\                           /|
-#      | * octo-other  octo-master * |
+#      | * octo-other  octo-second * |
 #      |/|\_________  ____________/|\|
 #      | \          \/  __________/  |
 #      |  | ________/\ /             |
@@ -43,23 +43,24 @@ has_any () {
 #                                   \|
 #                                    * (base)
 #
+# We only push bits down the first-parent history, which
+# makes some of these commits unimportant!
+#
 # The important part for the maximal commit algorithm is how
 # the bitmasks are extended. Assuming starting bit positions
-# for master (bit 0) and other (bit 1), and some flexibility
-# in the order that merge bases are visited, the bitmasks at
-# the end should be:
+# for second (bit 0) and other (bit 1), the bitmasks at the
+# end should be:
 #
-#      master: 1       (maximal, selected)
+#      second: 1       (maximal, selected)
 #       other: 01      (maximal, selected)
-# octo-master: 1
-#  octo-other: 01
-# merge-right: 111     (maximal)
-#        (l1): 111
-#        (r1): 111
-#  merge-left: 1101    (maximal)
-#        (l2): 11111   (maximal)
-#        (r2): 111101  (maximal)
-#      (base): 1111111 (maximal)
+#      (base): 11 (maximal)
+#
+# This complicated history was important for a previous
+# version of the walk that guarantees never walking a
+# commit multiple times. That goal might be important
+# again, so preserve this complicated case. For now, this
+# test will guarantee that the bitmaps are computed
+# correctly, even with the repeat calculations.
 
 test_expect_success 'setup repo with moderate-sized history' '
 	test_commit_bulk --id=file 10 &&
@@ -114,7 +115,7 @@ test_expect_success 'full repack creates bitmaps' '
 	ls .git/objects/pack/ | grep bitmap >output &&
 	test_line_count = 1 output &&
 	grep "\"key\":\"num_selected_commits\",\"value\":\"106\"" trace &&
-	grep "\"key\":\"num_maximal_commits\",\"value\":\"111\"" trace
+	grep "\"key\":\"num_maximal_commits\",\"value\":\"107\"" trace
 '
 
 test_expect_success 'rev-list --test-bitmap verifies bitmaps' '
-- 
2.29.2.533.g07db1f5344


^ permalink raw reply related	[flat|nested] 174+ messages in thread

* [PATCH v4 24/24] pack-bitmap-write: better reuse bitmaps
  2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
                     ` (22 preceding siblings ...)
  2020-12-08 22:05   ` [PATCH v4 23/24] pack-bitmap-write: relax unique revwalk condition Taylor Blau
@ 2020-12-08 22:05   ` Taylor Blau
  23 siblings, 0 replies; 174+ messages in thread
From: Taylor Blau @ 2020-12-08 22:05 UTC (permalink / raw)
  To: git; +Cc: peff, jonathantanmy, dstolee, gitster

From: Derrick Stolee <dstolee@microsoft.com>

If the old bitmap file contains a bitmap for a given commit, then that
commit does not need help from intermediate commits in its history to
compute its final bitmap. Eject that commit from the walk and insert it
into a separate list of reusable commits that are eventually stored in
the list of commits for computing bitmaps.

This speeds up repeated bitmap computation, even if the selected
commits shift drastically, in particular when a previously-bitmapped
commit exists in the first-parent history of a newly-selected
commit. Since we
stop the walk at these commits and we use a first-parent walk, it is
harder to walk "around" these bitmapped commits. It's not impossible,
but we can greatly reduce the computation time for many selected
commits.
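
In rough, standalone terms (plain C, made-up names), the ejection looks
like this: anything with a cached answer skips the main loop's
bookkeeping entirely and is appended to the output afterwards.

    #include <stdio.h>

    #define N 6

    int main(void)
    {
        int commits[N]    = { 1, 2, 3, 4, 5, 6 };
        int has_cached[N] = { 0, 1, 0, 0, 1, 0 }; /* pretend 2 and 5 have old bitmaps */
        int out[N], out_nr = 0;
        int reusable[N], reusable_nr = 0;

        for (int i = 0; i < N; i++) {
            if (has_cached[i]) {
                /* ejected: no commit mask, no further walking */
                reusable[reusable_nr++] = commits[i];
                continue;
            }
            out[out_nr++] = commits[i];
        }

        /* the reusable ones still end up in the final list */
        for (int i = 0; i < reusable_nr; i++)
            out[out_nr++] = reusable[i];

        for (int i = 0; i < out_nr; i++)
            printf("%d ", out[i]); /* 1 3 4 6 2 5 */
        printf("\n");
        return 0;
    }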

             |   runtime (sec)    |   peak heap (GB)   |
             |                    |                    |
             |   from  |   with   |   from  |   with   |
             | scratch | existing | scratch | existing |
  -----------+---------+----------+---------+-----------
  last patch |  88.478 |   53.218 |   2.157 |    2.224 |
  this patch |  86.681 |   16.164 |   2.157 |    2.222 |

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 pack-bitmap-write.c | 40 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/pack-bitmap-write.c b/pack-bitmap-write.c
index d2af4a974f..cc5ead9990 100644
--- a/pack-bitmap-write.c
+++ b/pack-bitmap-write.c
@@ -195,10 +195,13 @@ struct bitmap_builder {
 };
 
 static void bitmap_builder_init(struct bitmap_builder *bb,
-				struct bitmap_writer *writer)
+				struct bitmap_writer *writer,
+				struct bitmap_index *old_bitmap)
 {
 	struct rev_info revs;
 	struct commit *commit;
+	struct commit_list *reusable = NULL;
+	struct commit_list *r;
 	unsigned int i, num_maximal = 0;
 
 	memset(bb, 0, sizeof(*bb));
@@ -234,6 +237,31 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 
 		c_ent = bb_data_at(&bb->data, commit);
 
+		/*
+		 * If there is no commit_mask, there is no reason to iterate
+		 * over this commit; it is not selected (if it were, it would
+		 * not have a blank commit mask) and all its children have
+		 * existing bitmaps (see the comment starting with "This commit
+		 * has an existing bitmap" below), so it does not contribute
+		 * anything to the final bitmap file or its descendants.
+		 */
+		if (!c_ent->commit_mask)
+			continue;
+
+		if (old_bitmap && bitmap_for_commit(old_bitmap, commit)) {
+			/*
+			 * This commit has an existing bitmap, so we can
+			 * get its bits immediately without an object
+			 * walk. That is, it is reusable as-is and there is no
+			 * need to continue walking beyond it.
+			 *
+			 * Mark it as such and add it to bb->commits separately
+			 * to avoid allocating a position in the commit mask.
+			 */
+			commit_list_insert(commit, &reusable);
+			goto next;
+		}
+
 		if (c_ent->maximal) {
 			num_maximal++;
 			ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
@@ -278,14 +306,22 @@ static void bitmap_builder_init(struct bitmap_builder *bb,
 			}
 		}
 
+next:
 		bitmap_free(c_ent->commit_mask);
 		c_ent->commit_mask = NULL;
 	}
 
+	for (r = reusable; r; r = r->next) {
+		ALLOC_GROW(bb->commits, bb->commits_nr + 1, bb->commits_alloc);
+		bb->commits[bb->commits_nr++] = r->item;
+	}
+
 	trace2_data_intmax("pack-bitmap-write", the_repository,
 			   "num_selected_commits", writer->selected_nr);
 	trace2_data_intmax("pack-bitmap-write", the_repository,
 			   "num_maximal_commits", num_maximal);
+
+	free_commit_list(reusable);
 }
 
 static void bitmap_builder_clear(struct bitmap_builder *bb)
@@ -420,7 +456,7 @@ void bitmap_writer_build(struct packing_data *to_pack)
 	else
 		mapping = NULL;
 
-	bitmap_builder_init(&bb, &writer);
+	bitmap_builder_init(&bb, &writer, old_bitmap);
 	for (i = bb.commits_nr; i > 0; i--) {
 		struct commit *commit = bb.commits[i-1];
 		struct bb_commit *ent = bb_data_at(&bb.data, commit);
-- 
2.29.2.533.g07db1f5344

^ permalink raw reply related	[flat|nested] 174+ messages in thread

end of thread, other threads:[~2020-12-08 22:06 UTC | newest]

Thread overview: 174+ messages
2020-11-11 19:41 [PATCH 00/23] pack-bitmap: bitmap generation improvements Taylor Blau
2020-11-11 19:41 ` [PATCH 01/23] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
2020-11-22 19:36   ` Junio C Hamano
2020-11-23 16:22     ` Taylor Blau
2020-11-24  2:48       ` Jeff King
2020-11-24  2:51         ` Jeff King
2020-12-01 22:56           ` Taylor Blau
2020-11-11 19:41 ` [PATCH 02/23] pack-bitmap: fix header size check Taylor Blau
2020-11-12 17:39   ` Martin Ågren
2020-11-11 19:42 ` [PATCH 03/23] pack-bitmap: bounds-check size of cache extension Taylor Blau
2020-11-12 17:47   ` Martin Ågren
2020-11-13  4:57     ` Jeff King
2020-11-13  5:26       ` Martin Ågren
2020-11-13 21:29       ` Taylor Blau
2020-11-13 21:39         ` Jeff King
2020-11-13 21:49           ` Taylor Blau
2020-11-13 22:11             ` Jeff King
2020-11-11 19:42 ` [PATCH 04/23] t5310: drop size of truncated ewah bitmap Taylor Blau
2020-11-11 19:42 ` [PATCH 05/23] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
2020-11-11 19:42 ` [PATCH 06/23] ewah: factor out bitmap growth Taylor Blau
2020-11-11 19:42 ` [PATCH 07/23] ewah: make bitmap growth less aggressive Taylor Blau
2020-11-22 20:32   ` Junio C Hamano
2020-11-23 16:49     ` Taylor Blau
2020-11-24  3:00       ` Jeff King
2020-11-24 20:11         ` Junio C Hamano
2020-11-11 19:43 ` [PATCH 08/23] ewah: implement bitmap_or() Taylor Blau
2020-11-22 20:34   ` Junio C Hamano
2020-11-23 16:52     ` Taylor Blau
2020-11-11 19:43 ` [PATCH 09/23] ewah: add bitmap_dup() function Taylor Blau
2020-11-11 19:43 ` [PATCH 10/23] pack-bitmap-write: reimplement bitmap writing Taylor Blau
2020-11-11 19:43 ` [PATCH 11/23] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
2020-11-11 19:43 ` [PATCH 12/23] pack-bitmap-write: fill bitmap with commit history Taylor Blau
2020-11-11 19:43 ` [PATCH 13/23] bitmap: add bitmap_diff_nonzero() Taylor Blau
2020-11-11 19:43 ` [PATCH 14/23] commit: implement commit_list_contains() Taylor Blau
2020-11-11 19:43 ` [PATCH 15/23] t5310: add branch-based checks Taylor Blau
2020-11-11 20:58   ` Derrick Stolee
2020-11-11 21:04     ` Junio C Hamano
2020-11-15 23:26       ` Johannes Schindelin
2020-11-11 19:43 ` [PATCH 16/23] pack-bitmap-write: rename children to reverse_edges Taylor Blau
2020-11-11 19:43 ` [PATCH 17/23] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
2020-11-13 22:23   ` SZEDER Gábor
2020-11-13 23:03     ` Jeff King
2020-11-14  6:23       ` Jeff King
2020-11-11 19:43 ` [PATCH 18/23] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
2020-11-11 19:44 ` [PATCH 19/23] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
2020-11-11 19:44 ` [PATCH 20/23] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
2020-11-11 19:44 ` [PATCH 21/23] pack-bitmap-write: use existing bitmaps Taylor Blau
2020-11-11 19:44 ` [PATCH 22/23] pack-bitmap-write: relax unique rewalk condition Taylor Blau
2020-11-11 19:44 ` [PATCH 23/23] pack-bitmap-write: better reuse bitmaps Taylor Blau
2020-11-17 21:46 ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements Taylor Blau
2020-11-17 21:46   ` [PATCH v2 01/24] ewah/ewah_bitmap.c: grow buffer past 1 Taylor Blau
2020-11-17 21:46   ` [PATCH v2 02/24] pack-bitmap: fix header size check Taylor Blau
2020-11-17 21:46   ` [PATCH v2 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
2020-11-17 21:46   ` [PATCH v2 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
2020-11-17 21:46   ` [PATCH v2 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
2020-11-17 21:46   ` [PATCH v2 06/24] ewah: factor out bitmap growth Taylor Blau
2020-11-17 21:47   ` [PATCH v2 07/24] ewah: make bitmap growth less aggressive Taylor Blau
2020-11-17 21:47   ` [PATCH v2 08/24] ewah: implement bitmap_or() Taylor Blau
2020-11-17 21:47   ` [PATCH v2 09/24] ewah: add bitmap_dup() function Taylor Blau
2020-11-17 21:47   ` [PATCH v2 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
2020-11-25  0:53     ` Jonathan Tan
2020-11-28 17:27       ` Taylor Blau
2020-11-17 21:47   ` [PATCH v2 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
2020-11-25  1:00     ` Jonathan Tan
2020-11-17 21:47   ` [PATCH v2 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
2020-11-22 21:50     ` Junio C Hamano
2020-11-23 14:54       ` Derrick Stolee
2020-11-25  1:14     ` Jonathan Tan
2020-11-28 17:21       ` Taylor Blau
2020-11-30 18:33         ` Jonathan Tan
2020-11-17 21:47   ` [PATCH v2 13/24] bitmap: add bitmap_diff_nonzero() Taylor Blau
2020-11-22 22:01     ` Junio C Hamano
2020-11-23 20:19       ` Taylor Blau
2020-11-17 21:47   ` [PATCH v2 14/24] commit: implement commit_list_contains() Taylor Blau
2020-11-17 21:47   ` [PATCH v2 15/24] t5310: add branch-based checks Taylor Blau
2020-11-25  1:17     ` Jonathan Tan
2020-11-28 17:30       ` Taylor Blau
2020-11-17 21:47   ` [PATCH v2 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
2020-11-17 21:47   ` [PATCH v2 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
2020-11-17 21:48   ` [PATCH v2 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
2020-11-24  6:07     ` Jonathan Tan
2020-11-25  1:46     ` Jonathan Tan
2020-11-30 18:41       ` Derrick Stolee
2020-11-17 21:48   ` [PATCH v2 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
2020-12-02  7:13     ` Jonathan Tan
2020-11-17 21:48   ` [PATCH v2 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
2020-12-02  7:17     ` Jonathan Tan
2020-11-17 21:48   ` [PATCH v2 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
2020-12-02  7:20     ` Jonathan Tan
2020-11-17 21:48   ` [PATCH v2 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
2020-12-02  7:28     ` Jonathan Tan
2020-12-02 16:21       ` Taylor Blau
2020-11-17 21:48   ` [PATCH v2 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
2020-12-02  7:44     ` Jonathan Tan
2020-12-02 16:30       ` Taylor Blau
2020-12-07 18:19         ` Jonathan Tan
2020-12-07 18:43           ` Derrick Stolee
2020-12-07 18:45             ` Derrick Stolee
2020-12-07 18:48           ` Jeff King
2020-11-17 21:48   ` [PATCH v2 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
2020-12-02  8:08     ` Jonathan Tan
2020-12-02 16:35       ` Taylor Blau
2020-12-02 18:22         ` Derrick Stolee
2020-12-02 18:25           ` Taylor Blau
2020-12-07 18:26             ` Jonathan Tan
2020-12-07 18:24           ` Jonathan Tan
2020-12-07 19:20             ` Derrick Stolee
2020-11-18 18:32   ` [PATCH v2 00/24] pack-bitmap: bitmap generation improvements SZEDER Gábor
2020-11-18 19:51     ` Taylor Blau
2020-11-22  2:17       ` Taylor Blau
2020-11-22  2:28         ` Taylor Blau
2020-11-20  6:34   ` Martin Ågren
2020-11-21 19:37     ` Junio C Hamano
2020-11-21 20:11       ` Martin Ågren
2020-11-22  2:31         ` Taylor Blau
2020-11-24  2:43           ` Jeff King
2020-12-01 23:04             ` Taylor Blau
2020-12-01 23:37               ` Jonathan Tan
2020-12-01 23:43                 ` Taylor Blau
2020-12-02  8:11                   ` Jonathan Tan
2020-12-08  0:04 ` [PATCH v3 " Taylor Blau
2020-12-08  0:04   ` [PATCH v3 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
2020-12-08  0:04   ` [PATCH v3 02/24] pack-bitmap: fix header size check Taylor Blau
2020-12-08  0:04   ` [PATCH v3 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
2020-12-08  0:04   ` [PATCH v3 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
2020-12-08  0:04   ` [PATCH v3 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
2020-12-08  0:04   ` [PATCH v3 06/24] ewah: factor out bitmap growth Taylor Blau
2020-12-08  0:04   ` [PATCH v3 07/24] ewah: make bitmap growth less aggressive Taylor Blau
2020-12-08  0:04   ` [PATCH v3 08/24] ewah: implement bitmap_or() Taylor Blau
2020-12-08  0:04   ` [PATCH v3 09/24] ewah: add bitmap_dup() function Taylor Blau
2020-12-08  0:04   ` [PATCH v3 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
2020-12-08  0:05   ` [PATCH v3 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
2020-12-08  0:05   ` [PATCH v3 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
2020-12-08  0:05   ` [PATCH v3 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
2020-12-08  0:05   ` [PATCH v3 14/24] commit: implement commit_list_contains() Taylor Blau
2020-12-08  0:05   ` [PATCH v3 15/24] t5310: add branch-based checks Taylor Blau
2020-12-08  0:05   ` [PATCH v3 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
2020-12-08  0:05   ` [PATCH v3 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
2020-12-08  0:05   ` [PATCH v3 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
2020-12-08  0:05   ` [PATCH v3 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
2020-12-08  0:05   ` [PATCH v3 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
2020-12-08  0:05   ` [PATCH v3 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
2020-12-08  0:05   ` [PATCH v3 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
2020-12-08  0:05   ` [PATCH v3 23/24] pack-bitmap-write: relax unique rewalk condition Taylor Blau
2020-12-08  0:05   ` [PATCH v3 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
2020-12-08 20:56   ` [PATCH v3 00/24] pack-bitmap: bitmap generation improvements Junio C Hamano
2020-12-08 21:03     ` Taylor Blau
2020-12-08 22:03       ` Junio C Hamano
2020-12-08 22:03 ` [PATCH v4 " Taylor Blau
2020-12-08 22:03   ` [PATCH v4 01/24] ewah/ewah_bitmap.c: avoid open-coding ALLOC_GROW() Taylor Blau
2020-12-08 22:03   ` [PATCH v4 02/24] pack-bitmap: fix header size check Taylor Blau
2020-12-08 22:03   ` [PATCH v4 03/24] pack-bitmap: bounds-check size of cache extension Taylor Blau
2020-12-08 22:03   ` [PATCH v4 04/24] t5310: drop size of truncated ewah bitmap Taylor Blau
2020-12-08 22:03   ` [PATCH v4 05/24] rev-list: die when --test-bitmap detects a mismatch Taylor Blau
2020-12-08 22:03   ` [PATCH v4 06/24] ewah: factor out bitmap growth Taylor Blau
2020-12-08 22:03   ` [PATCH v4 07/24] ewah: make bitmap growth less aggressive Taylor Blau
2020-12-08 22:03   ` [PATCH v4 08/24] ewah: implement bitmap_or() Taylor Blau
2020-12-08 22:03   ` [PATCH v4 09/24] ewah: add bitmap_dup() function Taylor Blau
2020-12-08 22:03   ` [PATCH v4 10/24] pack-bitmap-write: reimplement bitmap writing Taylor Blau
2020-12-08 22:03   ` [PATCH v4 11/24] pack-bitmap-write: pass ownership of intermediate bitmaps Taylor Blau
2020-12-08 22:04   ` [PATCH v4 12/24] pack-bitmap-write: fill bitmap with commit history Taylor Blau
2020-12-08 22:04   ` [PATCH v4 13/24] bitmap: implement bitmap_is_subset() Taylor Blau
2020-12-08 22:04   ` [PATCH v4 14/24] commit: implement commit_list_contains() Taylor Blau
2020-12-08 22:04   ` [PATCH v4 15/24] t5310: add branch-based checks Taylor Blau
2020-12-08 22:04   ` [PATCH v4 16/24] pack-bitmap-write: rename children to reverse_edges Taylor Blau
2020-12-08 22:04   ` [PATCH v4 17/24] pack-bitmap.c: check reads more aggressively when loading Taylor Blau
2020-12-08 22:04   ` [PATCH v4 18/24] pack-bitmap-write: build fewer intermediate bitmaps Taylor Blau
2020-12-08 22:04   ` [PATCH v4 19/24] pack-bitmap-write: ignore BITMAP_FLAG_REUSE Taylor Blau
2020-12-08 22:04   ` [PATCH v4 20/24] pack-bitmap: factor out 'bitmap_for_commit()' Taylor Blau
2020-12-08 22:05     ` Taylor Blau
2020-12-08 22:05   ` [PATCH v4 21/24] pack-bitmap: factor out 'add_commit_to_bitmap()' Taylor Blau
2020-12-08 22:05   ` [PATCH v4 22/24] pack-bitmap-write: use existing bitmaps Taylor Blau
2020-12-08 22:05   ` [PATCH v4 23/24] pack-bitmap-write: relax unique revwalk condition Taylor Blau
2020-12-08 22:05   ` [PATCH v4 24/24] pack-bitmap-write: better reuse bitmaps Taylor Blau
