git.vger.kernel.org archive mirror
* [PATCH 0/2] limit reused delta chains based on --depth
@ 2017-01-27 22:01 Jeff King
  2017-01-27 22:02 ` [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas Jeff King
  2017-01-27 22:05 ` [PATCH 2/2] pack-objects: convert recursion to iteration in break_delta_chain() Jeff King
  0 siblings, 2 replies; 6+ messages in thread
From: Jeff King @ 2017-01-27 22:01 UTC (permalink / raw)
  To: git; +Cc: Michael Haggerty

Back when we switched pack-objects to visiting packs in
most-recently-used order last August, we realized that this could reuse
cross-pack deltas, and that the result could have longer delta chains
than any single pack contains.

I produced a patch back then[1], but we decided not to follow through
with it. Two things happened to make me revive that patch:

  1. I hit a case in the wild with a really long delta chain that caused
     pack-objects' write_one() to run out of stack space.

  2. We dropped the --aggressive depth to match the normal one. So
     there's less concern about mismatches throwing out on-disk deltas
     from a previous aggressive repack (but see the caveats in the first
     patch's commit message).

So here it is, plus another patch that converts the recursion to
iteration (the stack space needed by the function is small enough that
just the first patch was enough to fix my problem case, but it seemed
like tempting fate to leave it recursive).

  [1/2]: pack-objects: enforce --depth limit in reused deltas
  [2/2]: pack-objects: convert recursion to iteration in break_delta_chain()

 builtin/pack-objects.c      | 133 ++++++++++++++++++++++++++++++++++++--------
 pack-objects.h              |   4 ++
 t/t5316-pack-delta-depth.sh |  93 +++++++++++++++++++++++++++++++
 3 files changed, 207 insertions(+), 23 deletions(-)
 create mode 100755 t/t5316-pack-delta-depth.sh

-Peff

[1] http://public-inbox.org/git/20160811095710.p2bffympjlwmv3gc@sigill.intra.peff.net/

* [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas
  2017-01-27 22:01 [PATCH 0/2] limit reused delta chains based on --depth Jeff King
@ 2017-01-27 22:02 ` Jeff King
  2017-01-27 23:31   ` Junio C Hamano
  2017-01-27 22:05 ` [PATCH 2/2] pack-objects: convert recursion to iteration in break_delta_chain() Jeff King
  1 sibling, 1 reply; 6+ messages in thread
From: Jeff King @ 2017-01-27 22:02 UTC (permalink / raw)
  To: git; +Cc: Michael Haggerty

Since 898b14c (pack-objects: rework check_delta_limit usage,
2007-04-16), we check the delta depth limit only when
figuring out whether we should make a new delta. We don't
consider it at all when reusing deltas, which means that
packing once with --depth=250, and then again with
--depth=50, the second pack my still contain chains larger
than 50.

This is generally considered a feature, as the results of
earlier high-depth repacks are carried forward, used for
serving fetches, etc. However, since we started using
cross-pack deltas in c9af708b1 (pack-objects: use mru list
when iterating over packs, 2016-08-11), we are no longer
bounded by the length of an existing delta chain in a single
pack.

Here's one particular pathological case: a sequence of N
packs, each with 2 objects, the base of which is stored as a
delta in a previous pack. If we chain all the deltas
together, we have a cycle of length N. We break the cycle,
but the tip delta is still at depth N-1.

This is less unlikely than it might sound. See the included
test for a reconstruction based on real-world actions.  I
ran into such a case in the wild, where a client was rapidly
sending packs, and we had accumulated 10,000 before doing a
server-side repack.  The pack that "git repack" tried to
generate had a very deep chain, which caused pack-objects to
run out of stack space in the recursive write_one().
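
(If you want to check whether an existing repository has such chains,
"git index-pack --verify-stat-only" on the pack file prints a
chain-length histogram; the max_chain helper in the new test below
wraps exactly that.)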

This patch bounds the length of delta chains in the output
pack based on --depth, regardless of whether they are caused
by cross-pack deltas or existed in the input packs. This
fixes the problem, but does have two possible downsides:

  1. High-depth aggressive repacks followed by "normal"
     repacks will throw away the high-depth chains.

     In the long run this is probably OK; investigation
     showed that high-depth repacks aren't actually
     beneficial, and we dropped the aggressive depth default
     to match the normal case in 07e7dbf0d (gc: default
     aggressive depth to 50, 2016-08-11).

  2. If you really do want to store high-depth deltas on
     disk, they may be discarded and new delta computed when
     serving a fetch, unless you set pack.depth to match
     your high-depth size.

The implementation uses the existing search for delta
cycles.  That lets us compute the depth of any node based on
the depth of its base, because we know the base is DFS_DONE
by the time we look at it (modulo any cycles in the graph,
but we know there cannot be any because we break them as we
see them).

There is some subtlety worth mentioning, though. We record
the depth of each object as we compute it. It might seem
like we could save the per-object storage space by just
keeping track of the depth of our traversal (i.e., have
break_delta_chains() report how deep it went). But we may
visit an object through multiple delta paths, and on
subsequent paths we want to know its depth immediately,
without having to walk back down to its final base (doing so
would make our graph walk quadratic rather than linear).

Likewise, one could try to record the depth not from the
base, but from our starting point (i.e., start
recursion_depth at 0, and pass "recursion_depth + 1" to each
invocation of break_delta_chains()). And then when
recursion_depth gets too big, we know that we must cut the
delta chain.  But that technique is wrong if we do not visit
the nodes in topological order. In a chain A->B->C, if
we visit "C", then "B", then "A", we will never recurse
deeper than 1 link (because we see at each node that we have
already visited it).

Signed-off-by: Jeff King <peff@peff.net>
---
 builtin/pack-objects.c      | 18 +++++++++
 pack-objects.h              |  4 ++
 t/t5316-pack-delta-depth.sh | 93 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 115 insertions(+)
 create mode 100755 t/t5316-pack-delta-depth.sh

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 8841f8b36..2b08c8121 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1539,6 +1539,8 @@ static int pack_offset_sort(const void *_a, const void *_b)
  *   2. Updating our size/type to the non-delta representation. These were
  *      either not recorded initially (size) or overwritten with the delta type
  *      (type) when check_object() decided to reuse the delta.
+ *
+ *   3. Resetting our delta depth, as we are now a base object.
  */
 static void drop_reused_delta(struct object_entry *entry)
 {
@@ -1552,6 +1554,7 @@ static void drop_reused_delta(struct object_entry *entry)
 			p = &(*p)->delta_sibling;
 	}
 	entry->delta = NULL;
+	entry->depth = 0;
 
 	oi.sizep = &entry->size;
 	oi.typep = &entry->type;
@@ -1570,6 +1573,9 @@ static void drop_reused_delta(struct object_entry *entry)
  * Follow the chain of deltas from this entry onward, throwing away any links
  * that cause us to hit a cycle (as determined by the DFS state flags in
  * the entries).
+ *
+ * We also detect too-long reused chains that would violate our --depth
+ * limit.
  */
 static void break_delta_chains(struct object_entry *entry)
 {
@@ -1587,6 +1593,18 @@ static void break_delta_chains(struct object_entry *entry)
 		 */
 		entry->dfs_state = DFS_ACTIVE;
 		break_delta_chains(entry->delta);
+
+		/*
+		 * Once we've recursed, our base (if we still have one) knows
+		 * its depth, so we can compute ours (and check it against
+		 * the limit).
+		 */
+		if (entry->delta) {
+			entry->depth = entry->delta->depth + 1;
+			if (entry->depth > depth)
+				drop_reused_delta(entry);
+		}
+
 		entry->dfs_state = DFS_DONE;
 		break;
 
diff --git a/pack-objects.h b/pack-objects.h
index cc9b9a9b9..03f119165 100644
--- a/pack-objects.h
+++ b/pack-objects.h
@@ -30,12 +30,16 @@ struct object_entry {
 
 	/*
 	 * State flags for depth-first search used for analyzing delta cycles.
+	 *
+	 * The depth is measured in delta-links to the base (so if A is a delta
+	 * against B, then A has a depth of 1, and B a depth of 0).
 	 */
 	enum {
 		DFS_NONE = 0,
 		DFS_ACTIVE,
 		DFS_DONE
 	} dfs_state;
+	int depth;
 };
 
 struct packing_data {
diff --git a/t/t5316-pack-delta-depth.sh b/t/t5316-pack-delta-depth.sh
new file mode 100755
index 000000000..236d60fe6
--- /dev/null
+++ b/t/t5316-pack-delta-depth.sh
@@ -0,0 +1,93 @@
+#!/bin/sh
+
+test_description='pack-objects breaks long cross-pack delta chains'
+. ./test-lib.sh
+
+# This mirrors a repeated push setup:
+#
+# 1. A client repeatedly modifies some files, makes a
+#      commit, and pushes the result. It does this N times
+#      before we get around to repacking.
+#
+# 2. Each push generates a thin pack with the new version of
+#    various objects. Let's consider some file in the root tree
+#    which is updated in each commit.
+#
+#    When generating push number X, we feed commit X-1 (and
+#    thus blob X-1) as a preferred base. The resulting pack has
+#    blob X as a thin delta against blob X-1.
+#
+#    On the receiving end, "index-pack --fix-thin" will
+#    complete the pack with a base copy of tree X-1.
+#
+# 3. In older versions of git, if we used the delta from
+#    pack X, then we'd always find blob X-1 as a base in the
+#    same pack (and generate a fresh delta).
+#
+#    But with the pack mru, we jump from delta to delta
+#    following the traversal order:
+#
+#      a. We grab blob X from pack X as a delta, putting it at
+#         the tip of our mru list.
+#
+#      b. Eventually we move onto commit X-1. We need other
+#         objects which are only in pack X-1 (in the test code
+#         below, it's the containing tree). That puts pack X-1
+#         at the tip of our mru list.
+#
+#      c. Eventually we look for blob X-1, and we find the
+#         version in pack X-1 (because it's the mru tip).
+#
+# Now we have blob X as a delta against X-1, which is a delta
+# against X-2, and so forth.
+#
+# In the real world, these small pushes would get exploded by
+# unpack-objects rather than "index-pack --fix-thin", but the
+# same principle applies to larger pushes (they only need one
+# repeatedly-modified file to generate the delta chain).
+
+test_expect_success 'create series of packs' '
+	test-genrandom foo 4096 >content &&
+	prev= &&
+	for i in $(test_seq 1 10)
+	do
+		cat content >file &&
+		echo $i >>file &&
+		git add file &&
+		git commit -m $i &&
+		cur=$(git rev-parse HEAD^{tree}) &&
+		{
+			test -n "$prev" && echo "-$prev"
+			echo $cur
+			echo "$(git rev-parse :file) file"
+		} | git pack-objects --stdout >tmp &&
+		git index-pack --stdin --fix-thin <tmp || return 1
+		prev=$cur
+	done
+'
+
+max_chain() {
+	git index-pack --verify-stat-only "$1" >output &&
+	perl -lne '
+	  /chain length = (\d+)/ and $len = $1;
+	  END { print $len }
+	' output
+}
+
+# Note that this whole setup is pretty reliant on the current
+# packing heuristics. We double-check that our test case
+# actually produces a long chain. If it doesn't, it should be
+# adjusted (or scrapped if the heuristics have become too unreliable)
+test_expect_success 'packing produces a long delta' '
+	# Use --window=0 to make sure we are seeing reused deltas,
+	# not computing a new long chain.
+	pack=$(git pack-objects --all --window=0 </dev/null pack) &&
+	test 9 = "$(max_chain pack-$pack.pack)"
+'
+
+test_expect_success '--depth limits depth' '
+	pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
+	test 5 = "$(max_chain pack-$pack.pack)"
+'
+
+test_done
-- 
2.11.0.914.gb3b960f50


* [PATCH 2/2] pack-objects: convert recursion to iteration in break_delta_chain()
  2017-01-27 22:01 [PATCH 0/2] limit reused delta chains based on --depth Jeff King
  2017-01-27 22:02 ` [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas Jeff King
@ 2017-01-27 22:05 ` Jeff King
  1 sibling, 0 replies; 6+ messages in thread
From: Jeff King @ 2017-01-27 22:05 UTC (permalink / raw)
  To: git; +Cc: Michael Haggerty

The break_delta_chain() function is recursive over the depth
of a given delta chain, which can possibly lead to running
out of stack space. Normally delta depth is quite small, but
if there _is_ a pathological case, this is where we would
find and fix it, so we should be more careful.

We can do it without recursion at all, but there's a little
bit of cleverness needed to do so. It's easiest to explain
by covering the less-clever strategies first.

The obvious thing to try is just keeping our own stack on
the heap. Whenever we would recurse, push the new entry onto
the stack and loop instead. But this gets tricky; when we
see an ACTIVE entry, we need to care if we just pushed it
(in which case it's a cycle) or if we just popped it (in
which case we dealt with its bases, and now we need to clear
the ACTIVE flag and compute its depth).

You can hack around that in various ways, like keeping a
"just pushed" flag, but the logic gets muddled. However, we
can observe that we do all of our pushes first, and then all
of our pops afterwards. In other words, we can do this in
two passes. First dig down to the base, stopping when we see
a cycle, and pushing each item onto our stack.  Then pop the
stack elements, clearing the ACTIVE flag and computing the
depth for each.

This works, and is reasonably elegant. However, why do we
need the stack for the second pass? We can just walk the
delta pointers again. There's one complication. Popping the
stack went over our list in reverse, so we could compute the
depth of each entry by incrementing the depth of its base,
which we will have just computed.  To go forward in the
second pass, we have to compute the total depth on the way
down, and then assign it as we go.
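
For example, suppose the depth limit is 2 and we reused a chain
A->B->C->D->E->F->G, where G is the non-delta base; the first pass
counts a total depth of 6 for A. The second pass then walks from A
towards G, assigning each entry a depth of total_depth modulo
(depth + 1) and decrementing total_depth as it goes: 6%3=0, 5%3=2,
4%3=1, 3%3=0, 2%3=2, 1%3=1. Every entry that comes out at depth 0
(here A and D) is snipped from its base, leaving A as a full object
plus two chains, B->C->D and E->F->G, each within the limit.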

This patch implements this final strategy, because it not
only keeps the memory off the stack, but it eliminates it
entirely. Credit for the cleverness in that approach goes to
Michael Haggerty; bugs are mine.

Signed-off-by: Jeff King <peff@peff.net>
---
The diff is nearly impossible to read, so I'd recommend just looking at
the end result. I tried to document the tricky parts with comments.
There are a few parts that could be made more terse, but I screwed it up
so many times while writing it that I decided to do it in a way that
carefully documents all of the assumptions.
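
(If you do want to read it as a diff, regenerating it with -W /
--function-context shows the whole rewritten function in one hunk,
which is a bit easier to follow.)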

 builtin/pack-objects.c | 129 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 99 insertions(+), 30 deletions(-)

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 2b08c8121..c7af47548 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1579,48 +1579,117 @@ static void drop_reused_delta(struct object_entry *entry)
  */
 static void break_delta_chains(struct object_entry *entry)
 {
-	/* If it's not a delta, it can't be part of a cycle. */
-	if (!entry->delta) {
-		entry->dfs_state = DFS_DONE;
-		return;
-	}
+	/*
+	 * The actual depth of each object we will write is stored as an int,
+	 * as it cannot exceed our int "depth" limit. But before we break
+	 * chains based on that limit, we may potentially go as deep as the
+	 * number of objects, which is elsewhere bounded to a uint32_t.
+	 */
+	uint32_t total_depth;
+	struct object_entry *cur, *next;
+
+	for (cur = entry, total_depth = 0;
+	     cur;
+	     cur = cur->delta, total_depth++) {
+		if (cur->dfs_state == DFS_DONE) {
+			/*
+			 * We've already seen this object and know it isn't
+			 * part of a cycle. We do need to append its depth
+			 * to our count.
+			 */
+			total_depth += cur->depth;
+			break;
+		}
 
-	switch (entry->dfs_state) {
-	case DFS_NONE:
 		/*
-		 * This is the first time we've seen the object. We mark it as
-		 * part of the active potential cycle and recurse.
+		 * We break cycles before looping, so an ACTIVE state (or any
+		 * other cruft which made its way into the state variable)
+		 * is a bug.
 		 */
-		entry->dfs_state = DFS_ACTIVE;
-		break_delta_chains(entry->delta);
+		if (cur->dfs_state != DFS_NONE)
+			die("BUG: confusing delta dfs state in first pass: %d",
+			    cur->dfs_state);
 
 		/*
-		 * Once we've recursed, our base (if we still have one) knows
-		 * its depth, so we can compute ours (and check it against
-		 * the limit).
+		 * Now we know this is the first time we've seen the object. If
+		 * it's not a delta, we're done traversing, but we'll mark it
+		 * done to save time on future traversals.
 		 */
-		if (entry->delta) {
-			entry->depth = entry->delta->depth + 1;
-			if (entry->depth > depth)
-				drop_reused_delta(entry);
+		if (!cur->delta) {
+			cur->dfs_state = DFS_DONE;
+			break;
 		}
 
-		entry->dfs_state = DFS_DONE;
-		break;
+		/*
+		 * Mark ourselves as active and see if the next step causes
+		 * us to cycle to another active object. It's important to do
+		 * this _before_ we loop, because it impacts where we make the
+		 * cut, and thus how our total_depth counter works.
+		 * E.g., We may see a partial loop like:
+		 *
+		 *   A -> B -> C -> D -> B
+		 *
+		 * Cutting B->C breaks the cycle. But now the depth of A is
+		 * only 1, and our total_depth counter is at 3. The size of the
+		 * error is always one less than the size of the cycle we
+		 * broke. Commits C and D were "lost" from A's chain.
+		 *
+		 * If we instead cut D->B, then the depth of A is correct at 3.
+		 * We keep all commits in the chain that we examined.
+		 */
+		cur->dfs_state = DFS_ACTIVE;
+		if (cur->delta->dfs_state == DFS_ACTIVE) {
+			drop_reused_delta(cur);
+			cur->dfs_state = DFS_DONE;
+			break;
+		}
+	}
 
-	case DFS_DONE:
-		/* object already examined, and not part of a cycle */
-		break;
+	/*
+	 * And now that we've gone all the way to the bottom of the chain, we
+	 * need to clear the active flags and set the depth fields as
+	 * appropriate. Unlike the loop above, which can quit when it drops a
+	 * delta, we need to keep going to look for more depth cuts. So we need
+	 * an extra "next" pointer to keep going after we reset cur->delta.
+	 */
+	for (cur = entry; cur; cur = next) {
+		next = cur->delta;
 
-	case DFS_ACTIVE:
 		/*
-		 * We found a cycle that needs broken. It would be correct to
-		 * break any link in the chain, but it's convenient to
-		 * break this one.
+		 * We should have a chain of zero or more ACTIVE states down to
+		 * a final DONE. We can quit after the DONE, because either it
+		 * has no bases, or we've already handled them in a previous
+		 * call.
 		 */
-		drop_reused_delta(entry);
-		entry->dfs_state = DFS_DONE;
-		break;
+		if (cur->dfs_state == DFS_DONE)
+			break;
+		else if (cur->dfs_state != DFS_ACTIVE)
+			die("BUG: confusing delta dfs state in second pass: %d",
+			    cur->dfs_state);
+
+		/*
+		 * If the total_depth is more than depth, then we need to snip
+		 * the chain into two or more smaller chains that don't exceed
+		 * the maximum depth. Most of the resulting chains will contain
+		 * (depth + 1) entries (i.e., depth deltas plus one base), and
+		 * the last chain (i.e., the one containing entry) will contain
+		 * whatever entries are left over, namely
+		 * (total_depth % (depth + 1)) of them.
+		 *
+		 * Since we are iterating towards decreasing depth, we need to
+		 * decrement total_depth as we go, and we need to write to the
+		 * entry what its final depth will be after all of the
+		 * snipping. Since we're snipping into chains of length (depth
+		 * + 1) entries, the final depth of an entry will be its
+		 * original depth modulo (depth + 1). Any time we encounter an
+		 * entry whose final depth is supposed to be zero, we snip it
+		 * from its delta base, thereby making it so.
+		 */
+		cur->depth = (total_depth--) % (depth + 1);
+		if (!cur->depth)
+			drop_reused_delta(cur);
+
+		cur->dfs_state = DFS_DONE;
 	}
 }
 
-- 
2.11.0.914.gb3b960f50

* Re: [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas
  2017-01-27 22:02 ` [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas Jeff King
@ 2017-01-27 23:31   ` Junio C Hamano
  2017-01-28  0:09     ` Jeff King
  0 siblings, 1 reply; 6+ messages in thread
From: Junio C Hamano @ 2017-01-27 23:31 UTC (permalink / raw)
  To: Jeff King; +Cc: git, Michael Haggerty

Jeff King <peff@peff.net> writes:

> Since 898b14c (pack-objects: rework check_delta_limit usage,
> 2007-04-16), we check the delta depth limit only when
> figuring out whether we should make a new delta. We don't
> consider it at all when reusing deltas, which means that
> packing once with --depth=250, and then again with
> --depth=50, the second pack my still contain chains larger
> than 50.

"may still contain", methinks.

> ...
>
> This patch bounds the length of delta chains in the output
> pack based on --depth, regardless of whether they are caused
> by cross-pack deltas or existed in the input packs. This
> fixes the problem, but does have two possible downsides:
>
>   1. High-depth aggressive repacks followed by "normal"
>      repacks will throw away the high-depth chains.

I actually think it is a feature that the normal one that runs later
is allowed to fix an over-deep delta chain.

>   2. If you really do want to store high-depth deltas on
>      disk, they may be discarded and new delta computed when
>      serving a fetch, unless you set pack.depth to match
>      your high-depth size.

Likewise.

> ... But we may
> visit an object through multiple delta paths, ...

Yes, good thinking.

> diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
> index 8841f8b36..2b08c8121 100644
> --- a/builtin/pack-objects.c
> +++ b/builtin/pack-objects.c
> @@ -1539,6 +1539,8 @@ static int pack_offset_sort(const void *_a, const void *_b)
>   *   2. Updating our size/type to the non-delta representation. These were
>   *      either not recorded initially (size) or overwritten with the delta type
>   *      (type) when check_object() decided to reuse the delta.
> + *
> + *   3. Resetting our delta depth, as we are now a base object.
>   */
>  static void drop_reused_delta(struct object_entry *entry)
>  {
> @@ -1552,6 +1554,7 @@ static void drop_reused_delta(struct object_entry *entry)
>  			p = &(*p)->delta_sibling;
>  	}
>  	entry->delta = NULL;
> +	entry->depth = 0;
>  
>  	oi.sizep = &entry->size;
>  	oi.typep = &entry->type;

Makes sense.

>  static void break_delta_chains(struct object_entry *entry)
>  {
> @@ -1587,6 +1593,18 @@ static void break_delta_chains(struct object_entry *entry)
>  		 */
>  		entry->dfs_state = DFS_ACTIVE;
>  		break_delta_chains(entry->delta);
> +
> +		/*
> +		 * Once we've recursed, our base (if we still have one) knows
> +		 * its depth, so we can compute ours (and check it against
> +		 * the limit).
> +		 */
> +		if (entry->delta) {
> +			entry->depth = entry->delta->depth + 1;
> +			if (entry->depth > depth)
> +				drop_reused_delta(entry);
> +		}

;-)  Surprisingly simple and effective.

> diff --git a/t/t5316-pack-delta-depth.sh b/t/t5316-pack-delta-depth.sh
> new file mode 100755
> index 000000000..236d60fe6
> --- /dev/null
> +++ b/t/t5316-pack-delta-depth.sh
> @@ -0,0 +1,93 @@
> +#!/bin/sh
> +
> +test_description='pack-objects breaks long cross-pack delta chains'
> +. ./test-lib.sh
> +
> +# This mirrors a repeated push setup:
> +#
> +# 1. A client repeatedly modifies some files, makes a
> +#      commit, and pushes the result. It does this N times
> +#      before we get around to repacking.
> +#
> +# 2. Each push generates a thin pack with the new version of
> +#    various objects. Let's consider some file in the root tree
> +#    which is updated in each commit.
> +#
> +#    When generating push number X, we feed commit X-1 (and
> +#    thus blob X-1) as a preferred base. The resulting pack has
> +#    blob X as a thin delta against blob X-1.
> +#
> +#    On the receiving end, "index-pack --fix-thin" will
> +#    complete the pack with a base copy of tree X-1.

blob? tree? I think the argument would work the same way for either
type of objects, but the previous paragraph is using blob as the
example, so s/tree/blob/ here?

> +# 3. In older versions of git, if we used the delta from
> +#    pack X, then we'd always find blob X-1 as a base in the
> +#    same pack (and generate a fresh delta).
> +#
> +#    But with the pack mru, we jump from delta to delta
> +#    following the traversal order:
> +# ...
> +# Now we have blob X as a delta against X-1, which is a delta
> +# against X-2, and so forth.

> +# Note that this whole setup is pretty reliant on the current
> +# packing heuristics. We double-check that our test case
> +# actually produces a long chain. If it doesn't, it should be
> +# adjusted (or scrapped if the heuristics have become too unreliable)

IOW, we want something that says "we first check X and if X still
holds, then we expect Y to also hold; if X no longer holds, do not
bother to test that Y holds".  Nice food for thought.  Perhaps we
want a way to express that in our test framework, or is X in the
above merely another way to say "prerequisite"?

> +test_expect_success 'packing produces a long delta' '
> +	# Use --window=0 to make sure we are seeing reused deltas,
> +	# not computing a new long chain.
> +	pack=$(git pack-objects --all --window=0 </dev/null pack) &&
> +	test 9 = "$(max_chain pack-$pack.pack)"
> +'
> +
> +test_expect_success '--depth limits depth' '
> +	pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
> +	test 5 = "$(max_chain pack-$pack.pack)"
> +'
> +
> +test_done

* Re: [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas
  2017-01-27 23:31   ` Junio C Hamano
@ 2017-01-28  0:09     ` Jeff King
  2017-01-28  0:27       ` Junio C Hamano
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff King @ 2017-01-28  0:09 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git, Michael Haggerty

On Fri, Jan 27, 2017 at 03:31:36PM -0800, Junio C Hamano wrote:

> Jeff King <peff@peff.net> writes:
> 
> > Since 898b14c (pack-objects: rework check_delta_limit usage,
> > 2007-04-16), we check the delta depth limit only when
> > figuring out whether we should make a new delta. We don't
> > consider it at all when reusing deltas, which means that
> > packing once with --depth=250, and then again with
> > --depth=50, the second pack my still contain chains larger
> > than 50.
> 
> "may still contain", methinks.

Oops, yes. There was another typo there that I fixed while proofreading,
and clearly I made it worse. :-/

> > This patch bounds the length of delta chains in the output
> > pack based on --depth, regardless of whether they are caused
> > by cross-pack deltas or existed in the input packs. This
> > fixes the problem, but does have two possible downsides:
> >
> >   1. High-depth aggressive repacks followed by "normal"
> >      repacks will throw away the high-depth chains.
> 
> I actually think it is a feature that the normal one that runs later
> is allowed to fix an over-deep delta chain.

Yeah, I saw you expressing that sentiment in the earlier thread, and I
think I agree with it.

The big problem to me is mostly that people may be surprised by the
change of behavior if they have a complicated setup. But those people
read the release notes, right? ;)

Arguably setting gc.aggressiveDepth is a mistake after this patch (you
should just set pack.depth).  Possibly we should ignore it with a
warning.
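
I.e., somebody who really does want deep on-disk chains to survive
would, after this patch, do something like:

  git config pack.depth 250

(substituting whatever depth they actually repack with), since
pack-objects falls back to pack.depth whenever --depth is not given
on the command line.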

> > +#    On the receiving end, "index-pack --fix-thin" will
> > +#    complete the pack with a base copy of tree X-1.
> 
> blob? tree? I think the argument would work the same way for either
> type of objects, but the previous paragraph is using blob as the
> example, so s/tree/blob/ here?

Oops, yes. I had originally sketched out the example to use trees, but
it was easier to assemble with blobs so I switched (I never actually dug
into my "wild" case to see what it was segfaulting on, but I agree it
applies equally well to either).

> > +# Note that this whole setup is pretty reliant on the current
> > +# packing heuristics. We double-check that our test case
> > +# actually produces a long chain. If it doesn't, it should be
> > +# adjusted (or scrapped if the heuristics have become too unreliable)
> 
> IOW, we want something that says "we first check X and if X still
> holds, then we expect Y to also hold; if X no longer holds, do not
> bother to test that Y holds".  Nice food for thought.  Perhaps we
> want a way to express that in our test framework, or is X in the
> above merely another way to say "prerequisite"?

It _is_ a prerequisite, but I think unlike our normal prerequisites, we
wouldn't want to just quietly skip the test if it fails. Because it's
not "oops, this system doesn't support this test" so much as "something
in Git changed, and a human needs to evaluate whether this test can
still be performed".
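
E.g., the mechanical conversion would be simple enough (untested):

  test_lazy_prereq LONG_DELTA_CHAIN '
	pack=$(git pack-objects --all --window=0 </dev/null pack) &&
	test 9 = "$(max_chain pack-$pack.pack)"
  '

  test_expect_success LONG_DELTA_CHAIN '--depth limits depth' '
	pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
	test 5 = "$(max_chain pack-$pack.pack)"
  '

but then a change in the packing heuristics would just silently turn
the --depth test into a no-op, rather than prompting anybody to look
at it.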

So I hoped that if that prerequisite test ever broke due to a change in
pack-objects, the person making the change would find the comment and
decide the appropriate next step.

I'll include a version below fixing the typos you found, in case you did
not just mark them up already.

-- >8 --
Subject: [PATCH] pack-objects: enforce --depth limit in reused deltas

Since 898b14c (pack-objects: rework check_delta_limit usage,
2007-04-16), we check the delta depth limit only when
figuring out whether we should make a new delta. We don't
consider it at all when reusing deltas, which means that
packing once with --depth=250, and then again with
--depth=50, the second pack may still contain chains larger
than 50.

This is generally considered a feature, as the results of
earlier high-depth repacks are carried forward, used for
serving fetches, etc. However, since we started using
cross-pack deltas in c9af708b1 (pack-objects: use mru list
when iterating over packs, 2016-08-11), we are no longer
bounded by the length of an existing delta chain in a single
pack.

Here's one particular pathological case: a sequence of N
packs, each with 2 objects, the base of which is stored as a
delta in a previous pack. If we chain all the deltas
together, we have a cycle of length N. We break the cycle,
but the tip delta is still at depth N-1.

This is less unlikely than it might sound. See the included
test for a reconstruction based on real-world actions.  I
ran into such a case in the wild, where a client was rapidly
sending packs, and we had accumulated 10,000 before doing a
server-side repack.  The pack that "git repack" tried to
generate had a very deep chain, which caused pack-objects to
run out of stack space in the recursive write_one().

This patch bounds the length of delta chains in the output
pack based on --depth, regardless of whether they are caused
by cross-pack deltas or existed in the input packs. This
fixes the problem, but does have two possible downsides:

  1. High-depth aggressive repacks followed by "normal"
     repacks will throw away the high-depth chains.

     In the long run this is probably OK; investigation
     showed that high-depth repacks aren't actually
     beneficial, and we dropped the aggressive depth default
     to match the normal case in 07e7dbf0d (gc: default
     aggressive depth to 50, 2016-08-11).

  2. If you really do want to store high-depth deltas on
     disk, they may be discarded and new delta computed when
     serving a fetch, unless you set pack.depth to match
     your high-depth size.

The implementation uses the existing search for delta
cycles.  That lets us compute the depth of any node based on
the depth of its base, because we know the base is DFS_DONE
by the time we look at it (modulo any cycles in the graph,
but we know there cannot be any because we break them as we
see them).

There is some subtlety worth mentioning, though. We record
the depth of each object as we compute it. It might seem
like we could save the per-object storage space by just
keeping track of the depth of our traversal (i.e., have
break_delta_chains() report how deep it went). But we may
visit an object through multiple delta paths, and on
subsequent paths we want to know its depth immediately,
without having to walk back down to its final base (doing so
would make our graph walk quadratic rather than linear).

Likewise, one could try to record the depth not from the
base, but from our starting point (i.e., start
recursion_depth at 0, and pass "recursion_depth + 1" to each
invocation of break_delta_chains()). And then when
recursion_depth gets too big, we know that we must cut the
delta chain.  But that technique is wrong if we do not visit
the nodes in topological order. In a chain A->B->C, if
we visit "C", then "B", then "A", we will never recurse
deeper than 1 link (because we see at each node that we have
already visited it).

Signed-off-by: Jeff King <peff@peff.net>
---
 builtin/pack-objects.c      | 18 +++++++++
 pack-objects.h              |  4 ++
 t/t5316-pack-delta-depth.sh | 93 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 115 insertions(+)
 create mode 100755 t/t5316-pack-delta-depth.sh

diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
index 8841f8b36..2b08c8121 100644
--- a/builtin/pack-objects.c
+++ b/builtin/pack-objects.c
@@ -1539,6 +1539,8 @@ static int pack_offset_sort(const void *_a, const void *_b)
  *   2. Updating our size/type to the non-delta representation. These were
  *      either not recorded initially (size) or overwritten with the delta type
  *      (type) when check_object() decided to reuse the delta.
+ *
+ *   3. Resetting our delta depth, as we are now a base object.
  */
 static void drop_reused_delta(struct object_entry *entry)
 {
@@ -1552,6 +1554,7 @@ static void drop_reused_delta(struct object_entry *entry)
 			p = &(*p)->delta_sibling;
 	}
 	entry->delta = NULL;
+	entry->depth = 0;
 
 	oi.sizep = &entry->size;
 	oi.typep = &entry->type;
@@ -1570,6 +1573,9 @@ static void drop_reused_delta(struct object_entry *entry)
  * Follow the chain of deltas from this entry onward, throwing away any links
  * that cause us to hit a cycle (as determined by the DFS state flags in
  * the entries).
+ *
+ * We also detect too-long reused chains that would violate our --depth
+ * limit.
  */
 static void break_delta_chains(struct object_entry *entry)
 {
@@ -1587,6 +1593,18 @@ static void break_delta_chains(struct object_entry *entry)
 		 */
 		entry->dfs_state = DFS_ACTIVE;
 		break_delta_chains(entry->delta);
+
+		/*
+		 * Once we've recursed, our base (if we still have one) knows
+		 * its depth, so we can compute ours (and check it against
+		 * the limit).
+		 */
+		if (entry->delta) {
+			entry->depth = entry->delta->depth + 1;
+			if (entry->depth > depth)
+				drop_reused_delta(entry);
+		}
+
 		entry->dfs_state = DFS_DONE;
 		break;
 
diff --git a/pack-objects.h b/pack-objects.h
index cc9b9a9b9..03f119165 100644
--- a/pack-objects.h
+++ b/pack-objects.h
@@ -30,12 +30,16 @@ struct object_entry {
 
 	/*
 	 * State flags for depth-first search used for analyzing delta cycles.
+	 *
+	 * The depth is measured in delta-links to the base (so if A is a delta
+	 * against B, then A has a depth of 1, and B a depth of 0).
 	 */
 	enum {
 		DFS_NONE = 0,
 		DFS_ACTIVE,
 		DFS_DONE
 	} dfs_state;
+	int depth;
 };
 
 struct packing_data {
diff --git a/t/t5316-pack-delta-depth.sh b/t/t5316-pack-delta-depth.sh
new file mode 100755
index 000000000..236d60fe6
--- /dev/null
+++ b/t/t5316-pack-delta-depth.sh
@@ -0,0 +1,93 @@
+#!/bin/sh
+
+test_description='pack-objects breaks long cross-pack delta chains'
+. ./test-lib.sh
+
+# This mirrors a repeated push setup:
+#
+# 1. A client repeatedly modifies some files, makes a
+#      commit, and pushes the result. It does this N times
+#      before we get around to repacking.
+#
+# 2. Each push generates a thin pack with the new version of
+#    various objects. Let's consider some file in the root tree
+#    which is updated in each commit.
+#
+#    When generating push number X, we feed commit X-1 (and
+#    thus blob X-1) as a preferred base. The resulting pack has
+#    blob X as a thin delta against blob X-1.
+#
+#    On the receiving end, "index-pack --fix-thin" will
+#    complete the pack with a base copy of tree X-1.
+#
+# 3. In older versions of git, if we used the delta from
+#    pack X, then we'd always find blob X-1 as a base in the
+#    same pack (and generate a fresh delta).
+#
+#    But with the pack mru, we jump from delta to delta
+#    following the traversal order:
+#
+#      a. We grab blob X from pack X as a delta, putting it at
+#         the tip of our mru list.
+#
+#      b. Eventually we move onto commit X-1. We need other
+#         objects which are only in pack X-1 (in the test code
+#         below, it's the containing tree). That puts pack X-1
+#         at the tip of our mru list.
+#
+#      c. Eventually we look for blob X-1, and we find the
+#         version in pack X-1 (because it's the mru tip).
+#
+# Now we have blob X as a delta against X-1, which is a delta
+# against X-2, and so forth.
+#
+# In the real world, these small pushes would get exploded by
+# unpack-objects rather than "index-pack --fix-thin", but the
+# same principle applies to larger pushes (they only need one
+# repeatedly-modified file to generate the delta chain).
+
+test_expect_success 'create series of packs' '
+	test-genrandom foo 4096 >content &&
+	prev= &&
+	for i in $(test_seq 1 10)
+	do
+		cat content >file &&
+		echo $i >>file &&
+		git add file &&
+		git commit -m $i &&
+		cur=$(git rev-parse HEAD^{tree}) &&
+		{
+			test -n "$prev" && echo "-$prev"
+			echo $cur
+			echo "$(git rev-parse :file) file"
+		} | git pack-objects --stdout >tmp &&
+		git index-pack --stdin --fix-thin <tmp || return 1
+		prev=$cur
+	done
+'
+
+max_chain() {
+	git index-pack --verify-stat-only "$1" >output &&
+	perl -lne '
+	  /chain length = (\d+)/ and $len = $1;
+	  END { print $len }
+	' output
+}
+
+# Note that this whole setup is pretty reliant on the current
+# packing heuristics. We double-check that our test case
+# actually produces a long chain. If it doesn't, it should be
+# adjusted (or scrapped if the heuristics have become too unreliable)
+test_expect_success 'packing produces a long delta' '
+	# Use --window=0 to make sure we are seeing reused deltas,
+	# not computing a new long chain.
+	pack=$(git pack-objects --all --window=0 </dev/null pack) &&
+	test 9 = "$(max_chain pack-$pack.pack)"
+'
+
+test_expect_success '--depth limits depth' '
+	pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
+	test 5 = "$(max_chain pack-$pack.pack)"
+'
+
+test_done
-- 
2.11.0.914.gb3b960f50



* Re: [PATCH 1/2] pack-objects: enforce --depth limit in reused deltas
  2017-01-28  0:09     ` Jeff King
@ 2017-01-28  0:27       ` Junio C Hamano
  0 siblings, 0 replies; 6+ messages in thread
From: Junio C Hamano @ 2017-01-28  0:27 UTC (permalink / raw)
  To: Jeff King; +Cc: git, Michael Haggerty

Jeff King <peff@peff.net> writes:

>> > +#    On the receiving end, "index-pack --fix-thin" will
>> > +#    complete the pack with a base copy of tree X-1.
>> 
>> blob? tree? I think the argument would work the same way for either
>> type of objects, but the previous paragraph is using blob as the
>> example, so s/tree/blob/ here?
>
> Oops, yes. I had originally sketched out the example to use trees, but
> it was easier to assemble with blobs so I switched (I never actually dug
> into my "wild" case to see what it was segfaulting on, but I agree it
> applies equally well to either).

OK, then I'll squash in s/tree/blob/ here, in addition to the "my"
thing.  Thanks.
