* [PATCH v2 0/2] oidset: use khash
@ 2018-10-03 13:11 René Scharfe
  2018-10-03 13:12 ` [PATCH v2 1/2] khash: factor out kh_release_* René Scharfe
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-03 13:11 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King

Continue the discussion of speeding up oidset started in [1] here in
its own thread.

[1] https://public-inbox.org/git/20180811172350.GA2689@sigill.intra.peff.net/

The first patch does a mild refactoring to support khash structures
on the stack, and the second one converts oidset to khash.

  khash: factor out kh_release_*
  oidset: use khash

 fetch-pack.c |  2 +-
 khash.h      |  9 +++++++--
 oidset.c     | 34 ++++++++++++----------------------
 oidset.h     | 36 ++++++++++++++++++++++++++++--------
 4 files changed, 48 insertions(+), 33 deletions(-)

-- 
2.19.0

* [PATCH v2 1/2] khash: factor out kh_release_*
  2018-10-03 13:11 [PATCH v2 0/2] oidset: use khash René Scharfe
@ 2018-10-03 13:12 ` René Scharfe
  2018-10-03 13:16 ` [PATCH v2 2/2] oidset: use khash René Scharfe
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
  2 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-03 13:12 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King

Add a function for releasing the khash-internal allocations, but not the
khash structure itself.  It can be used with on-stack khash structs.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
1 tab = 4 spaces here.
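
An aside, not part of the patch: with kh_release_* an on-stack khash
needs no kh_init_*/kh_destroy_* calls.  A minimal sketch, assuming the
kh_oid type that patch 2/2 declares via KHASH_INIT:

  kh_oid_t set = { 0 };   /* on the stack, zero-initialized */
  int added;

  kh_put_oid(&set, *oid, &added);   /* oid: some struct object_id * */
  /* ... use the set ... */
  kh_release_oid(&set);   /* frees the internal arrays, not &set itself */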

 khash.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/khash.h b/khash.h
index 07b4cc2e67..d10caa0c35 100644
--- a/khash.h
+++ b/khash.h
@@ -82,11 +82,16 @@ static const double __ac_HASH_UPPER = 0.77;
 	SCOPE kh_##name##_t *kh_init_##name(void) {							\
 		return (kh_##name##_t*)xcalloc(1, sizeof(kh_##name##_t));		\
 	}																	\
+	SCOPE void kh_release_##name(kh_##name##_t *h)						\
+	{																	\
+		free(h->flags);													\
+		free((void *)h->keys);											\
+		free((void *)h->vals);											\
+	}																	\
 	SCOPE void kh_destroy_##name(kh_##name##_t *h)						\
 	{																	\
 		if (h) {														\
-			free((void *)h->keys); free(h->flags);					\
-			free((void *)h->vals);										\
+			kh_release_##name(h);										\
 			free(h);													\
 		}																\
 	}																	\
-- 
2.19.0

* [PATCH v2 2/2] oidset: use khash
  2018-10-03 13:11 [PATCH v2 0/2] oidset: use khash René Scharfe
  2018-10-03 13:12 ` [PATCH v2 1/2] khash: factor out kh_release_* René Scharfe
@ 2018-10-03 13:16 ` René Scharfe
  2018-10-03 19:40   ` Jeff King
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
  2 siblings, 1 reply; 26+ messages in thread
From: René Scharfe @ 2018-10-03 13:16 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King

Reimplement oidset using khash.h in order to reduce its memory footprint
and make it faster.

Performance of a command that mainly checks for duplicate objects using
an oidset, with master and Clang 6.0.1:

  $ cmd="./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'"

  $ /usr/bin/time $cmd >/dev/null
  0.22user 0.03system 0:00.25elapsed 99%CPU (0avgtext+0avgdata 48484maxresident)k
  0inputs+0outputs (0major+11204minor)pagefaults 0swaps

  $ hyperfine "$cmd"
  Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'

    Time (mean ± σ):     250.0 ms ±   6.0 ms    [User: 225.9 ms, System: 23.6 ms]

    Range (min … max):   242.0 ms … 261.1 ms

And with this patch:

  $ /usr/bin/time $cmd >/dev/null
  0.14user 0.00system 0:00.15elapsed 100%CPU (0avgtext+0avgdata 41396maxresident)k
  0inputs+0outputs (0major+8318minor)pagefaults 0swaps

  $ hyperfine "$cmd"
  Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'

    Time (mean ± σ):     151.9 ms ±   4.9 ms    [User: 130.5 ms, System: 21.2 ms]

    Range (min … max):   148.2 ms … 170.4 ms

Initial-patch-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
 fetch-pack.c |  2 +-
 oidset.c     | 34 ++++++++++++----------------------
 oidset.h     | 36 ++++++++++++++++++++++++++++--------
 3 files changed, 41 insertions(+), 31 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index 75047a4b2a..a839315726 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -536,7 +536,7 @@ static int tip_oids_contain(struct oidset *tip_oids,
 	 * add to "newlist" between calls, the additions will always be for
 	 * oids that are already in the set.
 	 */
-	if (!tip_oids->map.map.tablesize) {
+	if (!tip_oids->set.n_buckets) {
 		add_refs_to_oidset(tip_oids, unmatched);
 		add_refs_to_oidset(tip_oids, newlist);
 	}
diff --git a/oidset.c b/oidset.c
index 454c54f933..9836d427ef 100644
--- a/oidset.c
+++ b/oidset.c
@@ -3,38 +3,28 @@
 
 int oidset_contains(const struct oidset *set, const struct object_id *oid)
 {
-	if (!set->map.map.tablesize)
-		return 0;
-	return !!oidmap_get(&set->map, oid);
+	khiter_t pos = kh_get_oid(&set->set, *oid);
+	return pos != kh_end(&set->set);
 }
 
 int oidset_insert(struct oidset *set, const struct object_id *oid)
 {
-	struct oidmap_entry *entry;
-
-	if (!set->map.map.tablesize)
-		oidmap_init(&set->map, 0);
-	else if (oidset_contains(set, oid))
-		return 1;
-
-	entry = xmalloc(sizeof(*entry));
-	oidcpy(&entry->oid, oid);
-
-	oidmap_put(&set->map, entry);
-	return 0;
+	int added;
+	kh_put_oid(&set->set, *oid, &added);
+	return !added;
 }
 
 int oidset_remove(struct oidset *set, const struct object_id *oid)
 {
-	struct oidmap_entry *entry;
-
-	entry = oidmap_remove(&set->map, oid);
-	free(entry);
-
-	return (entry != NULL);
+	khiter_t pos = kh_get_oid(&set->set, *oid);
+	if (pos == kh_end(&set->set))
+		return 0;
+	kh_del_oid(&set->set, pos);
+	return 1;
 }
 
 void oidset_clear(struct oidset *set)
 {
-	oidmap_free(&set->map, 1);
+	kh_release_oid(&set->set);
+	oidset_init(set, 0);
 }
diff --git a/oidset.h b/oidset.h
index 40ec5f87fe..4b90540cd4 100644
--- a/oidset.h
+++ b/oidset.h
@@ -1,7 +1,8 @@
 #ifndef OIDSET_H
 #define OIDSET_H
 
-#include "oidmap.h"
+#include "hashmap.h"
+#include "khash.h"
 
 /**
  * This API is similar to sha1-array, in that it maintains a set of object ids
@@ -15,19 +16,33 @@
  *      table overhead.
  */
 
+static inline unsigned int oid_hash(struct object_id oid)
+{
+	return sha1hash(oid.hash);
+}
+
+static inline int oid_equal(struct object_id a, struct object_id b)
+{
+	return oideq(&a, &b);
+}
+
+KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
+
 /**
  * A single oidset; should be zero-initialized (or use OIDSET_INIT).
  */
 struct oidset {
-	struct oidmap map;
+	kh_oid_t set;
 };
 
-#define OIDSET_INIT { OIDMAP_INIT }
+#define OIDSET_INIT { { 0 } }
 
 
 static inline void oidset_init(struct oidset *set, size_t initial_size)
 {
-	oidmap_init(&set->map, initial_size);
+	memset(&set->set, 0, sizeof(set->set));
+	if (initial_size)
+		kh_resize_oid(&set->set, initial_size);
 }
 
 /**
@@ -58,19 +73,24 @@ int oidset_remove(struct oidset *set, const struct object_id *oid);
 void oidset_clear(struct oidset *set);
 
 struct oidset_iter {
-	struct oidmap_iter m_iter;
+	kh_oid_t *set;
+	khiter_t iter;
 };
 
 static inline void oidset_iter_init(struct oidset *set,
 				    struct oidset_iter *iter)
 {
-	oidmap_iter_init(&set->map, &iter->m_iter);
+	iter->set = &set->set;
+	iter->iter = kh_begin(iter->set);
 }
 
 static inline struct object_id *oidset_iter_next(struct oidset_iter *iter)
 {
-	struct oidmap_entry *e = oidmap_iter_next(&iter->m_iter);
-	return e ? &e->oid : NULL;
+	for (; iter->iter != kh_end(iter->set); iter->iter++) {
+		if (kh_exist(iter->set, iter->iter))
+			return &kh_key(iter->set, iter->iter++);
+	}
+	return NULL;
 }
 
 static inline struct object_id *oidset_iter_first(struct oidset *set,
-- 
2.19.0


* Re: [PATCH v2 2/2] oidset: use khash
  2018-10-03 13:16 ` [PATCH v2 2/2] oidset: use khash René Scharfe
@ 2018-10-03 19:40   ` Jeff King
  2018-10-04  5:56     ` René Scharfe
  0 siblings, 1 reply; 26+ messages in thread
From: Jeff King @ 2018-10-03 19:40 UTC
  To: René Scharfe; +Cc: Git List, Junio C Hamano

On Wed, Oct 03, 2018 at 03:16:39PM +0200, René Scharfe wrote:

> Performance of a command that mainly checks for duplicate objects using
> an oidset, with master and Clang 6.0.1:
> 
>   $ cmd="./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'"
> 
>   $ /usr/bin/time $cmd >/dev/null
>   0.22user 0.03system 0:00.25elapsed 99%CPU (0avgtext+0avgdata 48484maxresident)k
>   0inputs+0outputs (0major+11204minor)pagefaults 0swaps
> 
>   $ hyperfine "$cmd"
>   Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'
> 
>     Time (mean ± σ):     250.0 ms ±   6.0 ms    [User: 225.9 ms, System: 23.6 ms]
> 
>     Range (min … max):   242.0 ms … 261.1 ms
> 
> And with this patch:
> 
>   $ /usr/bin/time $cmd >/dev/null
>   0.14user 0.00system 0:00.15elapsed 100%CPU (0avgtext+0avgdata 41396maxresident)k
>   0inputs+0outputs (0major+8318minor)pagefaults 0swaps
> 
>   $ hyperfine "$cmd"
>   Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'
> 
>     Time (mean ± σ):     151.9 ms ±   4.9 ms    [User: 130.5 ms, System: 21.2 ms]
> 
>     Range (min … max):   148.2 ms … 170.4 ms

Thanks. Just to reinforce these timings, a similar command on linux.git
drops from 4.0s to 2.3s.

The patch itself mostly looks good. A few small comments:

> diff --git a/fetch-pack.c b/fetch-pack.c
> index 75047a4b2a..a839315726 100644
> --- a/fetch-pack.c
> +++ b/fetch-pack.c
> @@ -536,7 +536,7 @@ static int tip_oids_contain(struct oidset *tip_oids,
>  	 * add to "newlist" between calls, the additions will always be for
>  	 * oids that are already in the set.
>  	 */
> -	if (!tip_oids->map.map.tablesize) {
> +	if (!tip_oids->set.n_buckets) {
>  		add_refs_to_oidset(tip_oids, unmatched);
>  		add_refs_to_oidset(tip_oids, newlist);
>  	}

This is a little intimate with the implementation of khash, but I think
it's probably OK (and really no worse than what was there before).

As the comment above notes, I think we're really looking at the case
where this gets populated on the first call, but not subsequent ones. It
might be less hacky to use a "static int initialized" here. Or if we
want to avoid hidden globals, put the logic into filter_refs() to decide
when to populate.

I don't think any of that needs to hold up this series, though.

>  int oidset_insert(struct oidset *set, const struct object_id *oid)
>  {
> -	struct oidmap_entry *entry;
> -
> -	if (!set->map.map.tablesize)
> -		oidmap_init(&set->map, 0);
> -	else if (oidset_contains(set, oid))
> -		return 1;
> -
> -	entry = xmalloc(sizeof(*entry));
> -	oidcpy(&entry->oid, oid);
> -
> -	oidmap_put(&set->map, entry);
> -	return 0;
> +	int added;
> +	kh_put_oid(&set->set, *oid, &added);
> +	return !added;
>  }

This actually does the check-and-insert in a single lookup, which is
nice. ;)
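
For reference, khash's put operation reports through its last argument
whether the key was newly added, which is what enables the single
lookup:

  int added;
  kh_put_oid(&set->set, *oid, &added);
  return !added;   /* added == 0 means the oid was already present */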

> diff --git a/oidset.h b/oidset.h
> index 40ec5f87fe..4b90540cd4 100644
> --- a/oidset.h
> +++ b/oidset.h
> [...]
> +KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)

This will declare these "static inline". Our other major "macros become
inline functions" code is commit-slab.h, and there we found it necessary
to add MAYBE_UNUSED. I wonder if we ought to be doing the same here (I
don't get any warnings, but I suspect sparse might complain).

It might be nice if these functions could hide inside oidset.c (and just
declare the struct here). It looks like we might be able to do that with
__KHASH_TYPE(), but the double-underscore implies that we're not
supposed to. ;)

I guess we also use a few of them in our inlines here. I'm not 100% sure
that oidset_* needs to be inlined either, but this is at least a pretty
faithful conversion of the original.

-Peff

* Re: [PATCH v2 2/2] oidset: use khash
  2018-10-03 19:40   ` Jeff King
@ 2018-10-04  5:56     ` René Scharfe
  2018-10-04  6:48       ` Jeff King
  0 siblings, 1 reply; 26+ messages in thread
From: René Scharfe @ 2018-10-04  5:56 UTC
  To: Jeff King; +Cc: Git List, Junio C Hamano

Am 03.10.2018 um 21:40 schrieb Jeff King:
> On Wed, Oct 03, 2018 at 03:16:39PM +0200, René Scharfe wrote:
>> diff --git a/fetch-pack.c b/fetch-pack.c
>> index 75047a4b2a..a839315726 100644
>> --- a/fetch-pack.c
>> +++ b/fetch-pack.c
>> @@ -536,7 +536,7 @@ static int tip_oids_contain(struct oidset *tip_oids,
>>  	 * add to "newlist" between calls, the additions will always be for
>>  	 * oids that are already in the set.
>>  	 */
>> -	if (!tip_oids->map.map.tablesize) {
>> +	if (!tip_oids->set.n_buckets) {
>>  		add_refs_to_oidset(tip_oids, unmatched);
>>  		add_refs_to_oidset(tip_oids, newlist);
>>  	}
> 
> This is a little intimate with the implementation of khash, but I think
> it's probably OK (and really no worse than what was there before).
> 
> As the comment above notes, I think we're really looking at the case
> where this gets populated on the first call, but not subsequent ones. It
> might be less hacky to use a "static int initialized" here. Or if we
> want to avoid hidden globals, put the logic into filter_refs() to decide
> when to populate.

Right.  I'd prefer the latter, but was unable to find a nice way that
still populates the oidset lazily.  It's certainly worth another look,
and a separate series.

>> diff --git a/oidset.h b/oidset.h
>> index 40ec5f87fe..4b90540cd4 100644
>> --- a/oidset.h
>> +++ b/oidset.h
>> [...]
>> +KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
> 
> This will declare these "static inline". Our other major "macros become
> inline functions" code is commit-slab.h, and there we found it necessary
> to add MAYBE_UNUSED. I wonder if we ought to be doing the same here (I
> don't get any warnings, but I suspect sparse might complain).

I doubt it (but didn't check) because khash.h defines kh_clear_##name(),
which we don't use anywhere and there have been no complaints so far.
And if we wanted to add MAYBE_UNUSED then the right place for that would
be in KHASH_INIT, no?
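
Something like this, sketched against khash.h's KHASH_INIT2 (untested,
and assuming MAYBE_UNUSED can simply be part of the SCOPE argument):

  #define KHASH_INIT(name, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal) \
          KHASH_INIT2(name, MAYBE_UNUSED static inline, khkey_t, khval_t, kh_is_map, __hash_func, __hash_equal)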

> It might be nice if these functions could hide inside oidset.c (and just
> declare the struct here). It looks like we might be able to do that with
> __KHASH_TYPE(), but the double-underscore implies that we're not
> supposed to. ;)
> 
> I guess we also use a few of them in our inlines here. I'm not 100% sure
> that oidset_* needs to be inlined either, but this is at least a pretty
> faithful conversion of the original.

We could inline all of the oidset functions, following the spirit of
klib/khash.h.

Or we could uninline all of them and then may be able to clean up
oidset.h by using KHASH_DECLARE.  Perhaps we'd need to guard with an
"#ifndef THIS_IS_OIDSET_C" or similar to avoid a clash with KHASH_INIT.

Not sure if any of that would be a worthwhile improvement...

René

* Re: [PATCH v2 2/2] oidset: use khash
  2018-10-04  5:56     ` René Scharfe
@ 2018-10-04  6:48       ` Jeff King
  2018-10-04  6:50         ` Jeff King
  2018-10-04 15:05         ` René Scharfe
  0 siblings, 2 replies; 26+ messages in thread
From: Jeff King @ 2018-10-04  6:48 UTC
  To: René Scharfe; +Cc: Git List, Junio C Hamano

On Thu, Oct 04, 2018 at 07:56:44AM +0200, René Scharfe wrote:

> > As the comment above notes, I think we're really looking at the case
> > where this gets populated on the first call, but not subsequent ones. It
> > might be less hacky to use a "static int initialized" here. Or if we
> > want to avoid hidden globals, put the logic into filter_refs() to decide
> > when to populate.
> 
> Right.  I'd prefer the latter, but was unable to find a nice way that
> still populates the oidset lazily.  It's certainly worth another look,
> and a separate series.

It's a little awkward because the lazy load happens in a conditional.
You can fully encapsulate it like the patch below, but I actually don't
think it's really helping readability.

> >> +KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
> > 
> > This will declare these "static inline". Our other major "macros become
> > inline functions" code is commit-slab.h, and there we found it necessary
> > to add MAYBE_UNUSED. I wonder if we ought to be doing the same here (I
> > don't get any warnings, but I suspect sparse might complain).
> 
> I doubt it (but didn't check) because khash.h defines kh_clear_##name(),
> which we don't use anywhere and there have been no complaints so far.
> And if we wanted to add MAYBE_UNUSED then the right place for that would
> be in KHASH_INIT, no?

Right, that would be the correct spot. I'm OK to leave it until somebody
complains. Looking at commit-slab again, its functions are straight
statics, not static inline. That's probably the important difference.

> > It might be nice if these functions could hide inside oidset.c (and just
> > declare the struct here). It looks like we might be able to do that with
> > __KHASH_TYPE(), but the double-underscore implies that we're not
> > supposed to. ;)
> > 
> > I guess we also use a few of them in our inlines here. I'm not 100% sure
> > that oidset_* needs to be inlined either, but this is at least a pretty
> > faithful conversion of the original.
> 
> We could inline all of the oidset functions, following the spirit of
> klib/khash.h.
> 
> Or we could uninline all of them and then may be able to clean up
> oidset.h by using KHASH_DECLARE.  Perhaps we'd need to guard with an
> "#ifndef THIS_IS_OIDSET_C" or similar to avoid a clash with KHASH_INIT.
> 
> Not sure if any of that would be a worthwhile improvement...

Unless we know something is a performance win to inline, I'd generally
prefer not to.

For a case like this with auto-generated functions, I'm mostly worried
about bloating the compiled code. Either with a bunch of inlined
non-trivial functions, or cases where the compiler says "this is too big
to inline" and generates an anonymous file-scope function, but we end up
with a bunch of duplicates, because we're generating the same functions
in a bunch of C files.

I may be worried about nothing, though. I don't know how clever the
compiler and linker can be there.

-Peff

* Re: [PATCH v2 2/2] oidset: use khash
  2018-10-04  6:48       ` Jeff King
@ 2018-10-04  6:50         ` Jeff King
  2018-10-04 15:05         ` René Scharfe
  1 sibling, 0 replies; 26+ messages in thread
From: Jeff King @ 2018-10-04  6:50 UTC
  To: René Scharfe; +Cc: Git List, Junio C Hamano

On Thu, Oct 04, 2018 at 02:48:33AM -0400, Jeff King wrote:

> On Thu, Oct 04, 2018 at 07:56:44AM +0200, René Scharfe wrote:
> 
> > > As the comment above notes, I think we're really looking at the case
> > > where this gets populated on the first call, but not subsequent ones. It
> > > might be less hacky to use a "static int initialized" here. Or if we
> > > want to avoid hidden globals, put the logic into filter_refs() to decide
> > > when to populate.
> > 
> > Right.  I'd prefer the latter, but was unable to find a nice way that
> > still populates the oidset lazily.  It's certainly worth another look,
> > and a separate series.
> 
> It's a little awkward because the lazy load happens in a conditional.
> You can fully encapsulate it like the patch below, but I actually don't
> think it's really helping readability.

I forgot the patch, of course. ;)

I'm not really proposing this, just illustrating one direction (that I
think is kind of ugly). Notably it doesn't get rid of the tricky comment
in tip_oids_contain(), because that is explaining why the single load
works even on a list we're still adding to.

diff --git a/fetch-pack.c b/fetch-pack.c
index a839315726..a6212c8758 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -526,8 +526,14 @@ static void add_refs_to_oidset(struct oidset *oids, struct ref *refs)
 		oidset_insert(oids, &refs->old_oid);
 }
 
-static int tip_oids_contain(struct oidset *tip_oids,
-			    struct ref *unmatched, struct ref *newlist,
+struct lazy_tip_oids {
+	int loaded;
+	struct oidset oids;
+	struct ref *unmatched;
+	struct ref *newlist;
+};
+
+static int tip_oids_contain(struct lazy_tip_oids *tip_oids,
 			    const struct object_id *id)
 {
 	/*
@@ -536,11 +542,12 @@ static int tip_oids_contain(struct oidset *tip_oids,
 	 * add to "newlist" between calls, the additions will always be for
 	 * oids that are already in the set.
 	 */
-	if (!tip_oids->set.n_buckets) {
-		add_refs_to_oidset(tip_oids, unmatched);
-		add_refs_to_oidset(tip_oids, newlist);
+	if (!tip_oids->loaded) {
+		add_refs_to_oidset(&tip_oids->oids, tip_oids->unmatched);
+		add_refs_to_oidset(&tip_oids->oids, tip_oids->newlist);
+		tip_oids->loaded = 1;
 	}
-	return oidset_contains(tip_oids, id);
+	return oidset_contains(&tip_oids->oids, id);
 }
 
 static void filter_refs(struct fetch_pack_args *args,
@@ -551,7 +558,7 @@ static void filter_refs(struct fetch_pack_args *args,
 	struct ref **newtail = &newlist;
 	struct ref *unmatched = NULL;
 	struct ref *ref, *next;
-	struct oidset tip_oids = OIDSET_INIT;
+	struct lazy_tip_oids tip_oids = { 0 };
 	int i;
 
 	i = 0;
@@ -589,6 +596,9 @@ static void filter_refs(struct fetch_pack_args *args,
 		}
 	}
 
+	tip_oids.unmatched = unmatched;
+	tip_oids.newlist = newlist;
+
 	/* Append unmatched requests to the list */
 	for (i = 0; i < nr_sought; i++) {
 		struct object_id oid;
@@ -604,8 +614,7 @@ static void filter_refs(struct fetch_pack_args *args,
 
 		if ((allow_unadvertised_object_request &
 		     (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
-		    tip_oids_contain(&tip_oids, unmatched, newlist,
-				     &ref->old_oid)) {
+		    tip_oids_contain(&tip_oids, &ref->old_oid)) {
 			ref->match_status = REF_MATCHED;
 			*newtail = copy_ref(ref);
 			newtail = &(*newtail)->next;
@@ -614,7 +623,7 @@ static void filter_refs(struct fetch_pack_args *args,
 		}
 	}
 
-	oidset_clear(&tip_oids);
+	oidset_clear(&tip_oids.oids);
 	for (ref = unmatched; ref; ref = next) {
 		next = ref->next;
 		free(ref);

* Re: [PATCH v2 2/2] oidset: use khash
  2018-10-04  6:48       ` Jeff King
  2018-10-04  6:50         ` Jeff King
@ 2018-10-04 15:05         ` René Scharfe
  1 sibling, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:05 UTC
  To: Jeff King; +Cc: Git List, Junio C Hamano

Am 04.10.2018 um 08:48 schrieb Jeff King:
> On Thu, Oct 04, 2018 at 07:56:44AM +0200, René Scharfe wrote:
> 
>>> As the comment above notes, I think we're really looking at the case
>>> where this gets populated on the first call, but not subsequent ones. It
>>> might be less hacky to use a "static int initialized" here. Or if we
>>> want to avoid hidden globals, put the logic into filter_refs() to decide
>>> when to populate.
>>
>> Right.  I'd prefer the latter, but was unable to find a nice way that
>> still populates the oidset lazily.  It's certainly worth another look,
>> and a separate series.
> 
> It's a little awkward because the lazy load happens in a conditional.
> You can fully encapsulate it like the patch below, but I actually don't
> think it's really helping readability.

*Shudder*

>>> It might be nice if these functions could hide inside oidset.c (and just
>>> declare the struct here). It looks like we might be able to do that with
>>> __KHASH_TYPE(), but the double-underscore implies that we're not
>>> supposed to. ;)
>>>
>>> I guess we also use a few of them in our inlines here. I'm not 100% sure
>>> that oidset_* needs to be inlined either, but this is at least a pretty
>>> faithful conversion of the original.
>>
>> We could inline all of the oidset functions, following the spirit of
>> klib/khash.h.
>>
>> Or we could uninline all of them and then may be able to clean up
>> oidset.h by using KHASH_DECLARE.  Perhaps we'd need to guard with an
>> "#ifndef THIS_IS_OIDSET_C" or similar to avoid a clash with KHASH_INIT.
>>
>> Not sure if any of that would be a worthwhile improvement..
> 
> Unless we know something is a performance win to inline, I'd generally
> prefer not to.

Agreed.

> For a case like this with auto-generated functions, I'm mostly worried
> about bloating the compiled code. Either with a bunch of inlined
> non-trivial functions, or cases where the compiler says "this is too big
> to inline" and generates an anonymous file-scope function, but we end up
> with a bunch of duplicates, because we're generating the same functions
> in a bunch of C files.

The _iter_ functions look harmless in this regard, as they only use small
functions that are not even type-specific.

oidset_init() would be better moved to oidset.c, as the code for resizing
is quite big.

René

* [PATCH v3 0/5] oidset: use khash
  2018-10-03 13:11 [PATCH v2 0/2] oidset: use khash René Scharfe
  2018-10-03 13:12 ` [PATCH v2 1/2] khash: factor out kh_release_* René Scharfe
  2018-10-03 13:16 ` [PATCH v2 2/2] oidset: use khash René Scharfe
@ 2018-10-04 15:05 ` René Scharfe
  2018-10-04 15:09   ` [PATCH v3 1/5] fetch-pack: factor out is_unmatched_ref() René Scharfe
                     ` (5 more replies)
  2 siblings, 6 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:05 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

Two new patches to avoid using oidset internals in fetch-pack:

  fetch-pack: factor out is_unmatched_ref()
  fetch-pack: load tip_oids eagerly iff needed

Unchanged patch:

  khash: factor out kh_release_*

Unchanged, except it doesn't touch fetch-pack anymore:

  oidset: use khash

A new patch, to reduce object text size:

  oidset: uninline oidset_init()

 fetch-pack.c | 49 +++++++++++++++++++++++--------------------------
 khash.h      |  9 +++++++--
 oidset.c     | 41 +++++++++++++++++++----------------------
 oidset.h     | 43 ++++++++++++++++++++++++++++++++-----------
 4 files changed, 81 insertions(+), 61 deletions(-)

-- 
2.19.0


* [PATCH v3 1/5] fetch-pack: factor out is_unmatched_ref()
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
@ 2018-10-04 15:09   ` René Scharfe
  2018-10-04 15:09   ` [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed René Scharfe
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:09 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

Move the code to determine if a request is unmatched to its own little
helper.  This allows us to reuse it in a subsequent patch.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
 fetch-pack.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index 75047a4b2a..3b317952f0 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -543,6 +543,16 @@ static int tip_oids_contain(struct oidset *tip_oids,
 	return oidset_contains(tip_oids, id);
 }
 
+static int is_unmatched_ref(const struct ref *ref)
+{
+	struct object_id oid;
+	const char *p;
+	return	ref->match_status == REF_NOT_MATCHED &&
+		!parse_oid_hex(ref->name, &oid, &p) &&
+		*p == '\0' &&
+		oideq(&oid, &ref->old_oid);
+}
+
 static void filter_refs(struct fetch_pack_args *args,
 			struct ref **refs,
 			struct ref **sought, int nr_sought)
@@ -591,15 +601,8 @@ static void filter_refs(struct fetch_pack_args *args,
 
 	/* Append unmatched requests to the list */
 	for (i = 0; i < nr_sought; i++) {
-		struct object_id oid;
-		const char *p;
-
 		ref = sought[i];
-		if (ref->match_status != REF_NOT_MATCHED)
-			continue;
-		if (parse_oid_hex(ref->name, &oid, &p) ||
-		    *p != '\0' ||
-		    !oideq(&oid, &ref->old_oid))
+		if (!is_unmatched_ref(ref))
 			continue;
 
 		if ((allow_unadvertised_object_request &
-- 
2.19.0

* [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
  2018-10-04 15:09   ` [PATCH v3 1/5] fetch-pack: factor out is_unmatched_ref() René Scharfe
@ 2018-10-04 15:09   ` René Scharfe
  2018-10-04 21:38     ` Jonathan Tan
  2018-10-04 22:07     ` Jeff King
  2018-10-04 15:10   ` [PATCH v3 3/5] khash: factor out kh_release_* René Scharfe
                     ` (3 subsequent siblings)
  5 siblings, 2 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:09 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

tip_oids_contain() lazily loads refs into an oidset at its first call.
It abuses the internal (sub)member .map.tablesize of that oidset to
check if it has done that already.

Determine if the oidset needs to be populated upfront and then do that
instead.  This duplicates a loop, but simplifies the existing one by
separating concerns between the two.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
 fetch-pack.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index 3b317952f0..53914563b5 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -526,23 +526,6 @@ static void add_refs_to_oidset(struct oidset *oids, struct ref *refs)
 		oidset_insert(oids, &refs->old_oid);
 }
 
-static int tip_oids_contain(struct oidset *tip_oids,
-			    struct ref *unmatched, struct ref *newlist,
-			    const struct object_id *id)
-{
-	/*
-	 * Note that this only looks at the ref lists the first time it's
-	 * called. This works out in filter_refs() because even though it may
-	 * add to "newlist" between calls, the additions will always be for
-	 * oids that are already in the set.
-	 */
-	if (!tip_oids->map.map.tablesize) {
-		add_refs_to_oidset(tip_oids, unmatched);
-		add_refs_to_oidset(tip_oids, newlist);
-	}
-	return oidset_contains(tip_oids, id);
-}
-
 static int is_unmatched_ref(const struct ref *ref)
 {
 	struct object_id oid;
@@ -563,6 +546,8 @@ static void filter_refs(struct fetch_pack_args *args,
 	struct ref *ref, *next;
 	struct oidset tip_oids = OIDSET_INIT;
 	int i;
+	int strict = !(allow_unadvertised_object_request &
+		       (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1));
 
 	i = 0;
 	for (ref = *refs; ref; ref = next) {
@@ -599,16 +584,25 @@ static void filter_refs(struct fetch_pack_args *args,
 		}
 	}
 
+	if (strict) {
+		for (i = 0; i < nr_sought; i++) {
+			ref = sought[i];
+			if (!is_unmatched_ref(ref))
+				continue;
+
+			add_refs_to_oidset(&tip_oids, unmatched);
+			add_refs_to_oidset(&tip_oids, newlist);
+			break;
+		}
+	}
+
 	/* Append unmatched requests to the list */
 	for (i = 0; i < nr_sought; i++) {
 		ref = sought[i];
 		if (!is_unmatched_ref(ref))
 			continue;
 
-		if ((allow_unadvertised_object_request &
-		     (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
-		    tip_oids_contain(&tip_oids, unmatched, newlist,
-				     &ref->old_oid)) {
+		if (!strict || oidset_contains(&tip_oids, &ref->old_oid)) {
 			ref->match_status = REF_MATCHED;
 			*newtail = copy_ref(ref);
 			newtail = &(*newtail)->next;
-- 
2.19.0

* [PATCH v3 3/5] khash: factor out kh_release_*
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
  2018-10-04 15:09   ` [PATCH v3 1/5] fetch-pack: factor out is_unmatched_ref() René Scharfe
  2018-10-04 15:09   ` [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed René Scharfe
@ 2018-10-04 15:10   ` René Scharfe
  2018-10-04 15:13   ` [PATCH v3 4/5] oidset: use khash René Scharfe
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:10 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

Add a function for releasing the khash-internal allocations, but not the
khash structure itself.  It can be used with on-stack khash structs.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
1 tab = 4 spaces here.

 khash.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/khash.h b/khash.h
index 07b4cc2e67..d10caa0c35 100644
--- a/khash.h
+++ b/khash.h
@@ -82,11 +82,16 @@ static const double __ac_HASH_UPPER = 0.77;
 	SCOPE kh_##name##_t *kh_init_##name(void) {							\
 		return (kh_##name##_t*)xcalloc(1, sizeof(kh_##name##_t));		\
 	}																	\
+	SCOPE void kh_release_##name(kh_##name##_t *h)						\
+	{																	\
+		free(h->flags);													\
+		free((void *)h->keys);											\
+		free((void *)h->vals);											\
+	}																	\
 	SCOPE void kh_destroy_##name(kh_##name##_t *h)						\
 	{																	\
 		if (h) {														\
-			free((void *)h->keys); free(h->flags);					\
-			free((void *)h->vals);										\
+			kh_release_##name(h);										\
 			free(h);													\
 		}																\
 	}																	\
-- 
2.19.0

* [PATCH v3 4/5] oidset: use khash
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
                     ` (2 preceding siblings ...)
  2018-10-04 15:10   ` [PATCH v3 3/5] khash: factor out kh_release_* René Scharfe
@ 2018-10-04 15:13   ` René Scharfe
  2018-10-04 15:14   ` [PATCH 5/5] oidset: uninline oidset_init() René Scharfe
  2018-10-04 22:15   ` [PATCH v3 0/5] oidset: use khash Jeff King
  5 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:13 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

Reimplement oidset using khash.h in order to reduce its memory footprint
and make it faster.

Performance of a command that mainly checks for duplicate objects using
an oidset, with master and Clang 6.0.1:

  $ cmd="./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'"

  $ /usr/bin/time $cmd >/dev/null
  0.22user 0.03system 0:00.25elapsed 99%CPU (0avgtext+0avgdata 48484maxresident)k
  0inputs+0outputs (0major+11204minor)pagefaults 0swaps

  $ hyperfine "$cmd"
  Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'

    Time (mean ± σ):     250.0 ms ±   6.0 ms    [User: 225.9 ms, System: 23.6 ms]

    Range (min … max):   242.0 ms … 261.1 ms

And with this patch:

  $ /usr/bin/time $cmd >/dev/null
  0.14user 0.00system 0:00.15elapsed 100%CPU (0avgtext+0avgdata 41396maxresident)k
  0inputs+0outputs (0major+8318minor)pagefaults 0swaps

  $ hyperfine "$cmd"
  Benchmark #1: ./git-cat-file --batch-all-objects --unordered --buffer --batch-check='%(objectname)'

    Time (mean ± σ):     151.9 ms ±   4.9 ms    [User: 130.5 ms, System: 21.2 ms]

    Range (min … max):   148.2 ms … 170.4 ms

Initial-patch-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
 oidset.c | 34 ++++++++++++----------------------
 oidset.h | 36 ++++++++++++++++++++++++++++--------
 2 files changed, 40 insertions(+), 30 deletions(-)

diff --git a/oidset.c b/oidset.c
index 454c54f933..9836d427ef 100644
--- a/oidset.c
+++ b/oidset.c
@@ -3,38 +3,28 @@
 
 int oidset_contains(const struct oidset *set, const struct object_id *oid)
 {
-	if (!set->map.map.tablesize)
-		return 0;
-	return !!oidmap_get(&set->map, oid);
+	khiter_t pos = kh_get_oid(&set->set, *oid);
+	return pos != kh_end(&set->set);
 }
 
 int oidset_insert(struct oidset *set, const struct object_id *oid)
 {
-	struct oidmap_entry *entry;
-
-	if (!set->map.map.tablesize)
-		oidmap_init(&set->map, 0);
-	else if (oidset_contains(set, oid))
-		return 1;
-
-	entry = xmalloc(sizeof(*entry));
-	oidcpy(&entry->oid, oid);
-
-	oidmap_put(&set->map, entry);
-	return 0;
+	int added;
+	kh_put_oid(&set->set, *oid, &added);
+	return !added;
 }
 
 int oidset_remove(struct oidset *set, const struct object_id *oid)
 {
-	struct oidmap_entry *entry;
-
-	entry = oidmap_remove(&set->map, oid);
-	free(entry);
-
-	return (entry != NULL);
+	khiter_t pos = kh_get_oid(&set->set, *oid);
+	if (pos == kh_end(&set->set))
+		return 0;
+	kh_del_oid(&set->set, pos);
+	return 1;
 }
 
 void oidset_clear(struct oidset *set)
 {
-	oidmap_free(&set->map, 1);
+	kh_release_oid(&set->set);
+	oidset_init(set, 0);
 }
diff --git a/oidset.h b/oidset.h
index 40ec5f87fe..4b90540cd4 100644
--- a/oidset.h
+++ b/oidset.h
@@ -1,7 +1,8 @@
 #ifndef OIDSET_H
 #define OIDSET_H
 
-#include "oidmap.h"
+#include "hashmap.h"
+#include "khash.h"
 
 /**
  * This API is similar to sha1-array, in that it maintains a set of object ids
@@ -15,19 +16,33 @@
  *      table overhead.
  */
 
+static inline unsigned int oid_hash(struct object_id oid)
+{
+	return sha1hash(oid.hash);
+}
+
+static inline int oid_equal(struct object_id a, struct object_id b)
+{
+	return oideq(&a, &b);
+}
+
+KHASH_INIT(oid, struct object_id, int, 0, oid_hash, oid_equal)
+
 /**
  * A single oidset; should be zero-initialized (or use OIDSET_INIT).
  */
 struct oidset {
-	struct oidmap map;
+	kh_oid_t set;
 };
 
-#define OIDSET_INIT { OIDMAP_INIT }
+#define OIDSET_INIT { { 0 } }
 
 
 static inline void oidset_init(struct oidset *set, size_t initial_size)
 {
-	oidmap_init(&set->map, initial_size);
+	memset(&set->set, 0, sizeof(set->set));
+	if (initial_size)
+		kh_resize_oid(&set->set, initial_size);
 }
 
 /**
@@ -58,19 +73,24 @@ int oidset_remove(struct oidset *set, const struct object_id *oid);
 void oidset_clear(struct oidset *set);
 
 struct oidset_iter {
-	struct oidmap_iter m_iter;
+	kh_oid_t *set;
+	khiter_t iter;
 };
 
 static inline void oidset_iter_init(struct oidset *set,
 				    struct oidset_iter *iter)
 {
-	oidmap_iter_init(&set->map, &iter->m_iter);
+	iter->set = &set->set;
+	iter->iter = kh_begin(iter->set);
 }
 
 static inline struct object_id *oidset_iter_next(struct oidset_iter *iter)
 {
-	struct oidmap_entry *e = oidmap_iter_next(&iter->m_iter);
-	return e ? &e->oid : NULL;
+	for (; iter->iter != kh_end(iter->set); iter->iter++) {
+		if (kh_exist(iter->set, iter->iter))
+			return &kh_key(iter->set, iter->iter++);
+	}
+	return NULL;
 }
 
 static inline struct object_id *oidset_iter_first(struct oidset *set,
-- 
2.19.0

* [PATCH 5/5] oidset: uninline oidset_init()
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
                     ` (3 preceding siblings ...)
  2018-10-04 15:13   ` [PATCH v3 4/5] oidset: use khash René Scharfe
@ 2018-10-04 15:14   ` René Scharfe
  2018-10-04 22:15   ` [PATCH v3 0/5] oidset: use khash Jeff King
  5 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-04 15:14 UTC
  To: Git List; +Cc: Junio C Hamano, Jeff King, Jonathan Tan

There is no need to inline oidset_init(), as it's typically only called
twice in the lifetime of an oidset (once at the beginning and once at the
end, by oidset_clear()), and kh_resize_* is quite big, so move its
definition to oidset.c.  Document it while we're at it.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
---
 oidset.c |  7 +++++++
 oidset.h | 13 +++++++------
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/oidset.c b/oidset.c
index 9836d427ef..fe4eb921df 100644
--- a/oidset.c
+++ b/oidset.c
@@ -1,6 +1,13 @@
 #include "cache.h"
 #include "oidset.h"
 
+void oidset_init(struct oidset *set, size_t initial_size)
+{
+	memset(&set->set, 0, sizeof(set->set));
+	if (initial_size)
+		kh_resize_oid(&set->set, initial_size);
+}
+
 int oidset_contains(const struct oidset *set, const struct object_id *oid)
 {
 	khiter_t pos = kh_get_oid(&set->set, *oid);
diff --git a/oidset.h b/oidset.h
index 4b90540cd4..c9d0f6d3cc 100644
--- a/oidset.h
+++ b/oidset.h
@@ -38,12 +38,13 @@ struct oidset {
 #define OIDSET_INIT { { 0 } }
 
 
-static inline void oidset_init(struct oidset *set, size_t initial_size)
-{
-	memset(&set->set, 0, sizeof(set->set));
-	if (initial_size)
-		kh_resize_oid(&set->set, initial_size);
-}
+/**
+ * Initialize the oidset structure `set`.
+ *
+ * If `initial_size` is bigger than 0 then preallocate to allow inserting
+ * the specified number of elements without further allocations.
+ */
+void oidset_init(struct oidset *set, size_t initial_size);
 
 /**
  * Returns true iff `set` contains `oid`.
-- 
2.19.0

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 15:09   ` [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed René Scharfe
@ 2018-10-04 21:38     ` Jonathan Tan
  2018-10-04 22:11       ` René Scharfe
  2018-10-04 22:14       ` Jeff King
  2018-10-04 22:07     ` Jeff King
  1 sibling, 2 replies; 26+ messages in thread
From: Jonathan Tan @ 2018-10-04 21:38 UTC
  To: l.s.r; +Cc: git, gitster, peff, jonathantanmy

> Determine if the oidset needs to be populated upfront and then do that
> instead.  This duplicates a loop, but simplifies the existing one by
> separating concerns between the two.

[snip]

> +	if (strict) {
> +		for (i = 0; i < nr_sought; i++) {
> +			ref = sought[i];
> +			if (!is_unmatched_ref(ref))
> +				continue;
> +
> +			add_refs_to_oidset(&tip_oids, unmatched);
> +			add_refs_to_oidset(&tip_oids, newlist);
> +			break;
> +		}
> +	}
> +
>  	/* Append unmatched requests to the list */
>  	for (i = 0; i < nr_sought; i++) {
>  		ref = sought[i];
>  		if (!is_unmatched_ref(ref))
>  			continue;
>  
> -		if ((allow_unadvertised_object_request &
> -		     (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
> -		    tip_oids_contain(&tip_oids, unmatched, newlist,
> -				     &ref->old_oid)) {
> +		if (!strict || oidset_contains(&tip_oids, &ref->old_oid)) {
>  			ref->match_status = REF_MATCHED;
>  			*newtail = copy_ref(ref);
>  			newtail = &(*newtail)->next;

I don't think the concerns are truly separated - the first loop quoted
needs to know that in the second loop, tip_oids is accessed only if
there is at least one unmatched ref. Would it be better to expose the
size of the oidset and then use it in place of
"tip_oids->map.map.tablesize"? Checking for initialization (using
"tablesize") is conceptually different from checking the size, but in
this code, both initialization and the first increase in size occur upon
the first oidset_insert(), so we should still get the same result.
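
Such an accessor could be a one-liner against the khash-backed oidset
from later in this series (name purely illustrative):

  static inline int oidset_count(const struct oidset *set)
  {
          return kh_size(&set->set);
  }

and the check would become "if (!oidset_count(tip_oids))".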

But if we do end up going with the approach in this patch, then this
patch is obviously correct.

I also looked at patch 1 of this patch set and it is obviously correct.
I didn't look at the other patches.

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 15:09   ` [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed René Scharfe
  2018-10-04 21:38     ` Jonathan Tan
@ 2018-10-04 22:07     ` Jeff King
  2018-10-05 20:13       ` René Scharfe
  1 sibling, 1 reply; 26+ messages in thread
From: Jeff King @ 2018-10-04 22:07 UTC
  To: René Scharfe; +Cc: Git List, Junio C Hamano, Jonathan Tan

On Thu, Oct 04, 2018 at 05:09:39PM +0200, René Scharfe wrote:

> tip_oids_contain() lazily loads refs into an oidset at its first call.
> It abuses the internal (sub)member .map.tablesize of that oidset to
> check if it has done that already.
> 
> Determine if the oidset needs to be populated upfront and then do that
> instead.  This duplicates a loop, but simplifies the existing one by
> separating concerns between the two.

I like this approach much better than what I showed earlier. But...

> diff --git a/fetch-pack.c b/fetch-pack.c
> index 3b317952f0..53914563b5 100644
> --- a/fetch-pack.c
> +++ b/fetch-pack.c
> @@ -526,23 +526,6 @@ static void add_refs_to_oidset(struct oidset *oids, struct ref *refs)
>  		oidset_insert(oids, &refs->old_oid);
>  }
>  
> -static int tip_oids_contain(struct oidset *tip_oids,
> -			    struct ref *unmatched, struct ref *newlist,
> -			    const struct object_id *id)
> -{
> -	/*
> -	 * Note that this only looks at the ref lists the first time it's
> -	 * called. This works out in filter_refs() because even though it may
> -	 * add to "newlist" between calls, the additions will always be for
> -	 * oids that are already in the set.
> -	 */

I don't think the subtle point this comment is making goes away. We're
still growing the list in the loop that calls tip_oids_contain() (and
which now calls just oidset_contains). That's OK for the reasons given
here, but I think that would need to be moved down to this code:

> +	if (strict) {
> +		for (i = 0; i < nr_sought; i++) {
> +			ref = sought[i];
> +			if (!is_unmatched_ref(ref))
> +				continue;
> +
> +			add_refs_to_oidset(&tip_oids, unmatched);
> +			add_refs_to_oidset(&tip_oids, newlist);
> +			break;
> +		}
> +	}

I.e., we need to say here why it's OK to summarize newlist in the
oidset, even though we're adding to it later.

-Peff

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 21:38     ` Jonathan Tan
@ 2018-10-04 22:11       ` René Scharfe
  2018-10-05 20:13         ` René Scharfe
  2018-10-04 22:14       ` Jeff King
  1 sibling, 1 reply; 26+ messages in thread
From: René Scharfe @ 2018-10-04 22:11 UTC
  To: Jonathan Tan; +Cc: git, gitster, peff

Am 04.10.2018 um 23:38 schrieb Jonathan Tan:
>> Determine if the oidset needs to be populated upfront and then do that
>> instead.  This duplicates a loop, but simplifies the existing one by
>> separating concerns between the two.
> 
> [snip]
> 
>> +	if (strict) {
>> +		for (i = 0; i < nr_sought; i++) {
>> +			ref = sought[i];
>> +			if (!is_unmatched_ref(ref))
>> +				continue;
>> +
>> +			add_refs_to_oidset(&tip_oids, unmatched);
>> +			add_refs_to_oidset(&tip_oids, newlist);
>> +			break;
>> +		}
>> +	}
>> +
>>  	/* Append unmatched requests to the list */
>>  	for (i = 0; i < nr_sought; i++) {
>>  		ref = sought[i];
>>  		if (!is_unmatched_ref(ref))
>>  			continue;
>>  
>> -		if ((allow_unadvertised_object_request &
>> -		     (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
>> -		    tip_oids_contain(&tip_oids, unmatched, newlist,
>> -				     &ref->old_oid)) {
>> +		if (!strict || oidset_contains(&tip_oids, &ref->old_oid)) {
>>  			ref->match_status = REF_MATCHED;
>>  			*newtail = copy_ref(ref);
>>  			newtail = &(*newtail)->next;
> 
> I don't think the concerns are truly separated - the first loop quoted
> needs to know that in the second loop, tip_oids is accessed only if
> there is at least one unmatched ref.

Right, the two loops are still closely related, but only the first one
is concerned with loading refs.

For a true separation we could first build a list of unmatched refs and
then loop through that instead of `sought`.  Not sure if that's better,
but maybe the overhead I imagine it would introduce isn't all that big.

> Would it be better to expose the
> size of the oidset and then use it in place of
> "tip_oids->map.map.tablesize"? Checking for initialization (using
> "tablesize") is conceptually different from checking the size, but in
> this code, both initialization and the first increase in size occur upon
> the first oidset_insert(), so we should still get the same result.

It would work in practice.  If there are no refs to load then it would
try to load those zero refs for each unmatched ref, which shouldn't
really be a problem, but I still find it a wee bit sloppy.  Too
theoretical?

René

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 21:38     ` Jonathan Tan
  2018-10-04 22:11       ` René Scharfe
@ 2018-10-04 22:14       ` Jeff King
  2018-10-04 22:52         ` Jonathan Tan
  1 sibling, 1 reply; 26+ messages in thread
From: Jeff King @ 2018-10-04 22:14 UTC
  To: Jonathan Tan; +Cc: l.s.r, git, gitster

On Thu, Oct 04, 2018 at 02:38:13PM -0700, Jonathan Tan wrote:

> > -		if ((allow_unadvertised_object_request &
> > -		     (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
> > -		    tip_oids_contain(&tip_oids, unmatched, newlist,
> > -				     &ref->old_oid)) {
> > +		if (!strict || oidset_contains(&tip_oids, &ref->old_oid)) {
> >  			ref->match_status = REF_MATCHED;
> >  			*newtail = copy_ref(ref);
> >  			newtail = &(*newtail)->next;
> 
> I don't think the concerns are truly separated - the first loop quoted
> needs to know that in the second loop, tip_oids is accessed only if
> there is at least one unmatched ref. Would it be better to expose the
> size of the oidset and then use it in place of
> "tip_oids->map.map.tablesize"? Checking for initialization (using
> "tablesize") is conceptually different from checking the size, but in
> this code, both initialization and the first increase in size occur upon
> the first oidset_insert(), so we should still get the same result.

Yes, I agree with you that the loops are still entwined. They're at
least now in a single function, though, which IMHO is a slight
improvement.

I agree with you that just checking:

  if (oidset_count() != 0)

would be fine, too. Or I am even OK with leaving the existing tablesize
check. It is a little intimate with the implementation details, but I
suspect that if oidset were to change (e.g., to initialize the buckets
immediately), the problem would be pretty apparent in the tests.

And in fact, we can test by just changing the conditional in
tip_oid_contains to if(0), which quite clearly fails t5500.60 (along
with others).

So I don't think it's the end of the world to leave it (but I also am
not opposed to the other options discussed).

-Peff

* Re: [PATCH v3 0/5] oidset: use khash
  2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
                     ` (4 preceding siblings ...)
  2018-10-04 15:14   ` [PATCH 5/5] oidset: uninline oidset_init() René Scharfe
@ 2018-10-04 22:15   ` Jeff King
  5 siblings, 0 replies; 26+ messages in thread
From: Jeff King @ 2018-10-04 22:15 UTC
  To: René Scharfe; +Cc: Git List, Junio C Hamano, Jonathan Tan

On Thu, Oct 04, 2018 at 05:05:37PM +0200, René Scharfe wrote:

> Two new patches to avoid using oidset internals in fetch-pack:
> 
>   fetch-pack: factor out is_unmatched_ref()
>   fetch-pack: load tip_oids eagerly iff needed
> 
> Unchanged patch:
> 
>   khash: factor out kh_release_*
> 
> Unchanged, except it doesn't touch fetch-pack anymore:
> 
>   oidset: use khash
> 
> A new patch, to reduce object text size:
> 
>   oidset: uninline oidset_init()

I left a few responses to the fetch-pack changes. The rest of it all
looks good to me. Certainly it's satisfying for the implementation-swap
in patch 4 to be able to touch _only_ oidset.[ch].

-Peff

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 22:14       ` Jeff King
@ 2018-10-04 22:52         ` Jonathan Tan
  2018-10-04 23:18           ` Jeff King
  0 siblings, 1 reply; 26+ messages in thread
From: Jonathan Tan @ 2018-10-04 22:52 UTC
  To: peff; +Cc: jonathantanmy, l.s.r, git, gitster

> Yes, I agree with you that the loops are still entwined. They're at
> least now in a single function, though, which IMHO is a slight
> improvement.

Hmm...originally, the main functionality was in a single loop in a
single function. But I say that because I consider the lazy loading in
tip_oids_contain() as something both peripheral and independent (if the
main loop's logic changes, the lazy loading most likely does not need to
be changed).

> I agree with you that just checking:
> 
>   if (oidset_count() != 0)
> 
> would be fine, too.

OK, we're agreed on this :-)

> Or I am even OK with leaving the existing tablesize
> check. It is a little intimate with the implementation details, but I
> suspect that if oidset were to change (e.g., to initialize the buckets
> immediately), the problem would be pretty apparent in the tests.

I am OK with this too, except that (as far as I can tell) the point of
this patch set is to replace the internals of oidset, so we no longer
have the tablesize check. Unless you meant the khash analog of
tablesize? I would be OK if all tablesize references are replaced with
the khash analog in the same patch that the oidset internals are
replaced.

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 22:52         ` Jonathan Tan
@ 2018-10-04 23:18           ` Jeff King
  0 siblings, 0 replies; 26+ messages in thread
From: Jeff King @ 2018-10-04 23:18 UTC
  To: Jonathan Tan; +Cc: l.s.r, git, gitster

On Thu, Oct 04, 2018 at 03:52:05PM -0700, Jonathan Tan wrote:

> > Or I am even OK with leaving the existing tablesize
> > check. It is a little intimate with the implementation details, but I
> > suspect that if oidset were to change (e.g., to initialize the buckets
> > immediately), the problem would be pretty apparent in the tests.
> 
> I am OK with this too, except that (as far as I can tell) the point of
> this patch set is to replace the internals of oidset, so we no longer
> have the tablesize check. Unless you meant the khash analog of
> tablesize? I would be OK if all tablesize references are replaced with
> the khash analog in the same patch that the oidset internals are
> replaced.

Yeah, in khash it's n_buckets, but it's basically the same thing. René's
original patch did that update, and we were musing on whether there was
a way to avoid crossing the module boundary so intimately. Hence the
patch you saw. :)

-Peff

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 22:11       ` René Scharfe
@ 2018-10-05 20:13         ` René Scharfe
  0 siblings, 0 replies; 26+ messages in thread
From: René Scharfe @ 2018-10-05 20:13 UTC
  To: Jonathan Tan; +Cc: git, gitster, peff

Am 05.10.2018 um 00:11 schrieb René Scharfe:
> Am 04.10.2018 um 23:38 schrieb Jonathan Tan:
>> I don't think the concerns are truly separated - the first loop quoted
>> needs to know that in the second loop, tip_oids is accessed only if
>> there is at least one unmatched ref.
> 
> Right, the two loops are still closely related, but only the first one
> is concerned with loading refs.
> 
> For a true separation we could first build a list of unmatched refs and
> then loop through that instead of `sought`.  Not sure if that's better,
> but maybe the overhead I imagine it would introduce isn't all that big.

Here's what I mean, on top of the other two patches:

---
 fetch-pack.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/fetch-pack.c b/fetch-pack.c
index 53914563b5..7f28584bce 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -543,6 +543,8 @@ static void filter_refs(struct fetch_pack_args *args,
 	struct ref *newlist = NULL;
 	struct ref **newtail = &newlist;
 	struct ref *unmatched = NULL;
+	struct ref **unfound;
+	int nr_unfound = 0;
 	struct ref *ref, *next;
 	struct oidset tip_oids = OIDSET_INIT;
 	int i;
@@ -584,23 +586,19 @@ static void filter_refs(struct fetch_pack_args *args,
 		}
 	}
 
-	if (strict) {
-		for (i = 0; i < nr_sought; i++) {
-			ref = sought[i];
-			if (!is_unmatched_ref(ref))
-				continue;
-
-			add_refs_to_oidset(&tip_oids, unmatched);
-			add_refs_to_oidset(&tip_oids, newlist);
-			break;
-		}
+	ALLOC_ARRAY(unfound, nr_sought);
+	for (i = 0; i < nr_sought; i++) {
+		if (is_unmatched_ref(sought[i]))
+			unfound[nr_unfound++] = sought[i];
+	}
+	if (strict && nr_unfound) {
+		add_refs_to_oidset(&tip_oids, unmatched);
+		add_refs_to_oidset(&tip_oids, newlist);
 	}
 
 	/* Append unmatched requests to the list */
-	for (i = 0; i < nr_sought; i++) {
-		ref = sought[i];
-		if (!is_unmatched_ref(ref))
-			continue;
+	for (i = 0; i < nr_unfound; i++) {
+		ref = unfound[i];
 
 		if (!strict || oidset_contains(&tip_oids, &ref->old_oid)) {
 			ref->match_status = REF_MATCHED;
@@ -611,6 +609,7 @@ static void filter_refs(struct fetch_pack_args *args,
 		}
 	}
 
+	free(unfound);
 	oidset_clear(&tip_oids);
 	for (ref = unmatched; ref; ref = next) {
 		next = ref->next;
-- 
2.19.0

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-04 22:07     ` Jeff King
@ 2018-10-05 20:13       ` René Scharfe
  2018-10-05 20:27         ` Jeff King
  0 siblings, 1 reply; 26+ messages in thread
From: René Scharfe @ 2018-10-05 20:13 UTC (permalink / raw)
  To: Jeff King; +Cc: Git List, Junio C Hamano, Jonathan Tan

Am 05.10.2018 um 00:07 schrieb Jeff King:
> On Thu, Oct 04, 2018 at 05:09:39PM +0200, René Scharfe wrote:
> 
>> tip_oids_contain() lazily loads refs into an oidset at its first call.
>> It abuses the internal (sub)member .map.tablesize of that oidset to
>> check if it has done that already.
>>
>> Determine if the oidset needs to be populated upfront and then do that
>> instead.  This duplicates a loop, but simplifies the existing one by
>> separating concerns between the two.
> 
> I like this approach much better than what I showed earlier. But...
> 
>> diff --git a/fetch-pack.c b/fetch-pack.c
>> index 3b317952f0..53914563b5 100644
>> --- a/fetch-pack.c
>> +++ b/fetch-pack.c
>> @@ -526,23 +526,6 @@ static void add_refs_to_oidset(struct oidset *oids, struct ref *refs)
>>  		oidset_insert(oids, &refs->old_oid);
>>  }
>>  
>> -static int tip_oids_contain(struct oidset *tip_oids,
>> -			    struct ref *unmatched, struct ref *newlist,
>> -			    const struct object_id *id)
>> -{
>> -	/*
>> -	 * Note that this only looks at the ref lists the first time it's
>> -	 * called. This works out in filter_refs() because even though it may
>> -	 * add to "newlist" between calls, the additions will always be for
>> -	 * oids that are already in the set.
>> -	 */
> 
> I don't think the subtle point this comment is making goes away. We're
> still growing the list in the loop that calls tip_oids_contain() (and
> which now calls just oidset_contains). That's OK for the reasons given
> here, but I think that would need to be moved down to this code:
> 
>> +	if (strict) {
>> +		for (i = 0; i < nr_sought; i++) {
>> +			ref = sought[i];
>> +			if (!is_unmatched_ref(ref))
>> +				continue;
>> +
>> +			add_refs_to_oidset(&tip_oids, unmatched);
>> +			add_refs_to_oidset(&tip_oids, newlist);
>> +			break;
>> +		}
>> +	}
> 
> I.e., we need to say here why it's OK to summarize newlist in the
> oidset, even though we're adding to it later.

There is already this comment:

	/* Append unmatched requests to the list */

And that's enough in my eyes.  The refs loop at the top splits the list
into matched ("the list") and unmatched, and the loop below said comment
adds a few more.  I see no subtlety left -- what do I miss?

René

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-05 20:13       ` René Scharfe
@ 2018-10-05 20:27         ` Jeff King
  2018-10-05 21:22           ` René Scharfe
  0 siblings, 1 reply; 26+ messages in thread
From: Jeff King @ 2018-10-05 20:27 UTC (permalink / raw)
  To: René Scharfe; +Cc: Git List, Junio C Hamano, Jonathan Tan

On Fri, Oct 05, 2018 at 10:13:34PM +0200, René Scharfe wrote:

> >> -{
> >> -	/*
> >> -	 * Note that this only looks at the ref lists the first time it's
> >> -	 * called. This works out in filter_refs() because even though it may
> >> -	 * add to "newlist" between calls, the additions will always be for
> >> -	 * oids that are already in the set.
> >> -	 */
> > 
> > I don't think the subtle point this comment is making goes away. We're
> > still growing the list in the loop that calls tip_oids_contain() (and
> > which now calls just oidset_contains). That's OK for the reasons given
> > here, but I think that would need to be moved down to this code:
> > 
> >> +	if (strict) {
> >> +		for (i = 0; i < nr_sought; i++) {
> >> +			ref = sought[i];
> >> +			if (!is_unmatched_ref(ref))
> >> +				continue;
> >> +
> >> +			add_refs_to_oidset(&tip_oids, unmatched);
> >> +			add_refs_to_oidset(&tip_oids, newlist);
> >> +			break;
> >> +		}
> >> +	}
> > 
> > I.e., we need to say here why it's OK to summarize newlist in the
> > oidset, even though we're adding to it later.
> 
> There is already this comment:
> 
> 	/* Append unmatched requests to the list */
> 
> And that's enough in my eyes.  The refs loop at the top splits the list
> into matched ("the list") and unmatched, and the loop below said comment
> adds a few more.  I see no subtlety left -- what do I miss?

It looks like tip_oids is meant as a fast lookup into what's in
unmatched and newlist. But in the second loop we continue appending to
newlist. Why is it OK that we do not update tip_oids when we do so?

I.e., something like this explains it:

diff --git a/fetch-pack.c b/fetch-pack.c
index 53914563b5..c0a1b80f4c 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -606,6 +606,12 @@ static void filter_refs(struct fetch_pack_args *args,
 			ref->match_status = REF_MATCHED;
 			*newtail = copy_ref(ref);
 			newtail = &(*newtail)->next;
+			/*
+			 * No need to update tip_oids with ref->old_oid; we got
+			 * here because either it was already there, or we are
+			 * in !strict mode, in which case we do not use
+			 * tip_oids at all.
+			 */
 		} else {
 			ref->match_status = REF_UNADVERTISED_NOT_ALLOWED;
 		}

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-05 20:27         ` Jeff King
@ 2018-10-05 21:22           ` René Scharfe
  2018-10-05 21:47             ` Jeff King
  0 siblings, 1 reply; 26+ messages in thread
From: René Scharfe @ 2018-10-05 21:22 UTC (permalink / raw)
  To: Jeff King; +Cc: Git List, Junio C Hamano, Jonathan Tan

Am 05.10.2018 um 22:27 schrieb Jeff King:
> On Fri, Oct 05, 2018 at 10:13:34PM +0200, René Scharfe wrote:
> 
>>>> -{
>>>> -	/*
>>>> -	 * Note that this only looks at the ref lists the first time it's
>>>> -	 * called. This works out in filter_refs() because even though it may
>>>> -	 * add to "newlist" between calls, the additions will always be for
>>>> -	 * oids that are already in the set.
>>>> -	 */
>>>
>>> I don't think the subtle point this comment is making goes away. We're
>>> still growing the list in the loop that calls tip_oids_contain() (and
>>> which now calls just oidset_contains). That's OK for the reasons given
>>> here, but I think that would need to be moved down to this code:
>>>
>>>> +	if (strict) {
>>>> +		for (i = 0; i < nr_sought; i++) {
>>>> +			ref = sought[i];
>>>> +			if (!is_unmatched_ref(ref))
>>>> +				continue;
>>>> +
>>>> +			add_refs_to_oidset(&tip_oids, unmatched);
>>>> +			add_refs_to_oidset(&tip_oids, newlist);
>>>> +			break;
>>>> +		}
>>>> +	}
>>>
>>> I.e., we need to say here why it's OK to summarize newlist in the
>>> oidset, even though we're adding to it later.
>>
>> There is already this comment:
>>
>> 	/* Append unmatched requests to the list */
>>
>> And that's enough in my eyes.  The refs loop at the top splits the list
>> into matched ("the list") and unmatched, and the loop below said comment
>> adds a few more.  I see no subtlety left -- what do I miss?
> 
> It looks like tip_oids is meant as a fast lookup into what's in
> unmatched and newlist. But in the second loop we continue appending to
> newlist. Why is it OK that we do not update tip_oids when we do so?

`tip_oids` contains the object_ids of all the `refs` passed to
filter_refs().  Instead of adding them at the top of the function,
that is done only once it has become clear that there are unmatched
ones, as an optimization.  (That optimization was implemented by
lazy-loading in tip_oids_contain() earlier.)  At that point the list
has been split into `newlist` and `unmatched`, so we load from them
instead of `refs`.
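
To make the two placements concrete, here is a minimal sketch (the
eager variant is hypothetical; the deferred one is lifted from the
`unfound` patch in the other reply, whose nr_unfound counts the
unmatched entries):

	/* eager: populate the set at the top of filter_refs() */
	if (strict)
		add_refs_to_oidset(&tip_oids, refs);

	/* deferred: populate only once an unmatched ref is known to
	 * exist; by then `refs` has been split into `newlist` and
	 * `unmatched` */
	if (strict && nr_unfound) {
		add_refs_to_oidset(&tip_oids, unmatched);
		add_refs_to_oidset(&tip_oids, newlist);
	}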

> 
> I.e., something like this explains it:
> 
> diff --git a/fetch-pack.c b/fetch-pack.c
> index 53914563b5..c0a1b80f4c 100644
> --- a/fetch-pack.c
> +++ b/fetch-pack.c
> @@ -606,6 +606,12 @@ static void filter_refs(struct fetch_pack_args *args,
>  			ref->match_status = REF_MATCHED;
>  			*newtail = copy_ref(ref);
>  			newtail = &(*newtail)->next;
> +			/*
> +			 * No need to update tip_oids with ref->old_oid; we got
> +			 * here because either it was already there, or we are
> +			 * in !strict mode, in which case we do not use
> +			 * tip_oids at all.
> +			 */
>  		} else {
>  			ref->match_status = REF_UNADVERTISED_NOT_ALLOWED;
>  		}

This comment puzzles me -- why ask the question it answers?
`tip_oids` has been loaded with all `refs` at that point; adding
more seems odd.

I feel the code needs to be simplified further; not sure how, though,
except perhaps by using the `unfound` array added in another reply.

René

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed
  2018-10-05 21:22           ` René Scharfe
@ 2018-10-05 21:47             ` Jeff King
  0 siblings, 0 replies; 26+ messages in thread
From: Jeff King @ 2018-10-05 21:47 UTC (permalink / raw)
  To: René Scharfe; +Cc: Git List, Junio C Hamano, Jonathan Tan

On Fri, Oct 05, 2018 at 11:22:31PM +0200, René Scharfe wrote:

> > diff --git a/fetch-pack.c b/fetch-pack.c
> > index 53914563b5..c0a1b80f4c 100644
> > --- a/fetch-pack.c
> > +++ b/fetch-pack.c
> > @@ -606,6 +606,12 @@ static void filter_refs(struct fetch_pack_args *args,
> >  			ref->match_status = REF_MATCHED;
> >  			*newtail = copy_ref(ref);
> >  			newtail = &(*newtail)->next;
> > +			/*
> > +			 * No need to update tip_oids with ref->old_oid; we got
> > +			 * here because either it was already there, or we are
> > +			 * in !strict mode, in which case we do not use
> > +			 * tip_oids at all.
> > +			 */
> >  		} else {
> >  			ref->match_status = REF_UNADVERTISED_NOT_ALLOWED;
> >  		}
> 
> This comment puzzles me -- why ask the question it answers?
> `tip_oids` has been loaded with all `refs` at that point; adding
> more seems odd.

If you think that tip_oids is a fast lookup for what's in newlist, then
I think it is a reasonable question to ask whether new additions to
newlist need the same treatment. That was what the comment in the
original lazy-load was answering.

> I feel the code needs to be simplified further; not sure how, though,
> except perhaps by using the `unfound` array added in another reply.

I agree it's not the most clear code in the world, but we may be
reaching a point of diminishing returns in discussing it further.

-Peff

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2018-10-05 21:47 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-03 13:11 [PATCH v2 0/2] oidset: use khash René Scharfe
2018-10-03 13:12 ` [PATCH v2 1/2] khash: factor out kh_release_* René Scharfe
2018-10-03 13:16 ` [PATCH v2 2/2] oidset: use khash René Scharfe
2018-10-03 19:40   ` Jeff King
2018-10-04  5:56     ` René Scharfe
2018-10-04  6:48       ` Jeff King
2018-10-04  6:50         ` Jeff King
2018-10-04 15:05         ` René Scharfe
2018-10-04 15:05 ` [PATCH v3 0/5] " René Scharfe
2018-10-04 15:09   ` [PATCH v3 1/5] fetch-pack: factor out is_unmatched_ref() René Scharfe
2018-10-04 15:09   ` [PATCH v3 2/5] fetch-pack: load tip_oids eagerly iff needed René Scharfe
2018-10-04 21:38     ` Jonathan Tan
2018-10-04 22:11       ` René Scharfe
2018-10-05 20:13         ` René Scharfe
2018-10-04 22:14       ` Jeff King
2018-10-04 22:52         ` Jonathan Tan
2018-10-04 23:18           ` Jeff King
2018-10-04 22:07     ` Jeff King
2018-10-05 20:13       ` René Scharfe
2018-10-05 20:27         ` Jeff King
2018-10-05 21:22           ` René Scharfe
2018-10-05 21:47             ` Jeff King
2018-10-04 15:10   ` [PATCH v3 3/5] khash: factor out kh_release_* René Scharfe
2018-10-04 15:13   ` [PATCH v3 4/5] oidset: use khash René Scharfe
2018-10-04 15:14   ` [PATCH 5/5] oidset: uninline oidset_init() René Scharfe
2018-10-04 22:15   ` [PATCH v3 0/5] oidset: use khash Jeff King

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).