* [PATCH net 0/3] rhashtable updates
@ 2015-02-19 23:53 Daniel Borkmann
  2015-02-19 23:53 ` [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete Daniel Borkmann
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-19 23:53 UTC (permalink / raw)
  To: davem; +Cc: tgraf, johunt, netdev, Daniel Borkmann

Daniel Borkmann (3):
  rhashtable: don't test for shrink on insert, expansion on delete
  rhashtable: better high order allocation attempts
  rhashtable: allow to unload test module

 lib/rhashtable.c      | 33 +++++++++++++++++++++------------
 lib/test_rhashtable.c |  5 +++++
 2 files changed, 26 insertions(+), 12 deletions(-)

-- 
1.9.3

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete
  2015-02-19 23:53 [PATCH net 0/3] rhashtable updates Daniel Borkmann
@ 2015-02-19 23:53 ` Daniel Borkmann
  2015-02-20 10:08   ` David Laight
  2015-02-20 11:59   ` Thomas Graf
  2015-02-19 23:53 ` [PATCH net 2/3] rhashtable: better high order allocation attempts Daniel Borkmann
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-19 23:53 UTC (permalink / raw)
  To: davem; +Cc: tgraf, johunt, netdev, Daniel Borkmann, Ying Xue

Restore the pre-54c5b7d311c8 behaviour and only probe for expansions on
inserts and for shrinks on deletes. Currently, on initial inserts into a
sparse hash table, we may shrink it first simply because it is not fully
populated yet, only to later realize that we need to grow again.

This, however, is counter-intuitive: an initial default size of 64
elements is already small enough, and when an element count hint is given
to the hash table by a user, we should avoid unnecessary expansion steps,
so a shrink is clearly unintended here.
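To illustrate the cost, here is a hedged userspace simulation of the two
probing policies. The 75%/30% watermarks mirror rhashtable's grow/shrink
heuristics of the time, but the helper names and the simple double/halve
resize model are illustrative, not the kernel code:

```c
#include <stdbool.h>

/* Illustrative watermarks: grow above 75% utilization, shrink below
 * 30%, never below a 64-bucket floor. Sketch only, not kernel code. */
static bool grow_decision(unsigned int nelems, unsigned int size)
{
	return nelems > size * 3 / 4;
}

static bool shrink_decision(unsigned int nelems, unsigned int size,
			    unsigned int min_size)
{
	return size > min_size && nelems < size * 3 / 10;
}

/* Fill a table created with a 1024-bucket size hint and count how many
 * resizes get scheduled, with and without probing for shrinks on every
 * insert. */
static unsigned int count_resizes(unsigned int inserts,
				  bool probe_shrink_on_insert)
{
	unsigned int size = 1024, min_size = 64, nelems = 0, resizes = 0;

	for (unsigned int i = 0; i < inserts; i++) {
		nelems++;
		if (grow_decision(nelems, size)) {
			size *= 2;
			resizes++;
		} else if (probe_shrink_on_insert &&
			   shrink_decision(nelems, size, min_size)) {
			size /= 2;
			resizes++;
		}
	}
	return resizes;
}
```

Under this model, filling the 1024-bucket table with 768 elements causes
no resizes when only grow is probed on insert, but eight resizes (four
spurious shrinks followed by four compensating grows) when shrink is
probed on insert as well.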

Fixes: 54c5b7d311c8 ("rhashtable: introduce rhashtable_wakeup_worker helper function")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Ying Xue <ying.xue@windriver.com>
---
 lib/rhashtable.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 9cc4c4a..38f7879 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -537,16 +537,25 @@ unlock:
 	mutex_unlock(&ht->mutex);
 }
 
-static void rhashtable_wakeup_worker(struct rhashtable *ht)
+static void rhashtable_probe_expand(struct rhashtable *ht)
 {
-	struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
-	struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht);
-	size_t size = tbl->size;
+	const struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht);
+	const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
 
 	/* Only adjust the table if no resizing is currently in progress. */
-	if (tbl == new_tbl &&
-	    ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) ||
-	     (ht->p.shrink_decision && ht->p.shrink_decision(ht, size))))
+	if (tbl == new_tbl && ht->p.grow_decision &&
+	    ht->p.grow_decision(ht, tbl->size))
+		schedule_work(&ht->run_work);
+}
+
+static void rhashtable_probe_shrink(struct rhashtable *ht)
+{
+	const struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht);
+	const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
+
+	/* Only adjust the table if no resizing is currently in progress. */
+	if (tbl == new_tbl && ht->p.shrink_decision &&
+	    ht->p.shrink_decision(ht, tbl->size))
 		schedule_work(&ht->run_work);
 }
 
@@ -569,7 +578,7 @@ static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 
 	atomic_inc(&ht->nelems);
 
-	rhashtable_wakeup_worker(ht);
+	rhashtable_probe_expand(ht);
 }
 
 /**
@@ -682,7 +691,7 @@ found:
 
 	if (ret) {
 		atomic_dec(&ht->nelems);
-		rhashtable_wakeup_worker(ht);
+		rhashtable_probe_shrink(ht);
 	}
 
 	rcu_read_unlock();
-- 
1.9.3


* [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-19 23:53 [PATCH net 0/3] rhashtable updates Daniel Borkmann
  2015-02-19 23:53 ` [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete Daniel Borkmann
@ 2015-02-19 23:53 ` Daniel Borkmann
  2015-02-20 10:11   ` David Laight
  2015-02-20 12:01   ` Thomas Graf
  2015-02-19 23:53 ` [PATCH net 3/3] rhashtable: allow to unload test module Daniel Borkmann
  2015-02-20 22:38 ` [PATCH net 0/3] rhashtable updates David Miller
  3 siblings, 2 replies; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-19 23:53 UTC (permalink / raw)
  To: davem; +Cc: tgraf, johunt, netdev, Daniel Borkmann

When trying to allocate future tables via bucket_table_alloc(), it is
overkill for large table shifts to probe with kzalloc() unconditionally
first, as it is likely to fail.

Only probe with kzalloc() for more reasonable table sizes, and use
vzalloc() either as a fallback on failure or directly in the case of
large table sizes.
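For reference, with 4 KiB pages and a PAGE_ALLOC_COSTLY_ORDER of 3
(typical x86-64 defaults; other configurations differ), a userspace
sketch of the cut-off this patch applies:

```c
#include <stdbool.h>
#include <stddef.h>

/* Model constants for illustration; the real values depend on the
 * architecture and kernel configuration. */
#define MODEL_PAGE_SIZE    4096UL
#define MODEL_COSTLY_ORDER 3

/* Attempt the physically contiguous kzalloc() path only when the
 * request does not exceed the "costly" allocation threshold of
 * 8 contiguous pages (32 KiB); larger requests go straight to
 * vzalloc(). */
static bool try_kzalloc_first(size_t size)
{
	return size <= (MODEL_PAGE_SIZE << MODEL_COSTLY_ORDER);
}
```

With these model constants the threshold is 32 KiB, i.e. roughly a
4096-bucket table of 8-byte pointers plus header.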

Fixes: 7e1e77636e36 ("lib: Resizable, Scalable, Concurrent Hash Table")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 lib/rhashtable.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 38f7879..b41a5c0 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -217,15 +217,15 @@ static void bucket_table_free(const struct bucket_table *tbl)
 static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 					       size_t nbuckets)
 {
-	struct bucket_table *tbl;
+	struct bucket_table *tbl = NULL;
 	size_t size;
 	int i;
 
 	size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);
-	tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
+	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
+		tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
 	if (tbl == NULL)
 		tbl = vzalloc(size);
-
 	if (tbl == NULL)
 		return NULL;
 
-- 
1.9.3


* [PATCH net 3/3] rhashtable: allow to unload test module
  2015-02-19 23:53 [PATCH net 0/3] rhashtable updates Daniel Borkmann
  2015-02-19 23:53 ` [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete Daniel Borkmann
  2015-02-19 23:53 ` [PATCH net 2/3] rhashtable: better high order allocation attempts Daniel Borkmann
@ 2015-02-19 23:53 ` Daniel Borkmann
  2015-02-20 12:01   ` Thomas Graf
  2015-02-20 22:38 ` [PATCH net 0/3] rhashtable updates David Miller
  3 siblings, 1 reply; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-19 23:53 UTC (permalink / raw)
  To: davem; +Cc: tgraf, johunt, netdev, Daniel Borkmann

There's no good reason to disallow unloading of the rhashtable
test case module.

Commit 9d6dbe1bbaf8 moved the code from a boot test into a stand-alone
module, but only converted the subsys_initcall() handler into a
module_init() function without a corresponding exit handler, thus
preventing the test module from being unloaded.

Fixes: 9d6dbe1bbaf8 ("rhashtable: Make selftest modular")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 lib/test_rhashtable.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
index 1dfeba7..9c5fcce 100644
--- a/lib/test_rhashtable.c
+++ b/lib/test_rhashtable.c
@@ -222,6 +222,11 @@ static int __init test_rht_init(void)
 	return err;
 }
 
+static void __exit test_rht_exit(void)
+{
+}
+
 module_init(test_rht_init);
+module_exit(test_rht_exit);
 
 MODULE_LICENSE("GPL v2");
-- 
1.9.3


* RE: [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete
  2015-02-19 23:53 ` [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete Daniel Borkmann
@ 2015-02-20 10:08   ` David Laight
  2015-02-20 10:13     ` Daniel Borkmann
  2015-02-20 11:59   ` Thomas Graf
  1 sibling, 1 reply; 18+ messages in thread
From: David Laight @ 2015-02-20 10:08 UTC (permalink / raw)
  To: 'Daniel Borkmann', davem; +Cc: tgraf, johunt, netdev, Ying Xue

From: Daniel Borkmann
> Restore the pre-54c5b7d311c8 behaviour and only probe for expansions on
> inserts and for shrinks on deletes. Currently, on initial inserts into a
> sparse hash table, we may shrink it first simply because it is not fully
> populated yet, only to later realize that we need to grow again.
> 
> This, however, is counter-intuitive: an initial default size of 64
> elements is already small enough, and when an element count hint is given
> to the hash table by a user, we should avoid unnecessary expansion steps,
> so a shrink is clearly unintended here.

Does it actually make sense to shrink below the initial default size?

	David


* RE: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-19 23:53 ` [PATCH net 2/3] rhashtable: better high order allocation attempts Daniel Borkmann
@ 2015-02-20 10:11   ` David Laight
  2015-02-20 10:23     ` Daniel Borkmann
  2015-02-20 13:46     ` Eric Dumazet
  2015-02-20 12:01   ` Thomas Graf
  1 sibling, 2 replies; 18+ messages in thread
From: David Laight @ 2015-02-20 10:11 UTC (permalink / raw)
  To: 'Daniel Borkmann', davem; +Cc: tgraf, johunt, netdev

From: Daniel Borkmann
> When trying to allocate future tables via bucket_table_alloc(), it is
> overkill for large table shifts to probe with kzalloc() unconditionally
> first, as it is likely to fail.

How about a two-level array for large tables?
Then you don't need to allocate more than 1 page at a time?

	David


* Re: [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete
  2015-02-20 10:08   ` David Laight
@ 2015-02-20 10:13     ` Daniel Borkmann
  2015-02-20 11:56       ` tgraf
  0 siblings, 1 reply; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-20 10:13 UTC (permalink / raw)
  To: David Laight, davem; +Cc: tgraf, johunt, netdev, Ying Xue

On 02/20/2015 11:08 AM, David Laight wrote:
...
> Does it actually make sense to shrink below the initial default size?

rhashtable has a min_shift parameter; shrinks cannot go below that.
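A hedged sketch of that floor (the 30% watermark mirrors rhashtable's
shrink heuristic of the time; the helper name is illustrative, not the
kernel's):

```c
#include <stdbool.h>

/* Illustrative userspace model of the shrink guard: a table created
 * with min_shift N never shrinks below 1 << N buckets, regardless of
 * how sparse it is. */
static bool may_shrink(unsigned int nelems, unsigned int size,
		       unsigned int min_shift)
{
	return size > (1U << min_shift) && nelems < size * 3 / 10;
}
```

With min_shift = 6, a 64-bucket table is never shrunk, however few
elements it holds.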


* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 10:11   ` David Laight
@ 2015-02-20 10:23     ` Daniel Borkmann
  2015-02-20 13:46     ` Eric Dumazet
  1 sibling, 0 replies; 18+ messages in thread
From: Daniel Borkmann @ 2015-02-20 10:23 UTC (permalink / raw)
  To: David Laight, davem; +Cc: tgraf, johunt, netdev

On 02/20/2015 11:11 AM, David Laight wrote:
...
> How about a two-level array for large tables?
> Then you don't need to allocate more than 1 page at a time?

Sorry, I currently don't see how this fits into the rhashtable
algorithm, i.e. with regard to the expansion and shrink logic.


* Re: [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete
  2015-02-20 10:13     ` Daniel Borkmann
@ 2015-02-20 11:56       ` tgraf
  0 siblings, 0 replies; 18+ messages in thread
From: tgraf @ 2015-02-20 11:56 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: David Laight, davem, johunt, netdev, Ying Xue

On 02/20/15 at 11:13am, Daniel Borkmann wrote:
> On 02/20/2015 11:08 AM, David Laight wrote:
> ...
> >Does it actually make sense to shrink below the initial default size?
> 
> rhashtable has a min_shift parameter, shrinks cannot go below that.

Right, it's up to the user whether to set min_shift according to the
nelems hint. I see use cases for both behaviours.


* Re: [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete
  2015-02-19 23:53 ` [PATCH net 1/3] rhashtable: don't test for shrink on insert, expansion on delete Daniel Borkmann
  2015-02-20 10:08   ` David Laight
@ 2015-02-20 11:59   ` Thomas Graf
  1 sibling, 0 replies; 18+ messages in thread
From: Thomas Graf @ 2015-02-20 11:59 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: davem, johunt, netdev, Ying Xue

On 02/20/15 at 12:53am, Daniel Borkmann wrote:
> Restore the pre-54c5b7d311c8 behaviour and only probe for expansions on
> inserts and for shrinks on deletes. Currently, on initial inserts into a
> sparse hash table, we may shrink it first simply because it is not fully
> populated yet, only to later realize that we need to grow again.
> 
> This, however, is counter-intuitive: an initial default size of 64
> elements is already small enough, and when an element count hint is given
> to the hash table by a user, we should avoid unnecessary expansion steps,
> so a shrink is clearly unintended here.
> 
> Fixes: 54c5b7d311c8 ("rhashtable: introduce rhashtable_wakeup_worker helper function")
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Ying Xue <ying.xue@windriver.com>

Acked-by: Thomas Graf <tgraf@suug.ch>


* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-19 23:53 ` [PATCH net 2/3] rhashtable: better high order allocation attempts Daniel Borkmann
  2015-02-20 10:11   ` David Laight
@ 2015-02-20 12:01   ` Thomas Graf
  1 sibling, 0 replies; 18+ messages in thread
From: Thomas Graf @ 2015-02-20 12:01 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: davem, johunt, netdev

On 02/20/15 at 12:53am, Daniel Borkmann wrote:
> When trying to allocate future tables via bucket_table_alloc(), it is
> overkill for large table shifts to probe with kzalloc() unconditionally
> first, as it is likely to fail.
> 
> Only probe with kzalloc() for more reasonable table sizes, and use
> vzalloc() either as a fallback on failure or directly in the case of
> large table sizes.
> 
> Fixes: 7e1e77636e36 ("lib: Resizable, Scalable, Concurrent Hash Table")
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Acked-by: Thomas Graf <tgraf@suug.ch>


* Re: [PATCH net 3/3] rhashtable: allow to unload test module
  2015-02-19 23:53 ` [PATCH net 3/3] rhashtable: allow to unload test module Daniel Borkmann
@ 2015-02-20 12:01   ` Thomas Graf
  0 siblings, 0 replies; 18+ messages in thread
From: Thomas Graf @ 2015-02-20 12:01 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: davem, johunt, netdev

On 02/20/15 at 12:53am, Daniel Borkmann wrote:
> There's no good reason to disallow unloading of the rhashtable
> test case module.
> 
> Commit 9d6dbe1bbaf8 moved the code from a boot test into a stand-alone
> module, but only converted the subsys_initcall() handler into a
> module_init() function without a corresponding exit handler, thus
> preventing the test module from being unloaded.
> 
> Fixes: 9d6dbe1bbaf8 ("rhashtable: Make selftest modular")
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

Acked-by: Thomas Graf <tgraf@suug.ch>


* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 10:11   ` David Laight
  2015-02-20 10:23     ` Daniel Borkmann
@ 2015-02-20 13:46     ` Eric Dumazet
  2015-02-20 14:27       ` David Laight
  1 sibling, 1 reply; 18+ messages in thread
From: Eric Dumazet @ 2015-02-20 13:46 UTC (permalink / raw)
  To: David Laight; +Cc: 'Daniel Borkmann', davem, tgraf, johunt, netdev

On Fri, 2015-02-20 at 10:11 +0000, David Laight wrote:
> From: Daniel Borkmann
> > When trying to allocate future tables via bucket_table_alloc(), it is
> > overkill for large table shifts to probe with kzalloc() unconditionally
> > first, as it is likely to fail.
> 
> How about a two-level array for large tables?
> Then you don't need to allocate more than 1 page at a time?

This is called vmalloc() in the Linux kernel.


* RE: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 13:46     ` Eric Dumazet
@ 2015-02-20 14:27       ` David Laight
  2015-02-20 14:31         ` tgraf
  2015-02-20 14:54         ` Eric Dumazet
  0 siblings, 2 replies; 18+ messages in thread
From: David Laight @ 2015-02-20 14:27 UTC (permalink / raw)
  To: 'Eric Dumazet'
  Cc: 'Daniel Borkmann', davem, tgraf, johunt, netdev

From: Eric Dumazet
> On Fri, 2015-02-20 at 10:11 +0000, David Laight wrote:
> > From: Daniel Borkmann
> > > When trying to allocate future tables via bucket_table_alloc(), it is
> > > overkill for large table shifts to probe with kzalloc() unconditionally
> > > first, as it is likely to fail.
> >
> > How about a two-level array for large tables?
> > Then you don't need to allocate more than 1 page at a time?
> 
> This is called vmalloc() in the Linux kernel.

vmalloc() still requires contiguous KVA, just not contiguous physical memory.
I also believe that (on some systems at least) the address space for
vmalloc() is much smaller than that for kmalloc().

IIRC, at least one piece of historic documentation says that vmalloc() should
only be used for short-term allocations.

I presume this code is allocating very large arrays for the base of the hash lists.
Since there is no absolute requirement for contiguous KVA (nothing
sequentially accesses the entire array) it can be coded differently.

I realize that this would involve an extra data cache line access.
So you'd want to avoid it for small tables.
(I expected this to be your objection)
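A hedged userspace sketch of the two-level layout being suggested (the
names, the one-page leaf size, and the missing error-path cleanup are
all illustrative; this is not kernel code):

```c
#include <stddef.h>
#include <stdlib.h>

/* Split the bucket vector into page-sized leaves so that no single
 * allocation exceeds one 4 KiB page (512 eight-byte pointers each). */
#define LEAF_SHIFT 9
#define LEAF_SIZE  (1UL << LEAF_SHIFT)
#define LEAF_MASK  (LEAF_SIZE - 1)

struct two_level_tbl {
	size_t nbuckets;
	void ***leaves;		/* array of per-page pointer arrays */
};

static struct two_level_tbl *tl_alloc(size_t nbuckets)
{
	size_t i, nleaves = (nbuckets + LEAF_SIZE - 1) >> LEAF_SHIFT;
	struct two_level_tbl *tbl = calloc(1, sizeof(*tbl));

	if (!tbl)
		return NULL;
	tbl->nbuckets = nbuckets;
	tbl->leaves = calloc(nleaves, sizeof(*tbl->leaves));
	for (i = 0; tbl->leaves && i < nleaves; i++)
		tbl->leaves[i] = calloc(LEAF_SIZE, sizeof(void *));
	return tbl;
}

/* Bucket lookup pays the extra data cache line access mentioned
 * above: one load for the leaf pointer, one for the bucket itself. */
static void **tl_bucket(struct two_level_tbl *tbl, size_t hash)
{
	return &tbl->leaves[hash >> LEAF_SHIFT][hash & LEAF_MASK];
}
```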

	David



* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 14:27       ` David Laight
@ 2015-02-20 14:31         ` tgraf
  2015-02-20 14:58           ` Eric Dumazet
  2015-02-20 14:54         ` Eric Dumazet
  1 sibling, 1 reply; 18+ messages in thread
From: tgraf @ 2015-02-20 14:31 UTC (permalink / raw)
  To: David Laight
  Cc: 'Eric Dumazet', 'Daniel Borkmann', davem, johunt, netdev

On 02/20/15 at 02:27pm, David Laight wrote:
> I presume this code is allocating very large arrays for the base of the hash lists.
> Since there is no absolute requirement for contiguous KVA (nothing
> sequentially accesses the entire array) it can be coded differently.
> 
> I realize that this would involve an extra data cache line access.
> So you'd want to avoid it for small tables.
> (I expected this to be your objection)

We can consider switching over to flex arrays at some point
although NUMA considerations seem much more important to me
at this point.


* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 14:27       ` David Laight
  2015-02-20 14:31         ` tgraf
@ 2015-02-20 14:54         ` Eric Dumazet
  1 sibling, 0 replies; 18+ messages in thread
From: Eric Dumazet @ 2015-02-20 14:54 UTC (permalink / raw)
  To: David Laight; +Cc: 'Daniel Borkmann', davem, tgraf, johunt, netdev

On Fri, 2015-02-20 at 14:27 +0000, David Laight wrote:

> I presume this code is allocating very large arrays for the base of the hash lists.
> Since there is no absolute requirement for contiguous KVA (nothing
> sequentially accesses the entire array) it can be coded differently.

vmalloc() is the proper allocator for hash table, even very big ones.

So far, no user wants a rhashtable so huge that it would not fit in the
vmalloc() reserved space.

$ grep Vmalloc /proc/meminfo 
VmallocTotal:   34359738367 kB
VmallocUsed:      377492 kB
VmallocChunk:   34359350872 kB

It seems we have some room.


* Re: [PATCH net 2/3] rhashtable: better high order allocation attempts
  2015-02-20 14:31         ` tgraf
@ 2015-02-20 14:58           ` Eric Dumazet
  0 siblings, 0 replies; 18+ messages in thread
From: Eric Dumazet @ 2015-02-20 14:58 UTC (permalink / raw)
  To: tgraf; +Cc: David Laight, 'Daniel Borkmann', davem, johunt, netdev

On Fri, 2015-02-20 at 14:31 +0000, tgraf@suug.ch wrote:

> We can consider switching over to flex arrays at some point
> although NUMA considerations seem much more important to me
> at this point.

Flex arrays add a performance cost much higher than the TLB misses of
vmalloc().

I mentioned in the past that vmalloc() could eventually use hugepages,
so even this TLB cost could be lowered if wanted.


* Re: [PATCH net 0/3] rhashtable updates
  2015-02-19 23:53 [PATCH net 0/3] rhashtable updates Daniel Borkmann
                   ` (2 preceding siblings ...)
  2015-02-19 23:53 ` [PATCH net 3/3] rhashtable: allow to unload test module Daniel Borkmann
@ 2015-02-20 22:38 ` David Miller
  3 siblings, 0 replies; 18+ messages in thread
From: David Miller @ 2015-02-20 22:38 UTC (permalink / raw)
  To: daniel; +Cc: tgraf, johunt, netdev

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Fri, 20 Feb 2015 00:53:36 +0100

> Daniel Borkmann (3):
>   rhashtable: don't test for shrink on insert, expansion on delete
>   rhashtable: better high order allocation attempts
>   rhashtable: allow to unload test module

Series applied, thanks Daniel.
