[v2] rhashtable: Still do rehash when we get EEXIST

Message ID 20190124030841.n4jtsqka5zji3e62@gondor.apana.org.au
State Accepted
Commit 408f13ef358aa5ad56dc6230c2c7deb92cf462b1

Commit Message

Herbert Xu Jan. 24, 2019, 3:08 a.m. UTC
On Wed, Jan 23, 2019 at 01:17:58PM -0800, Josh Elsasser wrote:
> When running workloads with large bursts of fragmented packets, we've seen
> a few machines stuck returning -EEXIST from rht_shrink() and endlessly
> rescheduling their hash table's deferred work, pegging a CPU core.
> 
> Root cause is commit da20420f83ea ("rhashtable: Add nested tables"), which
> stops ignoring the return code of rhashtable_shrink() and the reallocs
> used to grow the hashtable. This uncovers a bug in the shrink logic where
> the "needs to shrink" check runs against the last table, but the actual shrink
> operation runs on the first bucket_table in the hashtable (see below):
> 
>  +-------+    +--------------+          +---------------+
>  | ht    |    | "first" tbl  |          | "last" tbl    |
>  | - tbl ---> | - future_tbl ---------> |  - future_tbl ---> NULL
>  +-------+    +--------------+          +---------------+
>                ^^^                          ^^^
>                used by rhashtable_shrink()  used by rht_shrink_below_30()
> 
> A rehash then stalls out when the last table needs to shrink and the
> first table has more elements than the target size, but rht_shrink() hits
> a non-NULL future_tbl and returns -EEXIST. This skips the item rehashing
> and kicks off a reschedule loop, as no forward progress can be made while
> the rhashtable needs to shrink.
> 
> Extend rhashtable_shrink() with a "tbl" param to avoid endless exit-and-
> reschedules after hitting the EEXIST, allowing it to check a future_tbl
> pointer that can actually be non-NULL and make forward progress when the
> hashtable needs to shrink.
> 
> Fixes: da20420f83ea ("rhashtable: Add nested tables")
> Signed-off-by: Josh Elsasser <jelsasser@appneta.com>

Thanks for catching this!

Although I think we should fix this in a different way.  The problem
here is that the shrink cannot proceed because there was a previous
rehash that is still incomplete.  We should wait for its completion
and then reattempt a shrink should it still be necessary.

So something like this:

---8<---
As it stands if a shrink is delayed because of an outstanding
rehash, we will go into a rescheduling loop without ever doing
the rehash.

This patch fixes this by still carrying out the rehash and then
rescheduling so that we can shrink after the completion of the
rehash should it still be necessary.

The return value of EEXIST captures this case and other cases
(e.g., another thread expanded/rehashed the table at the same
time) where we should still proceed with the rehash.

Fixes: da20420f83ea ("rhashtable: Add nested tables")
Reported-by: Josh Elsasser <jelsasser@appneta.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Comments

Josh Elsasser Jan. 24, 2019, 3:40 a.m. UTC | #1
On Jan 23, 2019, at 7:08 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:

> Thanks for catching this!
> 
> Although I think we should fix this in a different way.  The problem
> here is that the shrink cannot proceed because there was a previous
> rehash that is still incomplete.  We should wait for its completion
> and then reattempt a shrink should it still be necessary.
> 
> So something like this:

SGTM. 

I can't test this right now because our VM server's down after a power
outage this evening, but I tried a similar patch that swallowed the
-EEXIST err and even with that oversight the hashtable dodged the
reschedule loop.

- Josh
Josh Elsasser Jan. 26, 2019, 10:02 p.m. UTC | #2
On Jan 23, 2019, at 7:40 PM, Josh Elsasser <jelsasser@appneta.com> wrote:
> On Jan 23, 2019, at 7:08 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> 
>> Thanks for catching this!
>> 
>> Although I think we should fix this in a different way.  The problem
>> here is that the shrink cannot proceed because there was a previous
>> rehash that is still incomplete.  We should wait for its completion
>> and then reattempt a shrink should it still be necessary.
> 
> I can't test this right now because our VM server's down 

Got one of the poor little reproducer VMs back up and running and loaded
up this patch. Works like a charm. For the v2 PATCH, you can add my:

Tested-by: Josh Elsasser <jelsasser@appneta.com>
Josh Hunt March 20, 2019, 10:39 p.m. UTC | #3
On Sat, Jan 26, 2019 at 2:03 PM Josh Elsasser <jelsasser@appneta.com> wrote:
>
> On Jan 23, 2019, at 7:40 PM, Josh Elsasser <jelsasser@appneta.com> wrote:
> > On Jan 23, 2019, at 7:08 PM, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> >
> >> Thanks for catching this!
> >>
> >> Although I think we should fix this in a different way.  The problem
> >> here is that the shrink cannot proceed because there was a previous
> >> rehash that is still incomplete.  We should wait for its completion
> >> and then reattempt a shrink should it still be necessary.
> >
> > I can't test this right now because our VM server's down
>
> Got one of the poor little reproducer VMs back up and running and loaded
> up this patch. Works like a charm. For the v2 PATCH, you can add my:
>
> Tested-by: Josh Elsasser <jelsasser@appneta.com>

Trying again... Gmail sent HTML mail the first time.

Herbert

We're seeing this pretty regularly on 4.14 LTS kernels. I didn't see
your change in any of the regular trees. Are there plans to submit
this? If so, can it get queued up for 4.14 stable too?

Thanks!

Patch

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 852ffa5160f1..4edcf3310513 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -416,8 +416,12 @@  static void rht_deferred_worker(struct work_struct *work)
 	else if (tbl->nest)
 		err = rhashtable_rehash_alloc(ht, tbl, tbl->size);
 
-	if (!err)
-		err = rhashtable_rehash_table(ht);
+	if (!err || err == -EEXIST) {
+		int nerr;
+
+		nerr = rhashtable_rehash_table(ht);
+		err = err ?: nerr;
+	}
 
 	mutex_unlock(&ht->mutex);
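
For illustration only, the control-flow change in the hunk above can be modeled
as a standalone sketch (these are hypothetical helpers, not functions from
lib/rhashtable.c): the old worker skipped rhashtable_rehash_table() whenever
the grow/shrink allocation step failed, so -EEXIST just rescheduled forever,
while the fixed worker also rehashes on -EEXIST and keeps the first nonzero
error via `err ?: nerr`, so it still reschedules and can retry the shrink once
the outstanding rehash has completed.

```c
#include <errno.h>
#include <stdbool.h>

/* Standalone model of rht_deferred_worker()'s error handling.
 * Illustrative only; not the real kernel code. */

/* Before the fix: the rehash step ran only when the grow/shrink
 * allocation step succeeded, so -EEXIST looped without progress. */
static bool old_runs_rehash(int alloc_err)
{
	return alloc_err == 0;
}

/* After the fix: also carry out the rehash when the allocation step
 * returned -EEXIST (an earlier rehash is still outstanding). */
static bool new_runs_rehash(int alloc_err)
{
	return alloc_err == 0 || alloc_err == -EEXIST;
}

/* err = err ?: nerr -- keep the first nonzero error so the worker
 * still sees -EEXIST, reschedules, and can reattempt the shrink
 * after the rehash completes. */
static int combine_err(int alloc_err, int rehash_err)
{
	return alloc_err ? alloc_err : rehash_err;
}
```

With these helpers, the stuck case reads: old_runs_rehash(-EEXIST) is false
(no forward progress), new_runs_rehash(-EEXIST) is true, and
combine_err(-EEXIST, 0) stays -EEXIST so the worker reschedules itself.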