* [PATCH 0/3 net-next] Minor rhashtable fixes
From: Thomas Graf @ 2015-03-13 14:45 UTC
  To: davem; +Cc: netdev, herbert, daniel

Thomas Graf (3):
  rhashtable: Avoid calculating hash again to unlock
  rhashtable: Use spin_lock_bh_nested() consistently
  rhashtable: Annotate RCU locking of walkers

 lib/rhashtable.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

---
Applies on top of Herbert's fixes:
("[PATCH 0/6] rhashtable: Fixes + cleanups + preparation for multiple rehash")


* [PATCH 1/3 net-next] rhashtable: Avoid calculating hash again to unlock
From: Thomas Graf @ 2015-03-13 14:45 UTC
  To: davem; +Cc: netdev, herbert, daniel

Caching the lock pointer avoids having to hash on the object
again to unlock the bucket locks.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
---
 lib/rhashtable.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 9d53a46..963aa03 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -384,14 +384,16 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 	struct rhash_head *head;
 	bool no_resize_running;
 	unsigned hash;
+	spinlock_t *old_lock;
 	bool success = true;
 
 	rcu_read_lock();
 
 	old_tbl = rht_dereference_rcu(ht->tbl, ht);
 	hash = head_hashfn(ht, old_tbl, obj);
+	old_lock = bucket_lock(old_tbl, hash);
 
-	spin_lock_bh(bucket_lock(old_tbl, hash));
+	spin_lock_bh(old_lock);
 
 	/* Because we have already taken the bucket lock in old_tbl,
 	 * if we find that future_tbl is not yet visible then that
@@ -428,13 +430,10 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 		schedule_work(&ht->run_work);
 
 exit:
-	if (tbl != old_tbl) {
-		hash = head_hashfn(ht, tbl, obj);
+	if (tbl != old_tbl)
 		spin_unlock(bucket_lock(tbl, hash));
-	}
 
-	hash = head_hashfn(ht, old_tbl, obj);
-	spin_unlock_bh(bucket_lock(old_tbl, hash));
+	spin_unlock_bh(old_lock);
 
 	rcu_read_unlock();
 
-- 
1.9.3


* [PATCH 2/3 net-next] rhashtable: Use spin_lock_bh_nested() consistently
From: Thomas Graf @ 2015-03-13 14:45 UTC
  To: davem; +Cc: netdev, herbert, daniel

No change in behaviour, as the outer lock already disables
softirqs, but it prevents bugs down the line because this
lock logically requires the BH variant.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
---
 lib/rhashtable.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 963aa03..4de97e1f 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -233,7 +233,7 @@ static int rhashtable_rehash_one(struct rhashtable *ht, unsigned old_hash)
 
 	new_bucket_lock = bucket_lock(new_tbl, new_hash);
 
-	spin_lock_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);
+	spin_lock_bh_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);
 	head = rht_dereference_bucket(new_tbl->buckets[new_hash],
 				      new_tbl, new_hash);
 
@@ -243,7 +243,7 @@ static int rhashtable_rehash_one(struct rhashtable *ht, unsigned old_hash)
 		RCU_INIT_POINTER(entry->next, head);
 
 	rcu_assign_pointer(new_tbl->buckets[new_hash], entry);
-	spin_unlock(new_bucket_lock);
+	spin_unlock_bh(new_bucket_lock);
 
 	rcu_assign_pointer(*pprev, next);
 
@@ -404,7 +404,7 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 	tbl = rht_dereference_rcu(old_tbl->future_tbl, ht) ?: old_tbl;
 	if (tbl != old_tbl) {
 		hash = head_hashfn(ht, tbl, obj);
-		spin_lock_nested(bucket_lock(tbl, hash), SINGLE_DEPTH_NESTING);
+		spin_lock_bh_nested(bucket_lock(tbl, hash), SINGLE_DEPTH_NESTING);
 	}
 
 	if (compare &&
@@ -431,7 +431,7 @@ static bool __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
 
 exit:
 	if (tbl != old_tbl)
-		spin_unlock(bucket_lock(tbl, hash));
+		spin_unlock_bh(bucket_lock(tbl, hash));
 
 	spin_unlock_bh(old_lock);
 
-- 
1.9.3


* [PATCH 3/3 net-next] rhashtable: Annotate RCU locking of walkers
From: Thomas Graf @ 2015-03-13 14:45 UTC
  To: davem; +Cc: netdev, herbert, daniel

Fixes the following sparse warnings:

lib/rhashtable.c:767:5: warning: context imbalance in 'rhashtable_walk_start' - wrong count at exit
lib/rhashtable.c:849:6: warning: context imbalance in 'rhashtable_walk_stop' - unexpected unlock

Fixes: f2dba9c6ff0d ("rhashtable: Introduce rhashtable_walk_*")
Signed-off-by: Thomas Graf <tgraf@suug.ch>
---
 lib/rhashtable.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 4de97e1f..d87b989 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -762,6 +762,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
  * by calling rhashtable_walk_next.
  */
 int rhashtable_walk_start(struct rhashtable_iter *iter)
+	__acquires(RCU)
 {
 	struct rhashtable *ht = iter->ht;
 
@@ -849,6 +850,7 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_next);
  * Finish a hash table walk.
  */
 void rhashtable_walk_stop(struct rhashtable_iter *iter)
+	__releases(RCU)
 {
 	struct rhashtable *ht;
 	struct bucket_table *tbl = iter->walker->tbl;
-- 
1.9.3


* Re: [PATCH 2/3 net-next] rhashtable: Use spin_lock_bh_nested() consistently
From: David Miller @ 2015-03-13 16:54 UTC
  To: tgraf; +Cc: netdev, herbert, daniel

From: Thomas Graf <tgraf@suug.ch>
Date: Fri, 13 Mar 2015 15:45:20 +0100

> No change in behaviour, as the outer lock already disables
> softirqs, but it prevents bugs down the line because this
> lock logically requires the BH variant.
> 
> Signed-off-by: Thomas Graf <tgraf@suug.ch>

I would prefer you don't do this.

x_bh() may be relatively cheap, but it is not zero cost.

If there is an invariant that when we are called here BH
is disabled, make it explicit.
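
Concretely, that could look something like the following (an
illustrative sketch only, not a posted patch):

	/* Sketch: assert the invariant instead of paying for _bh again.
	 * BH must already be disabled by the outer bucket lock.
	 */
	WARN_ON_ONCE(!in_softirq());
	spin_lock_nested(new_bucket_lock, SINGLE_DEPTH_NESTING);
	/* ... relink the entry ... */
	spin_unlock(new_bucket_lock);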


* Re: [PATCH 1/3 net-next] rhashtable: Avoid calculating hash again to unlock
From: Herbert Xu @ 2015-03-14  2:20 UTC
  To: Thomas Graf; +Cc: davem, netdev, daniel

On Fri, Mar 13, 2015 at 03:45:19PM +0100, Thomas Graf wrote:
> Caching the lock pointer avoids having to hash on the object
> again to unlock the bucket locks.
> 
> Signed-off-by: Thomas Graf <tgraf@suug.ch>

Looks good to me.  I originally avoided doing this because I
thought I had to hold every bucket lock from the first table to
the last with multiple rehashing.  However, my new scheme only
requires the "first" table and the last table to be locked so
caching it is fine.
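
For reference, the resulting pattern boils down to this (a sketch
of the shape of the change, not the exact posted hunk):

	spinlock_t *old_lock;

	old_lock = bucket_lock(old_tbl, head_hashfn(ht, old_tbl, obj));
	spin_lock_bh(old_lock);
	/* ... insert into the table ... */
	spin_unlock_bh(old_lock);	/* no second hash computation */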

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 2/3 net-next] rhashtable: Use spin_lock_bh_nested() consistently
From: Herbert Xu @ 2015-03-14  2:21 UTC
  To: David Miller; +Cc: tgraf, netdev, daniel

On Fri, Mar 13, 2015 at 12:54:11PM -0400, David Miller wrote:
> From: Thomas Graf <tgraf@suug.ch>
> Date: Fri, 13 Mar 2015 15:45:20 +0100
> 
> > No change in behaviour, as the outer lock already disables
> > softirqs, but it prevents bugs down the line because this
> > lock logically requires the BH variant.
> > 
> > Signed-off-by: Thomas Graf <tgraf@suug.ch>
> 
> I would prefer you don't do this.
> 
> x_bh() may be relatively cheap, but it is not zero cost.
> 
> If there is an invariant that when we are called here BH
> is disabled, make it explicit.

Agreed.  I dropped the _bh precisely for this reason when I did
the arbitrary rehash.  Please don't add it back because it serves
zero purpose.  Only the outside lock should do _bh while the
nested one should not.
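
In other words, the intended shape is (illustrative sketch):

	spin_lock_bh(old_bucket_lock);		/* outer lock disables BH */
	spin_lock_nested(new_bucket_lock,
			 SINGLE_DEPTH_NESTING);	/* plain: BH already off */
	/* ... */
	spin_unlock(new_bucket_lock);
	spin_unlock_bh(old_bucket_lock);	/* BH re-enabled here */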

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 2/3 net-next] rhashtable: Use spin_lock_bh_nested() consistently
From: Thomas Graf @ 2015-03-16  8:44 UTC
  To: Herbert Xu; +Cc: David Miller, netdev, daniel

On 03/14/15 at 01:21pm, Herbert Xu wrote:
> On Fri, Mar 13, 2015 at 12:54:11PM -0400, David Miller wrote:
> > From: Thomas Graf <tgraf@suug.ch>
> > Date: Fri, 13 Mar 2015 15:45:20 +0100
> > 
> > > No change in behaviour, as the outer lock already disables
> > > softirqs, but it prevents bugs down the line because this
> > > lock logically requires the BH variant.
> > > 
> > > Signed-off-by: Thomas Graf <tgraf@suug.ch>
> > 
> > I would prefer you don't do this.

OK, I'm dropping this patch.

> > x_bh() may be relatively cheap, but it is not zero cost.
> > 
> > If there is an invariant that when we are called here BH
> > is disabled, make it explicit.

I assume you are referring to the preempt disabled case. Fair enough.

> Agreed.  I dropped the _bh precisely for this reason when I did
> the arbitrary rehash.  Please don't add it back because it serves
> zero purpose.  Only the outside lock should do _bh while the
> nested one should not.

A note in the commit would have helped; it looked like an
accidental change.
