* [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels
@ 2013-06-10 21:36 Paul Gortmaker
  2013-06-10 21:36 ` [PATCH 1/2] list_bl.h: make list head locking RT safe Paul Gortmaker
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Paul Gortmaker @ 2013-06-10 21:36 UTC (permalink / raw)
  To: rostedt, tglx, bigeasy, paulmck; +Cc: linux-rt-users, Paul Gortmaker

The bit_spin_locks have been problematic on RT for a long time; dating
at least back to 2.6.11 and their use in the journalling code[1].  We
still have patches today that clobber them for cgroups and jbd/jbd2
code on RT[2].  But there have been some newer users added.
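
For context, bit_spin_lock() itself looks roughly like this (a sketch
of include/linux/bit_spinlock.h; details may differ slightly by
version):

static inline void bit_spin_lock(int bitnum, unsigned long *addr)
{
	/* preemption stays off for as long as the bit is held */
	preempt_disable();
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
	while (unlikely(test_and_set_bit_lock(bitnum, addr))) {
		preempt_enable();
		do {
			cpu_relax();
		} while (test_bit(bitnum, addr));
		preempt_disable();
	}
#endif
	__acquire(bitlock);
}

There is no lock structure behind the bit, so there is nothing for
lockdep or priority inheritance to hook into, and no obvious way to
substitute a sleeping lock on RT.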

In commit 4e35e6070b1c [2.6.38] ("kernel: add bl_list") we got
the list heads with the zero'th bit reserved for locking.

It was shortly followed by ceb5bdc2d24 ("fs: dcache per-bucket
dcache hash locking"), which made it clear the bit was now being used
in a bit_spin_lock context (e.g. in fs/dcache.c).

As of commit 1879fd6a265 [2.6.39] ("add hlist_bl_lock/unlock helpers")
we got helper functions that combined the open coded bit locks into
one place.  At the same time, they make it clearer that bit_spin_lock
is being used, and where.
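
For reference, the zeroth bit of the head pointer doubles as the lock
word; sketching roughly from mainline list_bl.h, readers mask it off
and writers take it with bit_spin_lock():

static inline struct hlist_bl_node *hlist_bl_first(struct hlist_bl_head *h)
{
	/* strip the lock bit before dereferencing the first node */
	return (struct hlist_bl_node *)
		((unsigned long)h->first & ~LIST_BL_LOCKMASK);
}

static inline void hlist_bl_lock(struct hlist_bl_head *b)
{
	bit_spin_lock(0, (unsigned long *)b);
}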

Assuming that we still cannot use bit_spin_lock safely on RT, any
use of these helpers is also unsafe.  Following the style of "fix"
used for the jbd code[2], I've done a similar thing here and
introduced a stand-alone lock for the list head.  This may be less
than ideal from a performance standpoint -- currently unclear to me.

I can't pin an actual failing on not having these patches present; I
came by it simply by inspecting the jbd2 code while trying to diagnose
another problem (one which these patches unfortunately don't fix) and
ended up searching for users of bit_spin.

Noting the above, there is also another use case which may be
undesirable for RT -- for the RT trees which now support SLUB,
there is a bit_spin_lock used for slab_lock/slab_unlock....  I'll
only mention it here and leave it for a separate thread/discussion.

I'm calling these RFC patches, meant just to start discussion.  The
two patches could obviously be squashed into one, but I wanted the
2nd (raw lock) to remain separate since it shows why it becomes raw,
and I'm not yet 100% convinced that making it raw is actually OK
from a latency perspective.

In addition, we probably want to be looking at Eric/PaulM's patch
currently in net-next:  c87a124a5d ("net: force a reload of first
item in hlist_nulls_for_each_entry_rcu") [3] -- as a candidate for
cherry-pick onto RT, I think.  It will get there eventually via
DaveM --> GregKH --> Steve path (for the rt-stable branches).

Patches attached here were from a v3.6.11.5-rt37 based tree.

Paul.
--

[1] http://linux.kernel.narkive.com/octAmqz8/patch-real-time-preemption-rt-2-6-11-rc3-v0-7-38-01.4

[2] http://git.kernel.org/cgit/linux/kernel/git/paulg/3.6-rt-patches.git/tree/mm-cgroup-page-bit-spinlock.patch
    http://git.kernel.org/cgit/linux/kernel/git/paulg/3.6-rt-patches.git/tree/fs-jbd-replace-bh_state-lock.patch

[3] http://patchwork.ozlabs.org/patch/247360/
    http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=c87a124a5d5e8cf8e21c4363c3372bcaf53ea190

Paul Gortmaker (2):
  list_bl.h: make list head locking RT safe
  list_bl: make list head lock a raw lock

 include/linux/list_bl.h | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

-- 
1.8.1.2



* [PATCH 1/2] list_bl.h: make list head locking RT safe
  2013-06-10 21:36 [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels Paul Gortmaker
@ 2013-06-10 21:36 ` Paul Gortmaker
  2013-06-21 12:23   ` Sebastian Andrzej Siewior
  2013-06-10 21:36 ` [PATCH 2/2] list_bl: make list head lock a raw lock Paul Gortmaker
  2013-06-10 21:56 ` [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels Paul E. McKenney
  2 siblings, 1 reply; 9+ messages in thread
From: Paul Gortmaker @ 2013-06-10 21:36 UTC (permalink / raw)
  To: rostedt, tglx, bigeasy, paulmck; +Cc: linux-rt-users, Paul Gortmaker

As per changes in include/linux/jbd_common.h for avoiding the
bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal
head lock rt safe") we continue the assumption that bit_spin_locks
are unsafe for RT and hence do the same thing here.

Force the LIST_BL_LOCKMASK to zero for RT, since we won't be
setting the zero'th bit as the locked/unlocked state.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 include/linux/list_bl.h | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 31f9d75..9c46fea 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -2,6 +2,7 @@
 #define _LINUX_LIST_BL_H
 
 #include <linux/list.h>
+#include <linux/spinlock.h>
 #include <linux/bit_spinlock.h>
 
 /*
@@ -17,7 +18,7 @@
  * some fast and compact auxiliary data.
  */
 
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)) && !defined(CONFIG_PREEMPT_RT_BASE)
 #define LIST_BL_LOCKMASK	1UL
 #else
 #define LIST_BL_LOCKMASK	0UL
@@ -32,13 +33,22 @@
 
 struct hlist_bl_head {
 	struct hlist_bl_node *first;
+#ifdef CONFIG_PREEMPT_RT_BASE
+	spinlock_t lock;
+#endif
 };
 
 struct hlist_bl_node {
 	struct hlist_bl_node *next, **pprev;
 };
-#define INIT_HLIST_BL_HEAD(ptr) \
-	((ptr)->first = NULL)
+
+static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
+{
+	h->first = NULL;
+#ifdef CONFIG_PREEMPT_RT_BASE
+	spin_lock_init(&h->lock);
+#endif
+}
 
 static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
 {
@@ -117,12 +127,20 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(0, (unsigned long *)b);
+#else
+	spin_lock(&b->lock);
+#endif
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	__bit_spin_unlock(0, (unsigned long *)b);
+#else
+	spin_unlock(&b->lock);
+#endif
 }
 
 /**
-- 
1.8.1.2



* [PATCH 2/2] list_bl: make list head lock a raw lock
  2013-06-10 21:36 [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels Paul Gortmaker
  2013-06-10 21:36 ` [PATCH 1/2] list_bl.h: make list head locking RT safe Paul Gortmaker
@ 2013-06-10 21:36 ` Paul Gortmaker
  2013-06-10 21:56 ` [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels Paul E. McKenney
  2 siblings, 0 replies; 9+ messages in thread
From: Paul Gortmaker @ 2013-06-10 21:36 UTC (permalink / raw)
  To: rostedt, tglx, bigeasy, paulmck; +Cc: linux-rt-users, Paul Gortmaker

As a bit spinlock, we had no lockdep visibility into the usage
of the list head locking.  Now, as a separate lock, we see:

[    3.613354] BUG: sleeping function called from invalid context at kernel/rtmutex.c:658
[    3.613356] in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd
[    3.613357] 5 locks held by udevd/122:
[    3.613358]  #0:  (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0
[    3.613363]  #1:  (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60
[    3.613367]  #2:  (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130
[    3.613370]  #3:  (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130
[    3.613373]  #4:  (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130
[    3.613377] Pid: 122, comm: udevd Not tainted 3.4.47-rt62-00002-gfedcea8 #7
[    3.613378] Call Trace:
[    3.613382]  [<ffffffff810b9624>] __might_sleep+0x134/0x1f0
[    3.613385]  [<ffffffff817a24d4>] rt_spin_lock+0x24/0x60
[    3.613387]  [<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0
[    3.613389]  [<ffffffff811a1b2d>] __d_drop+0x1d/0x40
[    3.613391]  [<ffffffff811a24be>] __d_move+0x8e/0x320
[    3.613393]  [<ffffffff811a278e>] d_move+0x3e/0x60
[    3.613394]  [<ffffffff81199598>] vfs_rename+0x198/0x4c0
[    3.613396]  [<ffffffff8119b093>] sys_renameat+0x213/0x240
[    3.613398]  [<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60
[    3.613401]  [<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0
[    3.613403]  [<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13
[    3.613406]  [<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[    3.613408]  [<ffffffff8119b0db>] sys_rename+0x1b/0x20
[    3.613410]  [<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f

For now, let's assume that the list head lock isn't held for big
stretches, and hence it being raw won't be a significant latency
concern.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---
 include/linux/list_bl.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 9c46fea..64ba33b 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -34,7 +34,7 @@
 struct hlist_bl_head {
 	struct hlist_bl_node *first;
 #ifdef CONFIG_PREEMPT_RT_BASE
-	spinlock_t lock;
+	raw_spinlock_t lock;
 #endif
 };
 
@@ -46,7 +46,7 @@ static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
 {
 	h->first = NULL;
 #ifdef CONFIG_PREEMPT_RT_BASE
-	spin_lock_init(&h->lock);
+	raw_spin_lock_init(&h->lock);
 #endif
 }
 
@@ -130,7 +130,7 @@ static inline void hlist_bl_lock(struct hlist_bl_head *b)
 #ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(0, (unsigned long *)b);
 #else
-	spin_lock(&b->lock);
+	raw_spin_lock(&b->lock);
 #endif
 }
 
@@ -139,7 +139,7 @@ static inline void hlist_bl_unlock(struct hlist_bl_head *b)
 #ifndef CONFIG_PREEMPT_RT_BASE
 	__bit_spin_unlock(0, (unsigned long *)b);
 #else
-	spin_unlock(&b->lock);
+	raw_spin_unlock(&b->lock);
 #endif
 }
 
-- 
1.8.1.2



* Re: [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels
  2013-06-10 21:36 [PATCH 0/2] Avoid more bit_spin_lock usage on RT kernels Paul Gortmaker
  2013-06-10 21:36 ` [PATCH 1/2] list_bl.h: make list head locking RT safe Paul Gortmaker
  2013-06-10 21:36 ` [PATCH 2/2] list_bl: make list head lock a raw lock Paul Gortmaker
@ 2013-06-10 21:56 ` Paul E. McKenney
  2 siblings, 0 replies; 9+ messages in thread
From: Paul E. McKenney @ 2013-06-10 21:56 UTC (permalink / raw)
  To: Paul Gortmaker; +Cc: rostedt, tglx, bigeasy, linux-rt-users

On Mon, Jun 10, 2013 at 05:36:47PM -0400, Paul Gortmaker wrote:
> The bit_spin_locks have been problematic on RT for a long time; dating
> at least back to 2.6.11 and their use in the journalling code[1].  We
> still have patches today that clobber them for cgroups and jbd/jbd2
> code on RT[2].  But there have been some newer users added.
> 
> In commit 4e35e6070b1c [2.6.38] ("kernel: add bl_list") we got
> the list heads with the zero'th bit reserved for locking.
> 
> It was shortly followed by ceb5bdc2d24 ("fs: dcache per-bucket
> dcache hash locking"), which made it clear the bit was now being used
> in a bit_spin_lock context (e.g. in fs/dcache.c).
> 
> As of commit 1879fd6a265 [2.6.39] ("add hlist_bl_lock/unlock helpers")
> we got helper functions that combined the open coded bit locks into
> one place.  At the same time, they make it clearer that bit_spin_lock
> is being used, and where.
> 
> Assuming that we still cannot use bit_spin_lock safely on RT, any
> use of these helpers is also unsafe.  Following the style of "fix"
> used for the jbd code[2], I've done a similar thing here and
> introduced a stand-alone lock for the list head.  This may be less
> than ideal from a performance standpoint -- currently unclear to me.
> 
> I can't pin an actual failing on not having these patches present; I
> came by it simply by inspecting the jbd2 code while trying to diagnose
> another problem (one which these patches unfortunately don't fix) and
> ended up searching for users of bit_spin.
> 
> Noting the above, there is also another use case which may be
> undesirable for RT -- for the RT trees which now support SLUB,
> there is a bit_spin_lock used for slab_lock/slab_unlock....  I'll
> only mention it here and leave it for a separate thread/discussion.
> 
> I'm calling these RFC patches, meant just to start discussion.  The
> two patches could obviously be squashed into one, but I wanted the
> 2nd (raw lock) to remain separate since it shows why it becomes raw,
> and I'm not yet 100% convinced that making it raw is actually OK
> from a latency perspective.
> 
> In addition, we probably want to be looking at Eric/PaulM's patch
> currently in net-next:  c87a124a5d ("net: force a reload of first
> item in hlist_nulls_for_each_entry_rcu") [3] -- as a candidate for
> cherry-pick onto RT, I think.  It will get there eventually via
> DaveM --> GregKH --> Steve path (for the rt-stable branches).
> 
> Patches attached here were from a v3.6.11.5-rt37 based tree.

They both look good to me:

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Paul.
> --
> 
> [1] http://linux.kernel.narkive.com/octAmqz8/patch-real-time-preemption-rt-2-6-11-rc3-v0-7-38-01.4
> 
> [2] http://git.kernel.org/cgit/linux/kernel/git/paulg/3.6-rt-patches.git/tree/mm-cgroup-page-bit-spinlock.patch
>     http://git.kernel.org/cgit/linux/kernel/git/paulg/3.6-rt-patches.git/tree/fs-jbd-replace-bh_state-lock.patch
> 
> [3] http://patchwork.ozlabs.org/patch/247360/
>     http://git.kernel.org/cgit/linux/kernel/git/davem/net-next.git/commit/?id=c87a124a5d5e8cf8e21c4363c3372bcaf53ea190
> 
> Paul Gortmaker (2):
>   list_bl.h: make list head locking RT safe
>   list_bl: make list head lock a raw lock
> 
>  include/linux/list_bl.h | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
> 
> -- 
> 1.8.1.2
> 



* Re: [PATCH 1/2] list_bl.h: make list head locking RT safe
  2013-06-10 21:36 ` [PATCH 1/2] list_bl.h: make list head locking RT safe Paul Gortmaker
@ 2013-06-21 12:23   ` Sebastian Andrzej Siewior
  2013-06-21 15:25     ` Paul Gortmaker
  0 siblings, 1 reply; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-06-21 12:23 UTC (permalink / raw)
  To: Paul Gortmaker; +Cc: rostedt, tglx, paulmck, linux-rt-users

* Paul Gortmaker | 2013-06-10 17:36:48 [-0400]:

>--- a/include/linux/list_bl.h
>+++ b/include/linux/list_bl.h
>@@ -17,7 +18,7 @@
>  * some fast and compact auxiliary data.
>  */
> 
>-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
>+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)) && !defined(CONFIG_PREEMPT_RT_BASE)
> #define LIST_BL_LOCKMASK	1UL
> #else
> #define LIST_BL_LOCKMASK	0UL

On RT we don't have the lock mask anymore, so those LIST_BL_BUG_ON()
checks don't do anything.  It would be nice if you could check whether
the lock in that given entry is taken or not.
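
The checks in question look roughly like this in mainline list_bl.h
(a sketch; the exact form may vary by version):

static inline void hlist_bl_set_first(struct hlist_bl_head *h,
				      struct hlist_bl_node *n)
{
	/* node pointers must not carry the lock bit ... */
	LIST_BL_BUG_ON((unsigned long)n & LIST_BL_LOCKMASK);
	/* ... and the head must be locked while we modify it */
	LIST_BL_BUG_ON(((unsigned long)h->first & LIST_BL_LOCKMASK) !=
							LIST_BL_LOCKMASK);
	h->first = (struct hlist_bl_node *)((unsigned long)n | LIST_BL_LOCKMASK);
}

With LIST_BL_LOCKMASK forced to 0UL, both of those become no-ops.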

All in all it looks good.

Sebastian


* Re: [PATCH 1/2] list_bl.h: make list head locking RT safe
  2013-06-21 12:23   ` Sebastian Andrzej Siewior
@ 2013-06-21 15:25     ` Paul Gortmaker
  2013-06-21 15:36       ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Gortmaker @ 2013-06-21 15:25 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior; +Cc: rostedt, tglx, paulmck, linux-rt-users

[Re: [PATCH 1/2] list_bl.h: make list head locking RT safe] On 21/06/2013 (Fri 14:23) Sebastian Andrzej Siewior wrote:

> * Paul Gortmaker | 2013-06-10 17:36:48 [-0400]:
> 
> >--- a/include/linux/list_bl.h
> >+++ b/include/linux/list_bl.h
> >@@ -17,7 +18,7 @@
> >  * some fast and compact auxiliary data.
> >  */
> > 
> >-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
> >+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)) && !defined(CONFIG_PREEMPT_RT_BASE)
> > #define LIST_BL_LOCKMASK	1UL
> > #else
> > #define LIST_BL_LOCKMASK	0UL
> 
> On RT we don't have the lock mask anymore and those LIST_BL_BUG_ON()
> check don't do anything. It would be nice if you could check if the lock
> in that given entry is taken or not.
> 
> All in all it looks good.

I guess the easiest way to make use of the existing checks is to
simply set/clear the bit inside the lock.  That will allow the
existing code to check if the list got smashed by something.

If the incremental patch below is what you had in mind, then I'll
resend a v2 with this change incorporated.

Thanks,
Paul.
--


diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 64ba33b..a3ff8d7 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -18,7 +18,7 @@
  * some fast and compact auxiliary data.
  */
 
-#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)) && !defined(CONFIG_PREEMPT_RT_BASE)
+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK))
 #define LIST_BL_LOCKMASK	1UL
 #else
 #define LIST_BL_LOCKMASK	0UL
@@ -131,6 +131,7 @@ static inline void hlist_bl_lock(struct hlist_bl_head *b)
 	bit_spin_lock(0, (unsigned long *)b);
 #else
 	raw_spin_lock(&b->lock);
+	__set_bit(0, (unsigned long *)b);
 #endif
 }
 
@@ -139,6 +140,7 @@ static inline void hlist_bl_unlock(struct hlist_bl_head *b)
 #ifndef CONFIG_PREEMPT_RT_BASE
 	__bit_spin_unlock(0, (unsigned long *)b);
 #else
+	__clear_bit(0, (unsigned long *)b);
 	raw_spin_unlock(&b->lock);
 #endif
 }


* Re: [PATCH 1/2] list_bl.h: make list head locking RT safe
  2013-06-21 15:25     ` Paul Gortmaker
@ 2013-06-21 15:36       ` Sebastian Andrzej Siewior
  2013-06-21 19:07         ` [PATCH v2] " Paul Gortmaker
  0 siblings, 1 reply; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-06-21 15:36 UTC (permalink / raw)
  To: Paul Gortmaker; +Cc: rostedt, tglx, paulmck, linux-rt-users

On 06/21/2013 05:25 PM, Paul Gortmaker wrote:
> I guess the easiest way to make use of the existing checks is to
> simply set/clear the bit inside the lock.  That will allow the
> existing code to check if the list got smashed by something.
> 
> If the incremental patch below is what you had in mind, then I'll
> resend a v2 with this change incorporated.

Yes, that is probably the easiest thing.

> Thanks,
> Paul.
> --

Sebastian


* [PATCH v2] list_bl.h: make list head locking RT safe
  2013-06-21 15:36       ` Sebastian Andrzej Siewior
@ 2013-06-21 19:07         ` Paul Gortmaker
  2013-06-28 11:23           ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Gortmaker @ 2013-06-21 19:07 UTC (permalink / raw)
  To: bigeasy; +Cc: rostedt, tglx, paulmck, linux-rt-users, Paul Gortmaker

As per changes in include/linux/jbd_common.h for avoiding the
bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal
head lock rt safe") we do the same thing here.

We use the non-atomic __set_bit and __clear_bit inside the scope of
the lock to preserve the ability of the existing LIST_DEBUG code to
use the zero'th bit in the sanity checks.

As a bit spinlock, we had no lockdep visibility into the usage
of the list head locking.  Now, if we were to implement it as a
standard non-raw spinlock, we would see:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:658
in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd
5 locks held by udevd/122:
 #0:  (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0
 #1:  (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60
 #2:  (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130
 #3:  (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130
 #4:  (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130
Pid: 122, comm: udevd Not tainted 3.4.47-rt62 #7
Call Trace:
 [<ffffffff810b9624>] __might_sleep+0x134/0x1f0
 [<ffffffff817a24d4>] rt_spin_lock+0x24/0x60
 [<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0
 [<ffffffff811a1b2d>] __d_drop+0x1d/0x40
 [<ffffffff811a24be>] __d_move+0x8e/0x320
 [<ffffffff811a278e>] d_move+0x3e/0x60
 [<ffffffff81199598>] vfs_rename+0x198/0x4c0
 [<ffffffff8119b093>] sys_renameat+0x213/0x240
 [<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60
 [<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0
 [<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13
 [<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [<ffffffff8119b0db>] sys_rename+0x1b/0x20
 [<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f

Since we are only taking the lock during short-lived list operations,
let's assume for now that it being raw won't be a significant latency
concern.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
---

[v2: squash the previous two commits into one; add set_bit to
 preserve the ability of LIST_DEBUG to do sanity checks.]

 include/linux/list_bl.h | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 31f9d75..ddfd46a 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -2,6 +2,7 @@
 #define _LINUX_LIST_BL_H
 
 #include <linux/list.h>
+#include <linux/spinlock.h>
 #include <linux/bit_spinlock.h>
 
 /*
@@ -32,13 +33,22 @@
 
 struct hlist_bl_head {
 	struct hlist_bl_node *first;
+#ifdef CONFIG_PREEMPT_RT_BASE
+	raw_spinlock_t lock;
+#endif
 };
 
 struct hlist_bl_node {
 	struct hlist_bl_node *next, **pprev;
 };
-#define INIT_HLIST_BL_HEAD(ptr) \
-	((ptr)->first = NULL)
+
+static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
+{
+	h->first = NULL;
+#ifdef CONFIG_PREEMPT_RT_BASE
+	raw_spin_lock_init(&h->lock);
+#endif
+}
 
 static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
 {
@@ -117,12 +127,22 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
 
 static inline void hlist_bl_lock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	bit_spin_lock(0, (unsigned long *)b);
+#else
+	raw_spin_lock(&b->lock);
+	__set_bit(0, (unsigned long *)b);
+#endif
 }
 
 static inline void hlist_bl_unlock(struct hlist_bl_head *b)
 {
+#ifndef CONFIG_PREEMPT_RT_BASE
 	__bit_spin_unlock(0, (unsigned long *)b);
+#else
+	__clear_bit(0, (unsigned long *)b);
+	raw_spin_unlock(&b->lock);
+#endif
 }
 
 /**
-- 
1.8.1.2



* Re: [PATCH v2] list_bl.h: make list head locking RT safe
  2013-06-21 19:07         ` [PATCH v2] " Paul Gortmaker
@ 2013-06-28 11:23           ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2013-06-28 11:23 UTC (permalink / raw)
  To: Paul Gortmaker; +Cc: rostedt, tglx, paulmck, linux-rt-users

On 06/21/2013 09:07 PM, Paul Gortmaker wrote:
> As per changes in include/linux/jbd_common.h for avoiding the
> bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal
> head lock rt safe") we do the same thing here.

applied.

Sebastian
