xen-devel.lists.xenproject.org archive mirror
* [PATCH v3 0/3] xen/locking: fix and enhance lock debugging
@ 2020-11-04  8:15 Juergen Gross
  2020-11-04  8:15 ` [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Juergen Gross @ 2020-11-04  8:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

This small series fixes two issues with spinlock debug code and adds
lock debug code to rwlocks in order to catch IRQ violations.

Juergen Gross (3):
  xen/spinlocks: spin_trylock with interrupts off is always fine
  xen/locking: harmonize spinlocks and rwlocks regarding preemption
  xen/rwlock: add check_lock() handling to rwlocks

 xen/common/spinlock.c      | 22 ++++++++++++++--------
 xen/include/xen/rwlock.h   | 15 +++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 31 insertions(+), 8 deletions(-)

-- 
2.26.2




* [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine
  2020-11-04  8:15 [PATCH v3 0/3] xen/locking: fix and enhance lock debugging Juergen Gross
@ 2020-11-04  8:15 ` Juergen Gross
  2020-11-04  9:36   ` Julien Grall
  2020-11-04  8:15 ` [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption Juergen Gross
  2020-11-04  8:15 ` [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
  2 siblings, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2020-11-04  8:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Even if a spinlock was taken with interrupts on before, calling
spin_trylock() with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it serves
no real purpose and can even lead to false crashes, e.g. when a lock
was taken correctly with interrupts enabled and _spin_is_locked() is
then called with interrupts off. If the lock is taken with the wrong
interrupt flags, this will be caught when taking the lock.
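As an illustration (a standalone model, not the Xen code; all names here
are invented for the sketch), the latching logic with the new "try"
parameter behaves like this: the IRQ safety of a lock is latched on first
use, and a trylock with interrupts off is exempt from the check because
it cannot block.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the lock-debug latching; not the Xen implementation. */
struct lock_debug_model {
    int irq_safe;            /* -1 = not latched yet, else 0 or 1 */
};

static bool irqs_enabled;    /* stand-in for local_irq_is_enabled() */

/* Returns false when the debug check would flag an inconsistency. */
static bool check_lock_model(struct lock_debug_model *d, bool try)
{
    bool irq_safe = !irqs_enabled;

    /* A trylock with interrupts off can't block, so it is always fine. */
    if (try && irq_safe)
        return true;

    if (d->irq_safe < 0)     /* latch IRQ safety on first use */
        d->irq_safe = irq_safe;

    return d->irq_safe == (int)irq_safe;
}
```

A lock first taken with interrupts enabled is latched as not IRQ-safe; a
later trylock with interrupts off passes, while a plain lock attempt with
interrupts off is flagged.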

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- corrected comment (Jan Beulich)
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..b4aaf6bce6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() with interrupts off is always fine, as this can't
+     * block and above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-- 
2.26.2




* [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption
  2020-11-04  8:15 [PATCH v3 0/3] xen/locking: fix and enhance lock debugging Juergen Gross
  2020-11-04  8:15 ` [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
@ 2020-11-04  8:15 ` Juergen Gross
  2020-11-04  9:38   ` Julien Grall
  2020-11-04  8:15 ` [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
  2 siblings, 1 reply; 8+ messages in thread
From: Juergen Gross @ 2020-11-04  8:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Spinlocks and rwlocks behave differently in their try variants regarding
preemption: rwlocks switch preemption off before testing the lock, while
spinlocks do so only after the first check.

Modify _spin_trylock() to disable preemption before testing whether the
lock is held, in order to be preemption-ready.
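The resulting ordering can be sketched as follows (a minimal standalone
model using a C11 test-and-set lock rather than Xen's ticket lock; the
names are illustrative): preemption goes off before the lock is examined,
and is re-enabled only on the failure path.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag lock_bit = ATOMIC_FLAG_INIT;
static int preempt_count;    /* stand-in for a per-CPU preempt counter */

static void preempt_disable(void) { ++preempt_count; }
static void preempt_enable(void)  { --preempt_count; }

/* Trylock with the harmonized ordering. */
static bool trylock_model(void)
{
    preempt_disable();                        /* off before the test */
    if (atomic_flag_test_and_set(&lock_bit)) {
        preempt_enable();                     /* lock busy: undo and fail */
        return false;
    }
    return true;         /* success: preemption stays disabled */
}

static void unlock_model(void)
{
    atomic_flag_clear(&lock_bit);
    preempt_enable();
}
```

A failed trylock leaves the preempt count unchanged, and a successful one
leaves preemption disabled until the matching unlock, mirroring what the
rwlock try variants already do.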

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- new patch
---
 xen/common/spinlock.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index b4aaf6bce6..f4eb50f030 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -240,13 +240,16 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
+    preempt_disable();
     check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
+    {
+        preempt_enable();
         return 0;
+    }
     new = old;
     new.tail++;
-    preempt_disable();
     if ( cmpxchg(&lock->tickets.head_tail,
                  old.head_tail, new.head_tail) != old.head_tail )
     {
-- 
2.26.2




* [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-04  8:15 [PATCH v3 0/3] xen/locking: fix and enhance lock debugging Juergen Gross
  2020-11-04  8:15 ` [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
  2020-11-04  8:15 ` [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption Juergen Gross
@ 2020-11-04  8:15 ` Juergen Gross
  2020-11-04  9:41   ` Julien Grall
  2020-11-04  9:42   ` Jan Beulich
  2 siblings, 2 replies; 8+ messages in thread
From: Juergen Gross @ 2020-11-04  8:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Checking whether a lock is used consistently with interrupts on or off
is beneficial for rwlocks, too.

So add check_lock() calls to the rwlock functions. For this purpose,
make check_lock() globally accessible.
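The placement matters because only the fast paths bypass the spinlock: the
rwlock slow paths go through spin_lock(), which already performs the check.
A toy model of the read-lock split (all names invented for this sketch):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int cnts;      /* reader count; a writer sets the high bit */
static int checks_seen;      /* counts check_lock() invocations */

#define WRITER_BIT 0x40000000

static void check_lock_model(void) { ++checks_seen; }

static void slowpath_model(void)
{
    /* The real slow path takes the internal spinlock via spin_lock(),
     * which already performs the consistency check. */
    check_lock_model();
    atomic_fetch_sub(&cnts, 1);    /* undo the optimistic increment */
}

static void read_lock_model(void)
{
    int c = atomic_fetch_add(&cnts, 1) + 1;
    if (!(c & WRITER_BIT)) {
        /* Fast path bypasses spin_lock(), so it must check itself. */
        check_lock_model();
        return;
    }
    slowpath_model();
}
```

Either way exactly one check runs per acquisition attempt, which is why
the patch adds explicit check_lock() calls only on the fast paths.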

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- call check_lock() unconditionally in try_lock variants (Jan Beulich)

V3:
- add comments (Jan Beulich)
- place check_lock() calls inside preempt-off region (Jan Beulich)

---
 xen/common/spinlock.c      |  3 +--
 xen/include/xen/rwlock.h   | 15 +++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index f4eb50f030..b90981bb27 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug, bool try)
+void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 427664037a..0cc9167715 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
@@ -87,7 +88,11 @@ static inline void _read_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( likely(_can_read_lock(cnts)) )
+    {
+        /* The slow path calls check_lock() via spin_lock(). */
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
@@ -162,7 +167,11 @@ static inline void _write_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
+    {
+        /* The slow path calls check_lock() via spin_lock(). */
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     queue_write_lock_slowpath(lock);
     /*
@@ -197,6 +206,7 @@ static inline int _write_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( unlikely(cnts) ||
          unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
@@ -328,6 +338,11 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
         /* Drop the read lock because we don't need it anymore. */
         read_unlock(&percpu_rwlock->rwlock);
     }
+    else
+    {
+        /* All other paths have implicit check_lock() calls via read_lock(). */
+        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
+    }
 }
 
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca13b600a0..9fa4e600c1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,11 +21,13 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
+#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
-- 
2.26.2




* Re: [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine
  2020-11-04  8:15 ` [PATCH v3 1/3] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
@ 2020-11-04  9:36   ` Julien Grall
  0 siblings, 0 replies; 8+ messages in thread
From: Julien Grall @ 2020-11-04  9:36 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
	Stefano Stabellini, Wei Liu

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Even if a spinlock was taken with interrupts on before, calling
> spin_trylock() with interrupts off is fine, as it can't block.
> 
> Add a bool parameter "try" to check_lock() for handling this case.
> 
> Remove the call of check_lock() from _spin_is_locked(), as it serves
> no real purpose and can even lead to false crashes, e.g. when a lock
> was taken correctly with interrupts enabled and _spin_is_locked() is
> then called with interrupts off. If the lock is taken with the wrong
> interrupt flags, this will be caught when taking the lock.
> 
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V2:
> - corrected comment (Jan Beulich)
> ---
>   xen/common/spinlock.c | 18 +++++++++++-------
>   1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index ce3106e2d3..b4aaf6bce6 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -13,7 +13,7 @@
>   
>   static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
>   
> -static void check_lock(union lock_debug *debug)
> +static void check_lock(union lock_debug *debug, bool try)
>   {
>       bool irq_safe = !local_irq_is_enabled();
>   
> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>        *
>        * To guard against this subtle bug we latch the IRQ safety of every
>        * spinlock in the system, on first use.
> +     *
> +     * A spin_trylock() with interrupts off is always fine, as this can't
> +     * block and above deadlock scenario doesn't apply.
>        */
> +    if ( try && irq_safe )
> +        return;
> +
>       if ( unlikely(debug->irq_safe != irq_safe) )
>       {
>           union lock_debug seen, new = { 0 };
> @@ -102,7 +108,7 @@ void spin_debug_disable(void)
>   
>   #else /* CONFIG_DEBUG_LOCKS */
>   
> -#define check_lock(l) ((void)0)
> +#define check_lock(l, t) ((void)0)
>   #define check_barrier(l) ((void)0)
>   #define got_lock(l) ((void)0)
>   #define rel_lock(l) ((void)0)
> @@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
>       spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
>       LOCK_PROFILE_VAR;
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, false);
>       preempt_disable();
>       tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
>                                              tickets.head_tail);
> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>   
>   int _spin_is_locked(spinlock_t *lock)
>   {
> -    check_lock(&lock->debug);
> -
>       /*
>        * Recursive locks may be locked by another CPU, yet we return
>        * "false" here, making this function suitable only for use in
> @@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
>   {
>       spinlock_tickets_t old, new;
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, true);
>       old = observe_lock(&lock->tickets);
>       if ( old.head != old.tail )
>           return 0;
> @@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
>       BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
>       BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
>   
> -    check_lock(&lock->debug);
> +    check_lock(&lock->debug, true);
>   
>       if ( likely(lock->recurse_cpu != cpu) )
>       {
> 

-- 
Julien Grall



* Re: [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption
  2020-11-04  8:15 ` [PATCH v3 2/3] xen/locking: harmonize spinlocks and rwlocks regarding preemption Juergen Gross
@ 2020-11-04  9:38   ` Julien Grall
  0 siblings, 0 replies; 8+ messages in thread
From: Julien Grall @ 2020-11-04  9:38 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
	Stefano Stabellini, Wei Liu

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Spinlocks and rwlocks behave differently in their try variants regarding
> preemption: rwlocks switch preemption off before testing the lock, while
> spinlocks do so only after the first check.
> 
> Modify _spin_trylock() to disable preemption before testing whether the
> lock is held, in order to be preemption-ready.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall



* Re: [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-04  8:15 ` [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
@ 2020-11-04  9:41   ` Julien Grall
  2020-11-04  9:42   ` Jan Beulich
  1 sibling, 0 replies; 8+ messages in thread
From: Julien Grall @ 2020-11-04  9:41 UTC (permalink / raw)
  To: Juergen Gross, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
	Stefano Stabellini, Wei Liu

Hi Juergen,

On 04/11/2020 08:15, Juergen Gross wrote:
> Checking whether a lock is used consistently with interrupts on or off
> is beneficial for rwlocks, too.
> 
> So add check_lock() calls to the rwlock functions. For this purpose,
> make check_lock() globally accessible.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall



* Re: [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-04  8:15 ` [PATCH v3 3/3] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
  2020-11-04  9:41   ` Julien Grall
@ 2020-11-04  9:42   ` Jan Beulich
  1 sibling, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2020-11-04  9:42 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 04.11.2020 09:15, Juergen Gross wrote:
> Checking whether a lock is used consistently with interrupts on or off
> is beneficial for rwlocks, too.
> 
> So add check_lock() calls to the rwlock functions. For this purpose,
> make check_lock() globally accessible.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


