xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/2] xen/locking: fix and enhance lock debugging
@ 2020-11-02 13:12 Juergen Gross
  2020-11-02 13:12 ` [PATCH v2 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
  2020-11-02 13:12 ` [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
  0 siblings, 2 replies; 9+ messages in thread
From: Juergen Gross @ 2020-11-02 13:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

This small series fixes two issues with spinlock debug code and adds
lock debug code to rwlocks in order to catch IRQ violations.

Juergen Gross (2):
  xen/spinlocks: spin_trylock with interrupts off is always fine
  xen/rwlock: add check_lock() handling to rwlocks

 xen/common/spinlock.c      | 17 ++++++++++-------
 xen/include/xen/rwlock.h   | 11 +++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 23 insertions(+), 7 deletions(-)

-- 
2.26.2




* [PATCH v2 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine
  2020-11-02 13:12 [PATCH v2 0/2] xen/locking: fix and enhance lock debugging Juergen Gross
@ 2020-11-02 13:12 ` Juergen Gross
  2020-11-02 13:12 ` [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
  1 sibling, 0 replies; 9+ messages in thread
From: Juergen Gross @ 2020-11-02 13:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Even if a spinlock was taken with interrupts on before, calling
spin_trylock() on it with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with the wrong interrupt flags, this will be caught when taking
the lock.
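
As an illustration, the kind of (hypothetical, not part of this patch)
caller this permits looks like:

    /*
     * Illustration only: example_lock is normally taken with interrupts
     * enabled elsewhere.  Trying it with interrupts off can't deadlock,
     * as spin_trylock() never waits for the current lock holder.
     */
    static DEFINE_SPINLOCK(example_lock);

    void example_irq_off_path(void)
    {
        unsigned long flags;

        local_irq_save(flags);
        if ( spin_trylock(&example_lock) )
        {
            /* ... short critical section ... */
            spin_unlock(&example_lock);
        }
        local_irq_restore(flags);
    }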

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- corrected comment (Jan Beulich)
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..b4aaf6bce6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() with interrupts off is always fine, as this can't
+     * block and above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-- 
2.26.2




* [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-02 13:12 [PATCH v2 0/2] xen/locking: fix and enhance lock debugging Juergen Gross
  2020-11-02 13:12 ` [PATCH v2 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
@ 2020-11-02 13:12 ` Juergen Gross
  2020-11-03  9:02   ` Jan Beulich
  1 sibling, 1 reply; 9+ messages in thread
From: Juergen Gross @ 2020-11-02 13:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson,
	Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu

Checking whether a lock is used consistently with interrupts on or off
is beneficial for rwlocks, too.

So add check_lock() calls to rwlock functions. For this purpose make
check_lock() globally accessible.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- call check_lock() unconditionally in try_lock variants (Jan Beulich)
---
 xen/common/spinlock.c      |  3 +--
 xen/include/xen/rwlock.h   | 11 +++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index b4aaf6bce6..405322c6b8 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug, bool try)
+void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 427664037a..94496a0f53 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
@@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
          */
         if ( likely(_can_read_lock(cnts)) )
             return 1;
+
         atomic_sub(_QR_BIAS, &lock->cnts);
     }
     preempt_enable();
@@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( likely(_can_read_lock(cnts)) )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
@@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     queue_write_lock_slowpath(lock);
     /*
@@ -197,6 +205,7 @@ static inline int _write_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
+    check_lock(&lock->lock.debug, true);
     cnts = atomic_read(&lock->cnts);
     if ( unlikely(cnts) ||
          unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
@@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
         /* Drop the read lock because we don't need it anymore. */
         read_unlock(&percpu_rwlock->rwlock);
     }
+    else
+        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
 }
 
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca13b600a0..9fa4e600c1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,11 +21,13 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
+#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
-- 
2.26.2




* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-02 13:12 ` [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
@ 2020-11-03  9:02   ` Jan Beulich
  2020-11-03  9:22     ` Jürgen Groß
  0 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2020-11-03  9:02 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 02.11.2020 14:12, Juergen Gross wrote:
> --- a/xen/include/xen/rwlock.h
> +++ b/xen/include/xen/rwlock.h
> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>      u32 cnts;
>  
>      preempt_disable();
> +    check_lock(&lock->lock.debug, true);
>      cnts = atomic_read(&lock->cnts);
>      if ( likely(_can_read_lock(cnts)) )
>      {

I'm sorry for being picky, but this still isn't matching
_spin_trylock(). Perhaps the difference is really benign, but
there the check sits ahead of preempt_disable(). (It has a
clear reason to be this way there, but without a clear reason
for things to be different here, I think matching ordering
may help, at least to avoid questions.)

> @@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
>           */
>          if ( likely(_can_read_lock(cnts)) )
>              return 1;
> +
>          atomic_sub(_QR_BIAS, &lock->cnts);
>      }
>      preempt_enable();

Stray change?

> @@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( likely(_can_read_lock(cnts)) )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      /* The slowpath will decrement the reader count, if necessary. */
>      queue_read_lock_slowpath(lock);
> @@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      queue_write_lock_slowpath(lock);
>      /*

Maybe also for these two, but likely more importantly for ...

> @@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
>          /* Drop the read lock because we don't need it anymore. */
>          read_unlock(&percpu_rwlock->rwlock);
>      }
> +    else
> +        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
>  }

... this one a brief comment may be warranted to clarify why
the call sits here rather than at the top?

Jan



* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-03  9:02   ` Jan Beulich
@ 2020-11-03  9:22     ` Jürgen Groß
  2020-11-03 10:04       ` Jan Beulich
  0 siblings, 1 reply; 9+ messages in thread
From: Jürgen Groß @ 2020-11-03  9:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.20 10:02, Jan Beulich wrote:
> On 02.11.2020 14:12, Juergen Gross wrote:
>> --- a/xen/include/xen/rwlock.h
>> +++ b/xen/include/xen/rwlock.h
>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>       u32 cnts;
>>   
>>       preempt_disable();
>> +    check_lock(&lock->lock.debug, true);
>>       cnts = atomic_read(&lock->cnts);
>>       if ( likely(_can_read_lock(cnts)) )
>>       {
> 
> I'm sorry for being picky, but this still isn't matching
> _spin_trylock(). Perhaps the difference is really benign, but
> there the check sits ahead of preempt_disable(). (It has a
> clear reason to be this way there, but without a clear reason
> for things to be differently here, I think matching ordering
> may help, at least to avoid questions.)

I think this is more an optimization opportunity: I'd rather move the
preempt_disable() into the first if clause, as there is no need to
disable preemption in case the first read of the lock already leads
to failure acquiring the lock.

If you want I can prepend a patch doing that optimization.
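
A rough sketch of that reordering (untested; assuming the reader count
is still taken via atomic_add_return(_QR_BIAS, ...) as today):

    static inline int _read_trylock(rwlock_t *lock)
    {
        u32 cnts;

        check_lock(&lock->lock.debug, true);
        cnts = atomic_read(&lock->cnts);
        if ( likely(_can_read_lock(cnts)) )
        {
            /* Disable preemption only once acquiring looks possible. */
            preempt_disable();
            cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
            if ( likely(_can_read_lock(cnts)) )
                return 1;

            atomic_sub(_QR_BIAS, &lock->cnts);
            preempt_enable();
        }

        return 0;
    }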

> 
>> @@ -66,6 +67,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>            */
>>           if ( likely(_can_read_lock(cnts)) )
>>               return 1;
>> +
>>           atomic_sub(_QR_BIAS, &lock->cnts);
>>       }
>>       preempt_enable();
> 
> Stray change?

Oh yes, a leftover from the old positioning of check_lock().

> 
>> @@ -87,7 +89,10 @@ static inline void _read_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( likely(_can_read_lock(cnts)) )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       /* The slowpath will decrement the reader count, if necessary. */
>>       queue_read_lock_slowpath(lock);
>> @@ -162,7 +167,10 @@ static inline void _write_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       queue_write_lock_slowpath(lock);
>>       /*
> 
> Maybe also for these two, but likely more importantly for ...
> 
>> @@ -328,6 +337,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
>>           /* Drop the read lock because we don't need it anymore. */
>>           read_unlock(&percpu_rwlock->rwlock);
>>       }
>> +    else
>> +        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
>>   }
> 
> ... this one a brief comment may be warranted to clarify why
> the call sits here rather than at the top?

Okay.
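
Something along these lines, perhaps (exact wording to be refined):

    else
        /*
         * The branch above performs its check_lock() implicitly via
         * read_lock(), so an explicit call is only needed here.
         */
        check_lock(&percpu_rwlock->rwlock.lock.debug, false);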


Juergen



* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-03  9:22     ` Jürgen Groß
@ 2020-11-03 10:04       ` Jan Beulich
  2020-11-03 10:17         ` Jürgen Groß
  2020-11-03 10:25         ` Julien Grall
  0 siblings, 2 replies; 9+ messages in thread
From: Jan Beulich @ 2020-11-03 10:04 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.2020 10:22, Jürgen Groß wrote:
> On 03.11.20 10:02, Jan Beulich wrote:
>> On 02.11.2020 14:12, Juergen Gross wrote:
>>> --- a/xen/include/xen/rwlock.h
>>> +++ b/xen/include/xen/rwlock.h
>>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>>       u32 cnts;
>>>   
>>>       preempt_disable();
>>> +    check_lock(&lock->lock.debug, true);
>>>       cnts = atomic_read(&lock->cnts);
>>>       if ( likely(_can_read_lock(cnts)) )
>>>       {
>>
>> I'm sorry for being picky, but this still isn't matching
>> _spin_trylock(). Perhaps the difference is really benign, but
>> there the check sits ahead of preempt_disable(). (It has a
>> clear reason to be this way there, but without a clear reason
> for things to be different here, I think matching ordering
>> may help, at least to avoid questions.)
> 
> I think this is more an optimization opportunity: I'd rather move the
> preempt_disable() into the first if clause, as there is no need to
> disable preemption in case the first read of the lock already leads
> to failure acquiring the lock.
> 
> If you want I can prepend a patch doing that optimization.

I'd appreciate you doing so, yet I'd like to point out that
then the same question remains for _write_trylock(). Perhaps
a similar transformation is possible there, but it'll at
least be more code churn. Which of course isn't a reason not
to go this route there too.

This said - wouldn't what you suggest be wrong if we had
actual preemption in the hypervisor? Preemption hitting
between e.g. these two lines

    cnts = atomic_read(&lock->cnts);
    if ( likely(_can_read_lock(cnts)) )

would not yield the intended result, would it? (It wouldn't
affect correctness afaics, because the caller has to be
prepared anyway that the attempt fails, but the amount of
effectively false negatives would grow, as would the number
of cases where the slower path is taken for no reason.)

Question therefore is how much we care about keeping code
ready for "real" preemption, when we have ample other places
that would need changing first, before such could be enabled.

Jan



* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-03 10:04       ` Jan Beulich
@ 2020-11-03 10:17         ` Jürgen Groß
  2020-11-03 10:30           ` Jan Beulich
  2020-11-03 10:25         ` Julien Grall
  1 sibling, 1 reply; 9+ messages in thread
From: Jürgen Groß @ 2020-11-03 10:17 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.20 11:04, Jan Beulich wrote:
> On 03.11.2020 10:22, Jürgen Groß wrote:
>> On 03.11.20 10:02, Jan Beulich wrote:
>>> On 02.11.2020 14:12, Juergen Gross wrote:
>>>> --- a/xen/include/xen/rwlock.h
>>>> +++ b/xen/include/xen/rwlock.h
>>>> @@ -56,6 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
>>>>        u32 cnts;
>>>>    
>>>>        preempt_disable();
>>>> +    check_lock(&lock->lock.debug, true);
>>>>        cnts = atomic_read(&lock->cnts);
>>>>        if ( likely(_can_read_lock(cnts)) )
>>>>        {
>>>
>>> I'm sorry for being picky, but this still isn't matching
>>> _spin_trylock(). Perhaps the difference is really benign, but
>>> there the check sits ahead of preempt_disable(). (It has a
>>> clear reason to be this way there, but without a clear reason
>>> for things to be different here, I think matching ordering
>>> may help, at least to avoid questions.)
>>
>> I think this is more an optimization opportunity: I'd rather move the
>> preempt_disable() into the first if clause, as there is no need to
>> disable preemption in case the first read of the lock already leads
>> to failure acquiring the lock.
>>
>> If you want I can prepend a patch doing that optimization.
> 
> I'd appreciate you doing so, yet I'd like to point out that
> then the same question remains for _write_trylock(). Perhaps
> a similar transformation is possible there, but it'll at
> least be more code churn. Which of course isn't a reason not
> to go this route there too.

Shouldn't be very hard. It would just need to split the if clause
into two clauses.
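
For instance (untested sketch, keeping _write_lock_val() and the
cmpxchg as they are today):

    static inline int _write_trylock(rwlock_t *lock)
    {
        u32 cnts;

        check_lock(&lock->lock.debug, true);
        cnts = atomic_read(&lock->cnts);
        if ( unlikely(cnts) )
            return 0;

        preempt_disable();
        if ( unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
        {
            preempt_enable();
            return 0;
        }

        return 1;
    }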

> This said - wouldn't what you suggest be wrong if we had
> actual preemption in the hypervisor? Preemption hitting
> between e.g. these two lines
> 
>      cnts = atomic_read(&lock->cnts);
>      if ( likely(_can_read_lock(cnts)) )
> 
> would not yield the intended result, would it? (It wouldn't
> affect correctness afaics, because the caller has to be
> prepared anyway that the attempt fails, but the amount of
> effectively false negatives would grow, as would the number
> of cases where the slower path is taken for no reason.)

And this in turn would hit _spin_trylock() the same way.

IMO we should harmonize all the trylock variants in this regard:
either they have as small a preemption-disabled section as possible,
or they include the initial test of whether success is possible at
all in that section.

> Question therefore is how much we care about keeping code
> ready for "real" preemption, when we have ample other places
> that would need changing first, before such could be enabled.

Yes. And depending on the answer and the resulting route to go (wide
or narrow no-preemption section), either the rwlock or the spinlock
trylock variants should be adapted.


Juergen



* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-03 10:04       ` Jan Beulich
  2020-11-03 10:17         ` Jürgen Groß
@ 2020-11-03 10:25         ` Julien Grall
  1 sibling, 0 replies; 9+ messages in thread
From: Julien Grall @ 2020-11-03 10:25 UTC (permalink / raw)
  To: Jan Beulich, Jürgen Groß
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Stefano Stabellini,
	Wei Liu, xen-devel

Hi Jan,

On 03/11/2020 10:04, Jan Beulich wrote:
> On 03.11.2020 10:22, Jürgen Groß wrote:
>> On 03.11.20 10:02, Jan Beulich wrote:
>>> On 02.11.2020 14:12, Juergen Gross wrote:
> Question therefore is how much we care about keeping code
> ready for "real" preemption, when we have ample other places
> that would need changing first, before such could be enabled

The question we should ask ourselves is whether we think one would want
to use preemption in Xen.

Some of the emulation in Xen on Arm (e.g. ITS, SMMUv3, set/way) would 
have been easier to implement if the code were preemptible.

I also hear from time to time of stakeholders asking for preemptible
spinlocks (this is useful for RT).

Therefore, I think there is value in keeping the code as preempt-ready
as possible.

Cheers,

-- 
Julien Grall



* Re: [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks
  2020-11-03 10:17         ` Jürgen Groß
@ 2020-11-03 10:30           ` Jan Beulich
  0 siblings, 0 replies; 9+ messages in thread
From: Jan Beulich @ 2020-11-03 10:30 UTC (permalink / raw)
  To: Jürgen Groß
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, xen-devel

On 03.11.2020 11:17, Jürgen Groß wrote:
> On 03.11.20 11:04, Jan Beulich wrote:
>> This said - wouldn't what you suggest be wrong if we had
>> actual preemption in the hypervisor? Preemption hitting
>> between e.g. these two lines
>>
>>      cnts = atomic_read(&lock->cnts);
>>      if ( likely(_can_read_lock(cnts)) )
>>
>> would not yield the intended result, would it? (It wouldn't
>> affect correctness afaics, because the caller has to be
>> prepared anyway that the attempt fails, but the amount of
>> effectively false negatives would grow, as would the number
>> of cases where the slower path is taken for no reason.)
> 
> And this in turn would hit _spin_trylock() the same way.

True.

> IMO we should harmonize all the trylock variants in this regard:
> either they have as small a preemption-disabled section as possible,
> or they include the initial test of whether success is possible at
> all in that section.
> 
>> Question therefore is how much we care about keeping code
>> ready for "real" preemption, when we have ample other places
>> that would need changing first, before such could be enabled.
> 
> Yes. And depending on the answer and the resulting route to go (wide
> or narrow no-preemption section), either the rwlock or the spinlock
> trylock variants should be adapted.

Well, personally I'd slightly prefer the adjustment as you suggested,
but Julien's subsequent reply points in the other direction.

Jan



Thread overview: 9+ messages
2020-11-02 13:12 [PATCH v2 0/2] xen/locking: fix and enhance lock debugging Juergen Gross
2020-11-02 13:12 ` [PATCH v2 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine Juergen Gross
2020-11-02 13:12 ` [PATCH v2 2/2] xen/rwlock: add check_lock() handling to rwlocks Juergen Gross
2020-11-03  9:02   ` Jan Beulich
2020-11-03  9:22     ` Jürgen Groß
2020-11-03 10:04       ` Jan Beulich
2020-11-03 10:17         ` Jürgen Groß
2020-11-03 10:30           ` Jan Beulich
2020-11-03 10:25         ` Julien Grall
