* [Xen-devel] [PATCH 0/2] xen/locks: fix preempt disabling in lock handling

From: Juergen Gross <jgross@suse.com>
Date: 2020-03-13 8:05 UTC
To: xen-devel
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
    Ian Jackson, George Dunlap, Jan Beulich

Xen's rwlocks don't disable preemption at all, while spinlocks do so only
after obtaining the lock. While not really critical, this is wrong.

This series fixes that.

Juergen Gross (2):
  xen/rwlocks: call preempt_disable() when taking a rwlock
  xen/spinlocks: fix placement of preempt_[dis|en]able()

 xen/common/spinlock.c    |  9 ++++++---
 xen/include/xen/rwlock.h | 18 +++++++++++++++++-
 2 files changed, 23 insertions(+), 4 deletions(-)

-- 
2.16.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
* [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Juergen Gross <jgross@suse.com>
Date: 2020-03-13 8:05 UTC
To: xen-devel
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
    Ian Jackson, George Dunlap, Jan Beulich

Similar to spinlocks, preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/include/xen/rwlock.h | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
 #define __RWLOCK_H__
 
 #include <xen/percpu.h>
+#include <xen/preempt.h>
 #include <xen/smp.h>
 #include <xen/spinlock.h>
 
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
+        preempt_disable();
         cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
         if ( likely(_can_read_lock(cnts)) )
             return 1;
         atomic_sub(_QR_BIAS, &lock->cnts);
+        preempt_enable();
     }
     return 0;
 }
@@ -73,6 +76,7 @@ static inline void _read_lock(rwlock_t *lock)
 {
     u32 cnts;
 
+    preempt_disable();
     cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
         return;
@@ -106,6 +110,7 @@ static inline void _read_unlock(rwlock_t *lock)
      * Atomically decrement the reader count
      */
     atomic_sub(_QR_BIAS, &lock->cnts);
+    preempt_enable();
 }
 
 static inline void _read_unlock_irq(rwlock_t *lock)
@@ -137,6 +142,7 @@ static inline unsigned int _write_lock_val(void)
 static inline void _write_lock(rwlock_t *lock)
 {
     /* Optimize for the unfair lock case where the fair flag is 0. */
+    preempt_disable();
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
         return;
 
@@ -172,13 +178,21 @@ static inline int _write_trylock(rwlock_t *lock)
     if ( unlikely(cnts) )
         return 0;
 
-    return likely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0);
+    preempt_disable();
+    if ( unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
+    {
+        preempt_enable();
+        return 0;
+    }
+
+    return 1;
 }
 
 static inline void _write_unlock(rwlock_t *lock)
 {
     ASSERT(_is_write_locked_by_me(atomic_read(&lock->cnts)));
     atomic_and(~(_QW_CPUMASK | _QW_WMASK), &lock->cnts);
+    preempt_enable();
 }
 
 static inline void _write_unlock_irq(rwlock_t *lock)
@@ -274,6 +288,7 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
     }
 
     /* Indicate this cpu is reading. */
+    preempt_disable();
     this_cpu_ptr(per_cpudata) = percpu_rwlock;
     smp_mb();
     /* Check if a writer is waiting. */
@@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
         return;
     }
     this_cpu_ptr(per_cpudata) = NULL;
+    preempt_enable();
     smp_wmb();
 }
-- 
2.16.4
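The invariant the patch establishes can be sketched as a small, single-threaded C model: the preemption count must end up raised exactly when a trylock succeeds, and must be left untouched when it fails. Everything below — the lock-word layout, `QR_BIAS`, `can_read_lock()`, the plain integer preempt counter — is an illustrative simplification, not Xen's actual implementation:

```c
#include <assert.h>
#include <stdatomic.h>

#define QR_BIAS 0x100u      /* reader increment, kept above the writer bits */
#define QW_MASK 0x0ffu      /* writer bits in the low byte (illustrative) */

static atomic_uint cnts;    /* lock word: reader count plus writer flags */
static int preempt_count;   /* stand-in for the per-CPU preempt count */

static void preempt_disable(void) { ++preempt_count; }
static void preempt_enable(void)  { --preempt_count; }

static int can_read_lock(unsigned int c) { return !(c & QW_MASK); }

/* Mirrors the shape of _read_trylock() after the patch. */
static int read_trylock(void)
{
    unsigned int c = atomic_load(&cnts);

    if ( can_read_lock(c) )
    {
        preempt_disable();                  /* before touching the lock word */
        c = atomic_fetch_add(&cnts, QR_BIAS) + QR_BIAS;
        if ( can_read_lock(c) )
            return 1;                       /* success: count stays raised */
        atomic_fetch_sub(&cnts, QR_BIAS);   /* undo the reader bump ... */
        preempt_enable();                   /* ... and rebalance on failure */
    }
    return 0;
}

static void read_unlock(void)
{
    atomic_fetch_sub(&cnts, QR_BIAS);
    preempt_enable();                       /* balances the disable in lock */
}
```

The point of the shape is that every return path leaves the counter balanced: +1 while the lock is held, net 0 otherwise.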
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Jan Beulich
Date: 2020-03-13 8:48 UTC
To: Juergen Gross
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, xen-devel

On 13.03.2020 09:05, Juergen Gross wrote:
> Similar to spinlocks, preemption should be disabled while holding a
> rwlock.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Just one note/question:

> @@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
>          return;
>      }
>      this_cpu_ptr(per_cpudata) = NULL;
> +    preempt_enable();
>      smp_wmb();
>  }

It would seem more logical to me to insert this after the smp_wmb().
Thoughts? I'll be happy to give my R-b once we've settled on this.

Jan
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Julien Grall
Date: 2020-03-13 10:02 UTC
To: Jan Beulich, Juergen Gross
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    xen-devel

On 13/03/2020 08:48, Jan Beulich wrote:
> On 13.03.2020 09:05, Juergen Gross wrote:
>> Similar to spinlocks, preemption should be disabled while holding a
>> rwlock.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Just one note/question:
>
>> @@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
>>          return;
>>      }
>>      this_cpu_ptr(per_cpudata) = NULL;
>> +    preempt_enable();
>>      smp_wmb();
>>  }
>
> It would seem more logical to me to insert this after the smp_wmb().

+1

> Thoughts? I'll be happy to give my R-b once we've settled on this.

-- 
Julien Grall
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Julien Grall
Date: 2020-03-13 10:02 UTC
To: Juergen Gross, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:
> Similar to spinlocks, preemption should be disabled while holding a
> rwlock.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/include/xen/rwlock.h | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
[...]
> @@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
>      cnts = atomic_read(&lock->cnts);
>      if ( likely(_can_read_lock(cnts)) )
>      {

If you get preempted here, then it means the check below is likely going
to fail. So I think it would be best to disable preemption before, to
give it more chance to succeed.

> +        preempt_disable();
>          cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
>          if ( likely(_can_read_lock(cnts)) )
>              return 1;
>          atomic_sub(_QR_BIAS, &lock->cnts);
>          preempt_enable();
>      }
>      return 0;
>  }
>
[...]
> @@ -172,13 +178,21 @@ static inline int _write_trylock(rwlock_t *lock)
>      if ( unlikely(cnts) )
>          return 0;
>
> -    return likely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0);
> +    preempt_disable();

Similar remark as for the read_trylock().

> +    if ( unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
> +    {
> +        preempt_enable();
> +        return 0;
> +    }
> +
> +    return 1;
>  }
>
[...]

Cheers,

-- 
Julien Grall
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Jürgen Groß
Date: 2020-03-13 10:15 UTC
To: Julien Grall, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

On 13.03.20 11:02, Julien Grall wrote:
> Hi Juergen,
>
> On 13/03/2020 08:05, Juergen Gross wrote:
>> Similar to spinlocks, preemption should be disabled while holding a
>> rwlock.
>>
[...]
>> @@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
>>      cnts = atomic_read(&lock->cnts);
>>      if ( likely(_can_read_lock(cnts)) )
>>      {
>
> If you get preempted here, then it means the check below is likely going
> to fail. So I think it would be best to disable preemption before, to
> give it more chance to succeed.

As the probability of preemption at this very point should be much lower
than that of hitting a held lock, I think that is optimizing the wrong
path. I'm not opposed to doing the modification you are requesting, but
would like to hear a second opinion on that topic, especially as I'd
need to add another preempt_enable() call when following your advice.

Juergen
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Jan Beulich
Date: 2020-03-13 10:31 UTC
To: Jürgen Groß, Julien Grall
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    xen-devel

On 13.03.2020 11:15, Jürgen Groß wrote:
> On 13.03.20 11:02, Julien Grall wrote:
>> On 13/03/2020 08:05, Juergen Gross wrote:
[...]
>>> @@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
>>>      cnts = atomic_read(&lock->cnts);
>>>      if ( likely(_can_read_lock(cnts)) )
>>>      {
>>
>> If you get preempted here, then it means the check below is likely going
>> to fail. So I think it would be best to disable preemption before, to
>> give it more chance to succeed.
>
> As the probability of preemption at this very point should be much lower
> than that of hitting a held lock, I think that is optimizing the wrong
> path. I'm not opposed to doing the modification you are requesting, but
> would like to hear a second opinion on that topic, especially as I'd
> need to add another preempt_enable() call when following your advice.

I can see arguments for both placements, and hence I'm fine either way.

Jan
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Julien Grall
Date: 2020-03-13 10:40 UTC
To: Jürgen Groß, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

Hi Juergen,

On 13/03/2020 10:15, Jürgen Groß wrote:
> On 13.03.20 11:02, Julien Grall wrote:
>> On 13/03/2020 08:05, Juergen Gross wrote:
[...]
>>> @@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
>>>      cnts = atomic_read(&lock->cnts);
>>>      if ( likely(_can_read_lock(cnts)) )
>>>      {
>>
>> If you get preempted here, then it means the check below is likely going
>> to fail. So I think it would be best to disable preemption before, to
>> give it more chance to succeed.
>
> As the probability of preemption at this very point should be much lower
> than that of hitting a held lock, I think that is optimizing the wrong
> path.

Why so? Lock contention should be fairly limited, or you already have a
problem on your system. So preemption is the more likely event.

> I'm not opposed to doing the modification you are requesting, but would
> like to hear a second opinion on that topic, especially as I'd need to
> add another preempt_enable() call when following your advice.

I don't really see the problem with adding a new preempt_enable() call.
But the code can also be reworked to have only one call...

Cheers,

-- 
Julien Grall
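One way the single-call rework mentioned in this exchange could look, sketched as self-contained C — the lock-word encoding, `WRITE_LOCK_VAL`, and the plain preempt counter are hypothetical simplifications, and this is only a possible shape, not the actual follow-up patch. Preemption is disabled before the optimistic read, and a single `preempt_enable()` covers both failure paths:

```c
#include <assert.h>
#include <stdatomic.h>

#define WRITE_LOCK_VAL 0x1u   /* illustrative writer marker, not Xen's */

static atomic_uint cnts;      /* lock word: 0 means free */
static int preempt_count;     /* stand-in for the per-CPU preempt count */

static void preempt_disable(void) { ++preempt_count; }
static void preempt_enable(void)  { --preempt_count; }

/* Reworked write_trylock(): disable preemption up front, one enable path. */
static int write_trylock(void)
{
    unsigned int expected = 0;

    preempt_disable();                /* before the optimistic check */
    if ( atomic_load(&cnts) == 0 &&
         atomic_compare_exchange_strong(&cnts, &expected, WRITE_LOCK_VAL) )
        return 1;                     /* locked: preemption stays disabled */

    preempt_enable();                 /* one enable covers both failure cases */
    return 0;
}

static void write_unlock(void)
{
    atomic_store(&cnts, 0);
    preempt_enable();                 /* balances the disable in trylock */
}
```

The cost of this shape is exactly the trade-off being debated: a failed optimistic check now pays for a disable/enable pair, in exchange for narrowing the window between the check and the cmpxchg.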
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Jürgen Groß
Date: 2020-03-13 11:23 UTC
To: Julien Grall, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

On 13.03.20 11:40, Julien Grall wrote:
> Hi Juergen,
>
> On 13/03/2020 10:15, Jürgen Groß wrote:
>> On 13.03.20 11:02, Julien Grall wrote:
>>> If you get preempted here, then it means the check below is likely going
>>> to fail. So I think it would be best to disable preemption before, to
>>> give it more chance to succeed.
>>
>> As the probability of preemption at this very point should be much lower
>> than that of hitting a held lock, I think that is optimizing the wrong
>> path.
>
> Why so? Lock contention should be fairly limited, or you already have a
> problem on your system. So preemption is the more likely event.

Today the probability of preemption is 0.

Even with preemption added, the probability of being preempted in a
sequence of about a dozen instructions is _very_ low.

>> I'm not opposed to doing the modification you are requesting, but would
>> like to hear a second opinion on that topic, especially as I'd need to
>> add another preempt_enable() call when following your advice.
>
> I don't really see the problem with adding a new preempt_enable() call.
> But the code can also be reworked to have only one call...

It's the dynamic path I'm speaking of. Accessing a local CPU variable is
not that cheap, and in case we add preemption in the future
preempt_enable() will become even more expensive.

Juergen
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Julien Grall
Date: 2020-03-13 11:39 UTC
To: Jürgen Groß, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

Hi Juergen,

On 13/03/2020 11:23, Jürgen Groß wrote:
> On 13.03.20 11:40, Julien Grall wrote:
>> Why so? Lock contention should be fairly limited, or you already have a
>> problem on your system. So preemption is the more likely event.
>
> Today the probability of preemption is 0.

I am aware of that...

> Even with preemption added, the probability of being preempted in a
> sequence of about a dozen instructions is _very_ low.

... but I am not convinced of the low probability here.

>> I don't really see the problem with adding a new preempt_enable() call.
>> But the code can also be reworked to have only one call...
>
> It's the dynamic path I'm speaking of. Accessing a local CPU variable is
> not that cheap, and in case we add preemption in the future
> preempt_enable() will become even more expensive.

Do you realize that the lock is most likely uncontended? And if it were
contended, the caller would likely not spin in a tight loop, otherwise
it would have used read_lock().

So until you prove me otherwise (with numbers), this is a
micro-optimization that is not going to be visible in any workload.

Cheers,

-- 
Julien Grall
* Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

From: Jürgen Groß
Date: 2020-03-13 12:05 UTC
To: Julien Grall, xen-devel
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    Jan Beulich

On 13.03.20 12:39, Julien Grall wrote:
> Hi Juergen,
>
> On 13/03/2020 11:23, Jürgen Groß wrote:
>> It's the dynamic path I'm speaking of. Accessing a local CPU variable is
>> not that cheap, and in case we add preemption in the future
>> preempt_enable() will become even more expensive.
>
> Do you realize that the lock is most likely uncontended? And if it were
> contended, the caller would likely not spin in a tight loop, otherwise
> it would have used read_lock().
>
> So until you prove me otherwise (with numbers), this is a
> micro-optimization that is not going to be visible in any workload.

Fine, since you feel so strongly about it, I'll change it.

Juergen
* [Xen-devel] [PATCH 2/2] xen/spinlocks: fix placement of preempt_[dis|en]able()

From: Juergen Gross <jgross@suse.com>
Date: 2020-03-13 8:05 UTC
To: xen-devel
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper,
    Ian Jackson, George Dunlap, Jan Beulich

In case Xen ever gains preemption support, the spinlock code's
preempt_disable() and preempt_enable() calls should be placed outside of
the locked section.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/spinlock.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 344981c54a..f05fb068cd 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -160,6 +160,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     LOCK_PROFILE_VAR;
 
     check_lock(&lock->debug);
+    preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
     while ( tickets.tail != observe_head(&lock->tickets) )
@@ -171,7 +172,6 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     }
     got_lock(&lock->debug);
     LOCK_PROFILE_GOT;
-    preempt_disable();
     arch_lock_acquire_barrier();
 }
 
@@ -199,10 +199,10 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
 void _spin_unlock(spinlock_t *lock)
 {
     arch_lock_release_barrier();
-    preempt_enable();
     LOCK_PROFILE_REL;
     rel_lock(&lock->debug);
     add_sized(&lock->tickets.head, 1);
+    preempt_enable();
     arch_lock_signal();
 }
 
@@ -242,15 +242,18 @@ int _spin_trylock(spinlock_t *lock)
         return 0;
     new = old;
     new.tail++;
+    preempt_disable();
     if ( cmpxchg(&lock->tickets.head_tail, old.head_tail,
                  new.head_tail) != old.head_tail )
+    {
+        preempt_enable();
         return 0;
+    }
     got_lock(&lock->debug);
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     if (lock->profile)
         lock->profile->time_locked = NOW();
 #endif
-    preempt_disable();
     /*
      * cmpxchg() is a full barrier so no need for an
      * arch_lock_acquire_barrier().
-- 
2.16.4
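The ordering this patch establishes can be modelled with a minimal single-threaded ticket lock in C: disable preemption before taking a ticket, re-enable it only after the head has been advanced in unlock, and rebalance the count on a failed trylock. The 16/16 head/tail split, the helper names, and the plain preempt counter below are illustrative, not Xen's actual spinlock_t:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_uint head_tail;   /* low 16 bits: head, high 16 bits: tail */
static int preempt_count;       /* stand-in for the per-CPU preempt count */

static void preempt_disable(void) { ++preempt_count; }
static void preempt_enable(void)  { --preempt_count; }

static void spin_lock(void)
{
    unsigned int old;

    preempt_disable();                        /* before joining the queue */
    old = atomic_fetch_add(&head_tail, 1u << 16);
    while ( (old >> 16) != (atomic_load(&head_tail) & 0xffff) )
        ;                                     /* spin until our ticket is served */
}

static void spin_unlock(void)
{
    atomic_fetch_add(&head_tail, 1);          /* advance head first ... */
    preempt_enable();                         /* ... then re-enable preemption */
}

static int spin_trylock(void)
{
    unsigned int old = atomic_load(&head_tail);

    if ( (old >> 16) != (old & 0xffff) )
        return 0;                             /* already held */
    preempt_disable();                        /* before the cmpxchg */
    if ( !atomic_compare_exchange_strong(&head_tail, &old,
                                         old + (1u << 16)) )
    {
        preempt_enable();                     /* rebalance on failure */
        return 0;
    }
    return 1;
}
```

As in patch 1, the invariant is that the preempt count is +1 exactly while the lock is held, and every failure path restores it before returning.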
* Re: [Xen-devel] [PATCH 2/2] xen/spinlocks: fix placement of preempt_[dis|en]able()

From: Jan Beulich
Date: 2020-03-13 8:55 UTC
To: Juergen Gross
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, xen-devel

On 13.03.2020 09:05, Juergen Gross wrote:
> @@ -199,10 +199,10 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
>  void _spin_unlock(spinlock_t *lock)
>  {
>      arch_lock_release_barrier();
> -    preempt_enable();
>      LOCK_PROFILE_REL;
>      rel_lock(&lock->debug);
>      add_sized(&lock->tickets.head, 1);
> +    preempt_enable();
>      arch_lock_signal();
>  }

arch_lock_signal() is a barrier on Arm, hence just like for patch 1 I
wonder whether the insertion wouldn't better come after it.

Jan
* Re: [Xen-devel] [PATCH 2/2] xen/spinlocks: fix placement of preempt_[dis|en]able()

From: Jürgen Groß
Date: 2020-03-13 9:00 UTC
To: Jan Beulich
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, xen-devel

On 13.03.20 09:55, Jan Beulich wrote:
> On 13.03.2020 09:05, Juergen Gross wrote:
>> @@ -199,10 +199,10 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
>>  void _spin_unlock(spinlock_t *lock)
>>  {
>>      arch_lock_release_barrier();
>> -    preempt_enable();
>>      LOCK_PROFILE_REL;
>>      rel_lock(&lock->debug);
>>      add_sized(&lock->tickets.head, 1);
>> +    preempt_enable();
>>      arch_lock_signal();
>>  }
>
> arch_lock_signal() is a barrier on Arm, hence just like for patch 1 I
> wonder whether the insertion wouldn't better come after it.

Either way is fine for me. It should be noted that preemption is only
relevant on the local CPU. So this is about trading lock-state
visibility against preemption-disabled time, and I agree the time the
lock is visibly held should be minimized at higher priority than the
preemption-disabled time.

I'll modify my patches accordingly, adding a note in this regard to the
commit message.

Juergen
* Re: [Xen-devel] [PATCH 2/2] xen/spinlocks: fix placement of preempt_[dis|en]able()

From: Julien Grall
Date: 2020-03-13 9:44 UTC
To: Jürgen Groß, Jan Beulich
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Ian Jackson, George Dunlap,
    xen-devel

Hi,

On 13/03/2020 09:00, Jürgen Groß wrote:
> On 13.03.20 09:55, Jan Beulich wrote:
>> On 13.03.2020 09:05, Juergen Gross wrote:
>>> @@ -199,10 +199,10 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
>>>  void _spin_unlock(spinlock_t *lock)
>>>  {
>>>      arch_lock_release_barrier();
>>> -    preempt_enable();
>>>      LOCK_PROFILE_REL;
>>>      rel_lock(&lock->debug);
>>>      add_sized(&lock->tickets.head, 1);
>>> +    preempt_enable();
>>>      arch_lock_signal();
>>>  }
>>
>> arch_lock_signal() is a barrier on Arm, hence just like for patch 1 I
>> wonder whether the insertion wouldn't better come after it.

The important barrier in spin_unlock() is arch_lock_release_barrier().
The one in arch_lock_signal() is just to ensure that waking up the other
CPUs will not happen before the unlock is seen. The barrier would not
have been necessary if we didn't use 'sev'.

> Either way is fine for me. It should be noted that preemption is only
> relevant on the local CPU. So this is about trading lock-state
> visibility against preemption-disabled time, and I agree the time the
> lock is visibly held should be minimized at higher priority than the
> preemption-disabled time.

I don't think the rationale here is about "performance". The rationale
is that you don't know the implementation of arch_lock_signal(). If you
get preempted by a thread trying to acquire the same lock, then it may
not do the right thing.

Linux also re-enables preemption only after the unlock has been
completed, so it would be best to follow the same pattern.

Cheers,

-- 
Julien Grall