From: Nicholas Piggin <npiggin@gmail.com>
To: Waiman Long <longman@redhat.com>
Cc: linux-arch@vger.kernel.org, Peter Zijlstra <peterz@infradead.org>,
	Will Deacon <will@kernel.org>, Boqun Feng <boqun.feng@gmail.com>,
	linux-kernel@vger.kernel.org, kvm-ppc@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Ingo Molnar <mingo@redhat.com>, Anton Blanchard <anton@ozlabs.org>,
	linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
Date: Mon, 06 Jul 2020 10:30:05 +1000	[thread overview]
Message-ID: <1593994632.syt8hwimv9.astroid@bobo.none> (raw)
In-Reply-To: <81d9981b-8a20-729c-b861-c7229e40bb65@redhat.com>

Excerpts from Waiman Long's message of July 6, 2020 5:00 am:
> On 7/3/20 3:35 AM, Nicholas Piggin wrote:
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/include/asm/paravirt.h           | 28 ++++++++++
>>  arch/powerpc/include/asm/qspinlock.h          | 55 +++++++++++++++++++
>>  arch/powerpc/include/asm/qspinlock_paravirt.h |  5 ++
>>  arch/powerpc/platforms/pseries/Kconfig        |  5 ++
>>  arch/powerpc/platforms/pseries/setup.c        |  6 +-
>>  include/asm-generic/qspinlock.h               |  2 +
>>  6 files changed, 100 insertions(+), 1 deletion(-)
>>  create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h
>>
>> diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
>> index 7a8546660a63..f2d51f929cf5 100644
>> --- a/arch/powerpc/include/asm/paravirt.h
>> +++ b/arch/powerpc/include/asm/paravirt.h
>> @@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
>>  {
>>  	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
>>  }
>> +
>> +static inline void prod_cpu(int cpu)
>> +{
>> +	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
>> +}
>> +
>> +static inline void yield_to_any(void)
>> +{
>> +	plpar_hcall_norets(H_CONFER, -1, 0);
>> +}
>>  #else
>>  static inline bool is_shared_processor(void)
>>  {
>> @@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
>>  {
>>  	___bad_yield_to_preempted(); /* This would be a bug */
>>  }
>> +
>> +extern void ___bad_yield_to_any(void);
>> +static inline void yield_to_any(void)
>> +{
>> +	___bad_yield_to_any(); /* This would be a bug */
>> +}
>> +
>> +extern void ___bad_prod_cpu(void);
>> +static inline void prod_cpu(int cpu)
>> +{
>> +	___bad_prod_cpu(); /* This would be a bug */
>> +}
>> +
>>  #endif
>>
>>  #define vcpu_is_preempted vcpu_is_preempted
>> @@ -57,5 +80,10 @@ static inline bool vcpu_is_preempted(int cpu)
>>  	return false;
>>  }
>>
>> +static inline bool pv_is_native_spin_unlock(void)
>> +{
>> +	return !is_shared_processor();
>> +}
>> +
>>  #endif /* __KERNEL__ */
>>  #endif /* __ASM_PARAVIRT_H */
>> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
>> index c49e33e24edd..0960a0de2467 100644
>> --- a/arch/powerpc/include/asm/qspinlock.h
>> +++ b/arch/powerpc/include/asm/qspinlock.h
>> @@ -3,9 +3,36 @@
>>  #define _ASM_POWERPC_QSPINLOCK_H
>>
>>  #include <asm-generic/qspinlock_types.h>
>> +#include <asm/paravirt.h>
>>
>>  #define _Q_PENDING_LOOPS	(1 << 9) /* not tuned */
>>
>> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>> +extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>> +extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>> +
>> +static __always_inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>> +{
>> +	if (!is_shared_processor())
>> +		native_queued_spin_lock_slowpath(lock, val);
>> +	else
>> +		__pv_queued_spin_lock_slowpath(lock, val);
>> +}
>
> In a previous mail, I said that:

Hey, yeah I read that right after sending the series out. Thanks for
the thorough review.

> You may need to match the use of __pv_queued_spin_lock_slowpath() with
> the corresponding __pv_queued_spin_unlock(), e.g.
>
> #define queued_spin_unlock queued_spin_unlock
> static inline void queued_spin_unlock(struct qspinlock *lock)
> {
>         if (!is_shared_processor())
>                 smp_store_release(&lock->locked, 0);
>         else
>                 __pv_queued_spin_unlock(lock);
> }
>
> Otherwise, pv_kick() will never be called.
>
> Maybe PowerPC HMT is different in that the shared cpus can still
> process instructions, though slower, so that cpu kicking like what was
> done in kvm is not really necessary. If that is the case, I think we
> should document that.

It does stop dispatch, but it will wake up by itself after all other
vCPUs have had a chance to dispatch. I will re-test with the fix in
place and see if there's any significant performance differences.

Thanks,
Nick
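The pairing requirement Waiman describes can be illustrated with a toy
model (hypothetical names, not kernel code): a waiter in the pv lock
slowpath parks itself and relies on the unlocker to kick it awake, so
an unlock path that skips the shared-processor check never issues the
kick and the parked waiter is left behind.

```c
#include <stdbool.h>

/* Toy stand-ins for the kernel's pv_wait()/pv_kick() state. */
static bool waiter_parked;  /* waiter "slept" (H_CONFER in the real code) */
static bool waiter_kicked;  /* holder "prodded" it (H_PROD in the real code) */

struct toy_lock { int locked; };

/* A contended waiter in the pv slowpath parks and waits for a kick. */
static void toy_pv_lock_slowpath(struct toy_lock *l)
{
	(void)l;
	waiter_parked = true;
}

/* Native unlock: just release the lock word (smp_store_release() in
 * the real code).  It knows nothing about parked waiters. */
static void toy_native_unlock(struct toy_lock *l)
{
	l->locked = 0;
}

/* Paravirt unlock: release the lock word, then kick any parked waiter. */
static void toy_pv_unlock(struct toy_lock *l)
{
	l->locked = 0;
	if (waiter_parked) {
		waiter_parked = false;
		waiter_kicked = true;	/* pv_kick() */
	}
}

/* Mismatched pairing: pv lock slowpath + native unlock.
 * Returns true if the waiter is still parked (i.e. lost forever). */
static bool demo_mismatched(void)
{
	struct toy_lock l = { 1 };
	waiter_parked = waiter_kicked = false;
	toy_pv_lock_slowpath(&l);
	toy_native_unlock(&l);		/* never calls pv_kick() */
	return waiter_parked;
}

/* Matched pairing: pv lock slowpath + pv unlock.
 * Returns true if the waiter was prodded awake. */
static bool demo_matched(void)
{
	struct toy_lock l = { 1 };
	waiter_parked = waiter_kicked = false;
	toy_pv_lock_slowpath(&l);
	toy_pv_unlock(&l);
	return waiter_kicked;
}
```

The mismatched pairing leaves the waiter parked with nobody to wake it,
which is exactly why queued_spin_unlock() must branch on the same
is_shared_processor() predicate as the lock slowpath.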