From: Alex Kogan <alex.kogan@oracle.com>
To: linux@armlinux.org.uk, peterz@infradead.org, mingo@redhat.com,
	will.deacon@arm.com, arnd@arndb.de, longman@redhat.com,
	linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, tglx@linutronix.de, bp@alien8.de,
	hpa@zytor.com, x86@kernel.org
Cc: steven.sistare@oracle.com, daniel.m.jordan@oracle.com,
	alex.kogan@oracle.com, dave.dice@oracle.com, rahul.x.yadav@oracle.com
Subject: [PATCH v2 2/5] locking/qspinlock: Refactor the qspinlock slow path
Date: Fri, 29 Mar 2019 11:20:03 -0400	[thread overview]
Message-ID: <20190329152006.110370-3-alex.kogan@oracle.com> (raw)
In-Reply-To: <20190329152006.110370-1-alex.kogan@oracle.com>

Move some of the code manipulating MCS nodes into separate functions.
This will allow easier integration of alternative ways to manipulate
those nodes.

Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
---
 kernel/locking/qspinlock.c | 48 +++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 41 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 5941ce3527ce..074f65b9bedc 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -297,6 +297,43 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
 #define queued_spin_lock_slowpath	native_queued_spin_lock_slowpath
 #endif
 
+static __always_inline int get_node_index(struct mcs_spinlock *node)
+{
+	return node->count++;
+}
+
+static __always_inline void release_mcs_node(struct mcs_spinlock *node)
+{
+	__this_cpu_dec(node->count);
+}
+
+/*
+ * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
+ * and by doing that unlock the MCS lock when its waiting queue is empty
+ * @lock: Pointer to queued spinlock structure
+ * @val: Current value of the lock
+ * @node: Pointer to the MCS node of the lock holder
+ *
+ * *,*,* -> 0,0,1
+ */
+static __always_inline bool set_locked_empty_mcs(struct qspinlock *lock,
+						 u32 val,
+						 struct mcs_spinlock *node)
+{
+	return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
+}
+
+/*
+ * pass_mcs_lock - pass the MCS lock to the next waiter
+ * @node: Pointer to the MCS node of the lock holder
+ * @next: Pointer to the MCS node of the first waiter in the MCS queue
+ */
+static __always_inline void pass_mcs_lock(struct mcs_spinlock *node,
+					  struct mcs_spinlock *next)
+{
+	arch_mcs_spin_unlock_contended(&next->locked, 1);
+}
+
 #endif /* _GEN_PV_LOCK_SLOWPATH */
 
 /**
@@ -406,7 +443,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		qstat_inc(qstat_lock_slowpath, true);
 pv_queue:
 	node = this_cpu_ptr(&qnodes[0].mcs);
-	idx = node->count++;
+	idx = get_node_index(node);
 	tail = encode_tail(smp_processor_id(), idx);
 
 	/*
@@ -541,7 +578,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * PENDING will make the uncontended transition fail.
 	 */
 	if ((val & _Q_TAIL_MASK) == tail) {
-		if (atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL))
+		if (set_locked_empty_mcs(lock, val, node))
 			goto release;	/* No contention */
 	}
 
@@ -558,14 +595,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (!next)
 		next = smp_cond_load_relaxed(&node->next, (VAL));
 
-	arch_mcs_spin_unlock_contended(&next->locked, 1);
+	pass_mcs_lock(node, next);
 
 	pv_kick_node(lock, next);
 
 release:
-	/*
-	 * release the node
-	 */
-	__this_cpu_dec(qnodes[0].mcs.count);
+	release_mcs_node(&qnodes[0].mcs);
 }
 EXPORT_SYMBOL(queued_spin_lock_slowpath);
-- 
2.11.0 (Apple Git-81)