From: Boqun Feng <boqun.feng@gmail.com>
To: Waiman Long <longman@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 6/7] locking/rtqspinlock: Voluntarily yield CPU when need_sched()
Date: Wed, 4 Jan 2017 18:07:20 +0800	[thread overview]
Message-ID: <20170104100720.GA4985@tardis.cn.ibm.com> (raw)
In-Reply-To: <1483466430-8028-7-git-send-email-longman@redhat.com>

On Tue, Jan 03, 2017 at 01:00:29PM -0500, Waiman Long wrote:
> Ideally we want the CPU to be preemptible even when inside or waiting
> for a lock. We cannot make it preemptible when inside a lock critical
> section, but we can try to make the task voluntarily yield the CPU
> when waiting for a lock.
> 
> This patch checks the need_resched() flag and yields the CPU when the
> preemption count is 1, i.e. when the spin_lock() call is the only
> thing disabling preemption. Otherwise, it will just perform RT
> spinning with a minimum priority of 1.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  kernel/locking/qspinlock_rt.h | 68 +++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 65 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/locking/qspinlock_rt.h b/kernel/locking/qspinlock_rt.h
> index 0c4d051..18ec1f8 100644
> --- a/kernel/locking/qspinlock_rt.h
> +++ b/kernel/locking/qspinlock_rt.h
> @@ -43,6 +43,16 @@
>   * it will have to break out of the MCS wait queue just like what is done
>   * in the OSQ lock. Then it has to retry RT spinning if it has been boosted
>   * to RT priority.
> + *
> + * Another RT requirement is that the CPU needs to be preemptible even
> + * when waiting for a spinlock. If the task has already acquired the
> + * lock, we will let it run to completion to release the lock and
> + * reenable preemption. For a non-nested spinlock, a waiter will
> + * periodically check the need_resched flag to see if it should break
> + * out of the waiting loop and yield the CPU, as long as the preemption
> + * count indicates just one preempt_disable(). For a nested spinlock
> + * with the outer lock acquired, the waiter will instead boost its
> + * priority to the highest RT priority level to try to acquire the
> + * inner lock, finish up its work, release the locks and reenable
> + * preemption.
>   */
>  #include <linux/sched.h>
>  
> @@ -51,6 +61,15 @@
>  #endif
>  
>  /*
> + * Rescheduling is only needed when running in task context, the
> + * PREEMPT_NEED_RESCHED flag is set and the preemption count is one.
> + * If only the TIF_NEED_RESCHED flag is set, the waiter will instead
> + * be moved to RT spinning with a minimum priority of 1.
> + */
> +#define rt_should_resched()	(preempt_count() == \
> +				(PREEMPT_OFFSET | PREEMPT_NEED_RESCHED))
> +

Maybe I am missing something... but

On x86, PREEMPT_NEED_RESCHED is used in an inverted style, i.e. a
cleared bit indicates "need to reschedule", and preempt_count() masks
away this very bit, which makes rt_should_resched() always false. So...
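
To make this concrete, here is a minimal userspace sketch of the x86
behaviour (an illustration only: the constants are copied from
include/linux/preempt.h, the masking in preempt_count() is modelled on
arch/x86/include/asm/preempt.h, and the per-cpu machinery is replaced
by a plain variable):

#include <stdio.h>

#define PREEMPT_OFFSET		0x00000001UL	/* one preempt_disable() */
#define PREEMPT_NEED_RESCHED	0x80000000UL	/* x86: inverted, 0 == resched */

static unsigned long __preempt_count;	/* stand-in for the per-cpu count */

/* On x86, preempt_count() always masks the NEED_RESCHED bit away. */
static unsigned long preempt_count(void)
{
	return __preempt_count & ~PREEMPT_NEED_RESCHED;
}

/* The macro from the patch, expanded for clarity. */
static int rt_should_resched(void)
{
	return preempt_count() == (PREEMPT_OFFSET | PREEMPT_NEED_RESCHED);
}

int main(void)
{
	/* One preempt_disable(), resched needed (inverted bit cleared). */
	__preempt_count = PREEMPT_OFFSET;
	printf("%d\n", rt_should_resched());	/* prints 0: bit is masked */

	/* Bit set, i.e. no resched needed on x86: still never equal. */
	__preempt_count = PREEMPT_OFFSET | PREEMPT_NEED_RESCHED;
	printf("%d\n", rt_should_resched());	/* prints 0 again */
	return 0;
}

Since preempt_count() can never return a value with the 0x80000000 bit
set, the comparison can never be true on x86. Something like the
existing should_resched(PREEMPT_OFFSET) helper, whose x86 variant
compares the raw per-cpu count against the offset and thereby folds the
inverted bit into the test, looks closer to what the patch wants.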

Regards,
Boqun

Thread overview: 24+ messages
2017-01-03 18:00 [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks Waiman Long
2017-01-03 18:00 ` [RFC PATCH 1/7] locking/spinlock: Remove the unused spin_lock_bh_nested API Waiman Long
2017-01-03 18:00 ` [RFC PATCH 2/7] locking/rtqspinlock: Introduce realtime queued spinlocks Waiman Long
2017-01-03 18:00 ` [RFC PATCH 3/7] locking/rtqspinlock: Use static RT priority when in interrupt context Waiman Long
2017-01-03 18:00 ` [RFC PATCH 4/7] locking/rtqspinlock: Override spin_lock_nested with special RT variants Waiman Long
2017-01-03 18:00 ` [RFC PATCH 5/7] locking/rtqspinlock: Handle priority boosting Waiman Long
2017-01-03 18:00 ` [RFC PATCH 6/7] locking/rtqspinlock: Voluntarily yield CPU when need_sched() Waiman Long
2017-01-04 10:07   ` Boqun Feng [this message]
2017-01-04 21:57     ` Waiman Long
2017-01-05 10:16   ` Daniel Bristot de Oliveira
2017-01-03 18:00 ` [RFC PATCH 7/7] locking/rtqspinlock: Enable collection of event counts Waiman Long
2017-01-04 12:49 ` [RFC PATCH 0/7] locking/rtqspinlock: Realtime queued spinlocks Peter Zijlstra
2017-01-04 15:25   ` Waiman Long
2017-01-04 15:55     ` Steven Rostedt
2017-01-04 20:02       ` Waiman Long
2017-01-05 18:43         ` Steven Rostedt
2017-01-05  9:26     ` Daniel Bristot de Oliveira
2017-01-05  9:44     ` Peter Zijlstra
2017-01-05 15:55       ` Waiman Long
2017-01-05 16:08         ` Peter Zijlstra
2017-01-05 17:07           ` Waiman Long
2017-01-05 18:50             ` Steven Rostedt
2017-01-05 19:24               ` Waiman Long
2017-01-05 18:05           ` Daniel Bristot de Oliveira
