From: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>,
	kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] Adjustments: lock/unlock task in context_switch
Date: Tue, 19 Dec 2017 12:23:57 -0200	[thread overview]
Message-ID: <20171219142357.cfmdyfyhay4hsxrb@smtp.gmail.com> (raw)
In-Reply-To: <20171218193002.zzuocnd2hyt34ok5@hirez.programming.kicks-ass.net>

Thanks for the review :)

Below I have just a small comment on the changed version of the patch.

> Thanks; I've slightly changed it, find below. I'll queue it for the next
> merge window.
> 
> ---
> Subject: sched: Rework / clarify prepare_lock_switch()
> From: rodrigosiqueira <rodrigosiqueiramelo@gmail.com>
> Date: Fri, 15 Dec 2017 12:06:03 -0200
> 
> The function prepare_lock_switch() has an unused parameter, and its
> name is not descriptive. To improve readability and drop the extra
> parameter, the following changes were made:
> 
> * Moved prepare_lock_switch from kernel/sched/sched.h to
>   kernel/sched/core.c, renamed it to acquire_task, and removed the
>   unused parameter.
> 
> * Split the smp_store_release() out from finish_lock_switch() into a
>   function named release_task().
> 
> * Comment adjustments.
> 
> Signed-off-by: Rodrigo Siqueira <rodrigosiqueiramelo@gmail.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Link: http://lkml.kernel.org/r/20171215140603.gxe5i2y6fg5ojfpp@smtp.gmail.com
> ---
>  kernel/sched/core.c  |   53 +++++++++++++++++++++++++++++++++++++++++++++++----
>  kernel/sched/sched.h |   41 ---------------------------------------
>  2 files changed, 49 insertions(+), 45 deletions(-)
> 
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2045,7 +2045,7 @@ try_to_wake_up(struct task_struct *p, un
>  	 * If the owning (remote) CPU is still in the middle of schedule() with
> 	 * this task as prev, wait until it's done referencing the task.
>  	 *
> -	 * Pairs with the smp_store_release() in finish_lock_switch().
> +	 * Pairs with the smp_store_release() in release_task().
>  	 *
>  	 * This ensures that tasks getting woken will be fully ordered against
>  	 * their previous state and preserve Program Order.
> @@ -2571,6 +2571,50 @@ fire_sched_out_preempt_notifiers(struct
>  
>  #endif /* CONFIG_PREEMPT_NOTIFIERS */
>  
> +static inline void acquire_task(struct task_struct *next)

In the original patch I named this function release_lock_task, because
release_task is already declared as extern in
include/linux/sched/task.h. I believe there is a function-name conflict
here; is that correct?
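
To make the conflict concrete, here is a minimal sketch of the two
declarations I expect to clash (assuming the mainline header is
unchanged):

    /* include/linux/sched/task.h already has: */
    extern void release_task(struct task_struct *p);

    /*
     * kernel/sched/core.c pulls that header in indirectly, so a static
     * definition with the same name is a conflicting redeclaration and
     * should fail to build with something like:
     *
     *   error: static declaration of 'release_task' follows
     *   non-static declaration
     */
    static inline void release_task(struct task_struct *prev) { }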

> +{
> +#ifdef CONFIG_SMP
> +	/*
> +	 * Claim the task as running; we do this before switching to it
> +	 * such that any running task will have this set.
> +	 */
> +	next->on_cpu = 1;
> +#endif
> +}
> +
> +static inline void release_task(struct task_struct *prev)
> +{
> +#ifdef CONFIG_SMP
> +	/*
> +	 * After ->on_cpu is cleared, the task can be moved to a different CPU.
> +	 * We must ensure this doesn't happen until the switch is completely
> +	 * finished.
> +	 *
> +	 * In particular, the load of prev->state in finish_task_switch() must
> +	 * happen before this.
> +	 *
> +	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
> +	 */
> +	smp_store_release(&prev->on_cpu, 0);
> +#endif
> +}
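
Just to spell out the pairing the comment mentions: the waker side in
try_to_wake_up() spins on ->on_cpu with acquire semantics, roughly (as
in the mainline code at the time):

    /*
     * Wait until the previous schedule() is done referencing the
     * task; the acquire here pairs with the release above.
     */
    smp_cond_load_acquire(&p->on_cpu, !VAL);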
> +
> +static inline void finish_lock_switch(struct rq *rq)
> +{
> +#ifdef CONFIG_DEBUG_SPINLOCK
> +	/* this is a valid case when another task releases the spinlock */
> +	rq->lock.owner = current;
> +#endif
> +	/*
> +	 * If we are tracking spinlock dependencies then we have to
> +	 * fix up the runqueue lock - which gets 'carried over' from
> +	 * prev into current:
> +	 */
> +	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
> +
> +	raw_spin_unlock_irq(&rq->lock);
> +}
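
For anyone following the locking here: rq->lock is acquired while prev
is still running and only dropped above, in next's context, which is
why lockdep needs the explicit spin_acquire() fixup. A rough sketch of
the handoff (simplified; not the exact mainline call chain):

    /*
     * __schedule(), simplified:
     *
     *   raw_spin_lock_irq(&rq->lock);    // taken while prev runs
     *   ...
     *   context_switch(rq, prev, next);  // stack switch happens here
     *     -> finish_task_switch(prev)
     *        -> finish_lock_switch(rq);  // dropped while next runs
     */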
> +
>  /**
>   * prepare_task_switch - prepare to switch tasks
>   * @rq: the runqueue preparing to switch
> @@ -2591,7 +2635,7 @@ prepare_task_switch(struct rq *rq, struc
>  	sched_info_switch(rq, prev, next);
>  	perf_event_task_sched_out(prev, next);
>  	fire_sched_out_preempt_notifiers(prev, next);
> -	prepare_lock_switch(rq, next);
> +	acquire_task(next);
>  	prepare_arch_switch(next);
>  }
>  
> @@ -2646,7 +2690,7 @@ static struct rq *finish_task_switch(str
>  	 * the scheduled task must drop that reference.
>  	 *
>  	 * We must observe prev->state before clearing prev->on_cpu (in
> -	 * finish_lock_switch), otherwise a concurrent wakeup can get prev
> +	 * release_task), otherwise a concurrent wakeup can get prev
> 	 * running on another CPU and we could race with its RUNNING -> DEAD
>  	 * transition, resulting in a double drop.
>  	 */
> @@ -2663,7 +2707,8 @@ static struct rq *finish_task_switch(str
>  	 * to use.
>  	 */
>  	smp_mb__after_unlock_lock();
> -	finish_lock_switch(rq, prev);
> +	release_task(prev);
> +	finish_lock_switch(rq);
>  	finish_arch_post_lock_switch();
>  
>  	fire_sched_in_preempt_notifiers(current);
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1328,47 +1328,6 @@ static inline int task_on_rq_migrating(s
>  # define finish_arch_post_lock_switch()	do { } while (0)
>  #endif
>  
> -static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
> -{
> -#ifdef CONFIG_SMP
> -	/*
> -	 * We can optimise this out completely for !SMP, because the
> -	 * SMP rebalancing from interrupt is the only thing that cares
> -	 * here.
> -	 */
> -	next->on_cpu = 1;
> -#endif
> -}
> -
> -static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
> -{
> -#ifdef CONFIG_SMP
> -	/*
> -	 * After ->on_cpu is cleared, the task can be moved to a different CPU.
> -	 * We must ensure this doesn't happen until the switch is completely
> -	 * finished.
> -	 *
> -	 * In particular, the load of prev->state in finish_task_switch() must
> -	 * happen before this.
> -	 *
> -	 * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
> -	 */
> -	smp_store_release(&prev->on_cpu, 0);
> -#endif
> -#ifdef CONFIG_DEBUG_SPINLOCK
> -	/* this is a valid case when another task releases the spinlock */
> -	rq->lock.owner = current;
> -#endif
> -	/*
> -	 * If we are tracking spinlock dependencies then we have to
> -	 * fix up the runqueue lock - which gets 'carried over' from
> -	 * prev into current:
> -	 */
> -	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
> -
> -	raw_spin_unlock_irq(&rq->lock);
> -}
> -
>  /*
>   * wake flags
>   */
