* [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
@ 2021-07-07  2:39 Suren Baghdasaryan
  2021-07-07 13:39 ` Johannes Weiner
  0 siblings, 1 reply; 8+ messages in thread
From: Suren Baghdasaryan @ 2021-07-07  2:39 UTC (permalink / raw)
  To: peterz
  Cc: hannes, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, matthias.bgg, minchan,
	timmurray, yt.chang, wenju.xu, jonathan.jmchen, linux-kernel,
	linux-arm-kernel, linux-mediatek, kernel-team, surenb, SH Chen

The PSI polling mechanism tries to minimize the number of wakeups needed
to run psi_poll_work and currently relies on timer_pending() to detect
when this work is already scheduled. This leaves a window of opportunity
for psi_group_change to schedule an immediate psi_poll_work after
poll_timer_fn has run but before psi_poll_work can reschedule itself.
Below is a depiction of this entire window:

poll_timer_fn
  wake_up_interruptible(&group->poll_wait);

psi_poll_worker
  wait_event_interruptible(group->poll_wait, ...)
  psi_poll_work
    psi_schedule_poll_work
      if (timer_pending(&group->poll_timer)) return;
      ...
      mod_timer(&group->poll_timer, jiffies + delay);

Prior to 461daba06bdc we relied on the poll_scheduled atomic, which was
reset and set back inside psi_poll_work, and therefore this race window
was much smaller.
The larger window causes an increased number of wakeups, and our partners
report a visible power regression of ~10mA after applying 461daba06bdc.

Bring back the poll_scheduled atomic and make this race window even
narrower by resetting poll_scheduled only when we reach the polling
expiration time. This does not completely eliminate the possibility of
extra wakeups caused by a race with psi_group_change, however it limits
them to the worst case of one extra wakeup per tracking window (0.5s in
the worst case).

This patch also ensures correct ordering between clearing the
poll_scheduled flag and obtaining changed_states, using a memory barrier.
Correct ordering between updating changed_states and setting
poll_scheduled is ensured by the atomic_xchg operation.
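
For illustration, here is a simplified userspace-style sketch of the
ordering this relies on; it is not part of the patch, and C11 atomics
stand in for the kernel's atomic_t operations and smp_mb():

#include <stdatomic.h>

static atomic_int poll_scheduled;  /* models group->poll_scheduled */
static atomic_int state_change;    /* models the recorded state changes */

/* task change path (psi_group_change): publish state, then schedule */
static void task_change(void)
{
	atomic_store_explicit(&state_change, 1, memory_order_relaxed);
	/* atomic_exchange() is a full barrier, like atomic_xchg() in the patch */
	if (atomic_exchange(&poll_scheduled, 1) == 0) {
		/* reschedule the poll worker (mod_timer) */
	}
}

/* poll worker path (psi_poll_work): allow rescheduling, then read states */
static void poll_worker(void)
{
	atomic_store_explicit(&poll_scheduled, 0, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);  /* the smp_mb() in the patch */
	if (atomic_load_explicit(&state_change, memory_order_relaxed)) {
		/* changed_states is non-zero: handle it and reschedule */
	}
}

With both full barriers in place it is impossible for the poll worker to
miss the state change while the task change simultaneously sees
poll_scheduled still set, so at least one side always reschedules the
worker.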

By tracing the number of immediate rescheduling attempts performed by
psi_group_change and the number of these attempts blocked because the
psi monitor was already active, we can assess the effects of this change:

Before the patch:
                                           Run#1    Run#2      Run#3
Immediate reschedules attempted:           684365   1385156    1261240
Immediate reschedules blocked:             682846   1381654    1258682
Immediate reschedules (delta):             1519     3502       2558
Immediate reschedules (% of attempted):    0.22%    0.25%      0.20%

After the patch:
                                           Run#1    Run#2      Run#3
Immediate reschedules attempted:           882244   770298    426218
Immediate reschedules blocked:             881996   769796    426074
Immediate reschedules (delta):             248      502       144
Immediate reschedules (% of attempted):    0.03%    0.07%     0.03%

The number of non-blocked immediate reschedules dropped from 0.22-0.25%
to 0.03-0.07%. The drop is attributed to the decrease in the race window
size and to the fact that we allow this race only when psi monitors reach
the polling window expiration time.
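
The tracing setup itself is not part of this patch. As a purely
hypothetical sketch, counters along the following lines (or equivalent
tracepoints) could be used to collect the numbers above; the counter
names and their placement are assumptions for illustration only:

static atomic_t immediate_attempted = ATOMIC_INIT(0);
static atomic_t immediate_blocked = ATOMIC_INIT(0);

static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
				   bool force)
{
	if (delay == 1 && !force)	/* immediate reschedule request */
		atomic_inc(&immediate_attempted);

	if (atomic_xchg(&group->poll_scheduled, 1) && !force) {
		if (delay == 1)
			atomic_inc(&immediate_blocked);
		return;
	}

	/* ... original body: look up group->poll_task and mod_timer() ... */
}

Reading the counters periodically (e.g. via a debugfs entry or
trace_printk()) yields the deltas and percentages shown above.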

Fixes: 461daba06bdc ("psi: eliminate kthread_worker from psi trigger scheduling mechanism")
Reported-by: Kathleen Chang <yt.chang@mediatek.com>
Reported-by: Wenju Xu <wenju.xu@mediatek.com>
Reported-by: Jonathan Chen <jonathan.jmchen@mediatek.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Tested-by: SH Chen <show-hong.chen@mediatek.com>
---
- Replaced atomic_cmpxchg() with atomic_xchg() to ensure correct ordering
  per PeterZ
- Added memory barrier between resetting poll_scheduled and obtaining
  changed_states per PeterZ and Johannes
- Added a paragraph in the patch description about the ordering guarantees
  added in this patch

 include/linux/psi_types.h |  1 +
 kernel/sched/psi.c        | 46 +++++++++++++++++++++++++++++----------
 2 files changed, 36 insertions(+), 11 deletions(-)

diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
index 0a23300d49af..ef8bd89d065e 100644
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -158,6 +158,7 @@ struct psi_group {
 	struct timer_list poll_timer;
 	wait_queue_head_t poll_wait;
 	atomic_t poll_wakeup;
+	atomic_t poll_scheduled;
 
 	/* Protects data used by the monitor */
 	struct mutex trigger_lock;
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index 1652f2bb54b7..544676b2c1dc 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -196,6 +196,7 @@ static void group_init(struct psi_group *group)
 	INIT_DELAYED_WORK(&group->avgs_work, psi_avgs_work);
 	mutex_init(&group->avgs_lock);
 	/* Init trigger-related members */
+	atomic_set(&group->poll_scheduled, 0);
 	mutex_init(&group->trigger_lock);
 	INIT_LIST_HEAD(&group->triggers);
 	memset(group->nr_triggers, 0, sizeof(group->nr_triggers));
@@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
 	return now + group->poll_min_period;
 }
 
-/* Schedule polling if it's not already scheduled. */
-static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
+/* Schedule polling if it's not already scheduled or forced. */
+static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
+				   bool force)
 {
 	struct task_struct *task;
 
-	/*
-	 * Do not reschedule if already scheduled.
-	 * Possible race with a timer scheduled after this check but before
-	 * mod_timer below can be tolerated because group->polling_next_update
-	 * will keep updates on schedule.
-	 */
-	if (timer_pending(&group->poll_timer))
+	/* xchg should be called even when !force to set poll_scheduled */
+	if (atomic_xchg(&group->poll_scheduled, 1) && !force)
 		return;
 
 	rcu_read_lock();
@@ -582,12 +579,15 @@ static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
 	 */
 	if (likely(task))
 		mod_timer(&group->poll_timer, jiffies + delay);
+	else
+		atomic_set(&group->poll_scheduled, 0);
 
 	rcu_read_unlock();
 }
 
 static void psi_poll_work(struct psi_group *group)
 {
+	bool force_reschedule = false;
 	u32 changed_states;
 	u64 now;
 
@@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
 
 	now = sched_clock();
 
+	if (now > group->polling_until) {
+		/*
+		 * We are either about to start or might stop polling if no
+		 * state change was recorded. Resetting poll_scheduled leaves
+		 * a small window for psi_group_change to sneak in and schedule
+		 * an immediate poll_work before we get to rescheduling. One
+		 * potential extra wakeup at the end of the polling window
+		 * should be negligible and polling_next_update still keeps
+		 * updates correctly on schedule.
+		 */
+		atomic_set(&group->poll_scheduled, 0);
+		/*
+		 * Ensure that operations of clearing group->poll_scheduled and
+		 * obtaining changed_states are not reordered.
+		 */
+		smp_mb();
+	} else {
+		/* Polling window is not over, keep rescheduling */
+		force_reschedule = true;
+	}
+
+
 	collect_percpu_times(group, PSI_POLL, &changed_states);
 
 	if (changed_states & group->poll_states) {
@@ -620,7 +642,8 @@ static void psi_poll_work(struct psi_group *group)
 		group->polling_next_update = update_triggers(group, now);
 
 	psi_schedule_poll_work(group,
-		nsecs_to_jiffies(group->polling_next_update - now) + 1);
+		nsecs_to_jiffies(group->polling_next_update - now) + 1,
+		force_reschedule);
 
 out:
 	mutex_unlock(&group->trigger_lock);
@@ -744,7 +767,7 @@ static void psi_group_change(struct psi_group *group, int cpu,
 	write_seqcount_end(&groupc->seq);
 
 	if (state_mask & group->poll_states)
-		psi_schedule_poll_work(group, 1);
+		psi_schedule_poll_work(group, 1, false);
 
 	if (wake_clock && !delayed_work_pending(&group->avgs_work))
 		schedule_delayed_work(&group->avgs_work, PSI_FREQ);
@@ -1239,6 +1262,7 @@ static void psi_trigger_destroy(struct kref *ref)
 		 * can no longer be found through group->poll_task.
 		 */
 		kthread_stop(task_to_destroy);
+		atomic_set(&group->poll_scheduled, 0);
 	}
 	kfree(t);
 }
-- 
2.32.0.93.g670b81a890-goog



* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-07  2:39 [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling Suren Baghdasaryan
@ 2021-07-07 13:39 ` Johannes Weiner
  2021-07-07 22:43   ` Suren Baghdasaryan
  0 siblings, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2021-07-07 13:39 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: peterz, mingo, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, matthias.bgg, minchan,
	timmurray, yt.chang, wenju.xu, jonathan.jmchen, linux-kernel,
	linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

This looks good to me now code wise. Just a comment on the comments:

On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
>  	return now + group->poll_min_period;
>  }
>  
> -/* Schedule polling if it's not already scheduled. */
> -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> +/* Schedule polling if it's not already scheduled or forced. */
> +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> +				   bool force)
>  {
>  	struct task_struct *task;
>  
> -	/*
> -	 * Do not reschedule if already scheduled.
> -	 * Possible race with a timer scheduled after this check but before
> -	 * mod_timer below can be tolerated because group->polling_next_update
> -	 * will keep updates on schedule.
> -	 */
> -	if (timer_pending(&group->poll_timer))
> +	/* xchg should be called even when !force to set poll_scheduled */
> +	if (atomic_xchg(&group->poll_scheduled, 1) && !force)
>  		return;

This explains what the code does, but not why. It would be good to
explain the ordering with poll_work, here or there. But both sides
should mention each other.

> @@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
>  
>  	now = sched_clock();
>  
> +	if (now > group->polling_until) {
> +		/*
> +		 * We are either about to start or might stop polling if no
> +		 * state change was recorded. Resetting poll_scheduled leaves
> +		 * a small window for psi_group_change to sneak in and schedule
> +              * an immediate poll_work before we get to rescheduling. One
> +		 * potential extra wakeup at the end of the polling window
> +		 * should be negligible and polling_next_update still keeps
> +		 * updates correctly on schedule.
> +		 */
> +		atomic_set(&group->poll_scheduled, 0);
> +		/*
> +		 * Ensure that operations of clearing group->poll_scheduled and
> +		 * obtaining changed_states are not reordered.
> +		 */
> +		smp_mb();

Same here, it would be good to explain that this is ordering the
scheduler with the timer such that no events are missed. Feel free to
reuse my race diagram from the other thread - those are better at
conveying the situation than freeform text.

Thanks


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-07 13:39 ` Johannes Weiner
@ 2021-07-07 22:43   ` Suren Baghdasaryan
  2021-07-08 14:44     ` Johannes Weiner
  0 siblings, 1 reply; 8+ messages in thread
From: Suren Baghdasaryan @ 2021-07-07 22:43 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> This looks good to me now code wise. Just a comment on the comments:
>
> On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> >       return now + group->poll_min_period;
> >  }
> >
> > -/* Schedule polling if it's not already scheduled. */
> > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > +/* Schedule polling if it's not already scheduled or forced. */
> > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > +                                bool force)
> >  {
> >       struct task_struct *task;
> >
> > -     /*
> > -      * Do not reschedule if already scheduled.
> > -      * Possible race with a timer scheduled after this check but before
> > -      * mod_timer below can be tolerated because group->polling_next_update
> > -      * will keep updates on schedule.
> > -      */
> > -     if (timer_pending(&group->poll_timer))
> > +     /* xchg should be called even when !force to set poll_scheduled */
> > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> >               return;
>
> This explains what the code does, but not why. It would be good to
> explain the ordering with poll_work, here or there. But both sides
> should mention each other.

How about this:

/*
 * atomic_xchg should be called even when !force to always set poll_scheduled
 * and to provide a memory barrier (see the comment inside psi_poll_work).
 */

>
> > @@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
> >
> >       now = sched_clock();
> >
> > +     if (now > group->polling_until) {
> > +             /*
> > +              * We are either about to start or might stop polling if no
> > +              * state change was recorded. Resetting poll_scheduled leaves
> > +              * a small window for psi_group_change to sneak in and schedule
> > +              * an immediate poll_work before we get to rescheduling. One
> > +              * potential extra wakeup at the end of the polling window
> > +              * should be negligible and polling_next_update still keeps
> > +              * updates correctly on schedule.
> > +              */
> > +             atomic_set(&group->poll_scheduled, 0);
> > +             /*
> > +              * Ensure that operations of clearing group->poll_scheduled and
> > +              * obtaining changed_states are not reordered.
> > +              */
> > +             smp_mb();
>
> Same here, it would be good to explain that this is ordering the
> scheduler with the timer such that no events are missed. Feel free to
> reuse my race diagram from the other thread - those are better at
> conveying the situation than freeform text.

I tried to make your diagram a bit less abstract by using the actual
names. How about this?

/*
 * We need to enforce ordering between poll_scheduled and psi_group_cpu.times
 * reads and writes in psi_poll_work and psi_group_change functions. Otherwise
 * we might fail to reschedule the timer when monitored states change:
 *
 * psi_poll_work:
 *     poll_scheduled = 0
 *     smp_mb()
 *     changed_states = collect_percpu_times()
 *     if changed_states && xchg(poll_scheduled, 1) == 0
 *         mod_timer()
 *
 * psi_group_change:
 *     record_times()
 *     smp_mb()
 *     if xchg(poll_scheduled, 1) == 0
 *         mod_timer()
 *
 * atomic_xchg in psi_schedule_poll_work implements an implicit memory barrier
 * but we need an explicit one here.
 */

If we remove the smp_mb() barriers then the following reorderings
become possible:

Case1: reordering in psi_poll_work
psi_poll_work                               psi_group_change
  changed_states = collect_percpu_times()
                                            record_times()
                                            if xchg(poll_scheduled, 1) == 0 <-- false
                                                mod_timer()
  poll_scheduled = 0
  if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
      mod_timer()

Case2: reordering in psi_group_change
psi_poll_work                               psi_group_change
                                            if xchg(poll_scheduled, 1) == 0 <-- false
                                                mod_timer()
  poll_scheduled = 0
  changed_states = collect_percpu_times()
                                            record_times()
  if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
      mod_timer()

In both cases mod_timer() is not called and the poll update is missed.
But describing all of this in the comments would be overkill IMHO.
WDYT?

>
> Thanks


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-07 22:43   ` Suren Baghdasaryan
@ 2021-07-08 14:44     ` Johannes Weiner
  2021-07-08 15:54       ` Suren Baghdasaryan
  0 siblings, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2021-07-08 14:44 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

On Wed, Jul 07, 2021 at 03:43:48PM -0700, Suren Baghdasaryan wrote:
> On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > This looks good to me now code wise. Just a comment on the comments:
> >
> > On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> > >       return now + group->poll_min_period;
> > >  }
> > >
> > > -/* Schedule polling if it's not already scheduled. */
> > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > +/* Schedule polling if it's not already scheduled or forced. */
> > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > +                                bool force)
> > >  {
> > >       struct task_struct *task;
> > >
> > > -     /*
> > > -      * Do not reschedule if already scheduled.
> > > -      * Possible race with a timer scheduled after this check but before
> > > -      * mod_timer below can be tolerated because group->polling_next_update
> > > -      * will keep updates on schedule.
> > > -      */
> > > -     if (timer_pending(&group->poll_timer))
> > > +     /* xchg should be called even when !force to set poll_scheduled */
> > > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> > >               return;
> >
> > This explains what the code does, but not why. It would be good to
> > explain the ordering with poll_work, here or there. But both sides
> > should mention each other.
> 
> How about this:
> 
> /*
>  * atomic_xchg should be called even when !force to always set poll_scheduled
>  * and to provide a memory barrier (see the comment inside psi_poll_work).
>  */

The memory barrier part makes sense, but the first part says what the
code does and the message is unclear to me. Are you worried somebody
might turn this around in the future and only conditionalize on
poll_scheduled when !force? Essentially, I don't see the downside of
dropping that. But maybe I'm missing something.

	/*
	 * The xchg implies a full barrier that matches the one
	 * in psi_poll_work() (see corresponding comment there).
	 */

> > > @@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
> > >
> > >       now = sched_clock();
> > >
> > > +     if (now > group->polling_until) {
> > > +             /*
> > > +              * We are either about to start or might stop polling if no
> > > +              * state change was recorded. Resetting poll_scheduled leaves
> > > +              * a small window for psi_group_change to sneak in and schedule
> > > +              * an immediate poll_work before we get to rescheduling. One
> > > +              * potential extra wakeup at the end of the polling window
> > > +              * should be negligible and polling_next_update still keeps
> > > +              * updates correctly on schedule.
> > > +              */
> > > +             atomic_set(&group->poll_scheduled, 0);
> > > +             /*
> > > +              * Ensure that operations of clearing group->poll_scheduled and
> > > +              * obtaining changed_states are not reordered.
> > > +              */
> > > +             smp_mb();
> >
> > Same here, it would be good to explain that this is ordering the
> > scheduler with the timer such that no events are missed. Feel free to
> > reuse my race diagram from the other thread - those are better at
> > conveying the situation than freeform text.
> 
> I tried to make your diagram a bit less abstract by using the actual
> names. How about this?
> 
> /*
>  * We need to enforce ordering between poll_scheduled and psi_group_cpu.times
>  * reads and writes in psi_poll_work and psi_group_change functions.
> Otherwise we
>  * might fail to reschedule the timer when monitored states change:
>  *
>  * psi_poll_work:
>  *     poll_scheduled = 0
>  *     smp_mb()
>  *     changed_states = collect_percpu_times()
>  *     if changed_states && xchg(poll_scheduled, 1) == 0
>  *         mod_timer()

Those last two lines aren't relevant for the race, right? I'd leave
those out to not distract from it.

>  * psi_group_change:
>  *     record_times()
>  *     smp_mb()
>  *     if xchg(poll_scheduled, 1) == 0
>  *         mod_timer()

The reason I tend to keep these more abstract is because 1) the names
of the functions change (I had already sent out patches to rename half
the variable and function names in this diagram), while the
architecture (task change vs poll worker) likely won't, and 2) because
it's easy to drown out what the reads, writes, and thus the race
condition is with code details and function call indirections.

How about a compromise?

/*
 * A task change can race with the poll worker that is supposed to
 * report on it. To avoid missing events, ensure ordering between
 * poll_scheduled and the task state accesses, such that if the poll
 * worker misses the state update, the task change is guaranteed to
 * reschedule the poll worker:
 *
 * poll worker:
 *   atomic_set(poll_scheduled, 0)
 *   smp_mb()
 *   LOAD states
 *
 * task change:
 *   STORE states
 *   if atomic_xchg(poll_scheduled, 1) == 0:
 *     schedule poll worker
 *
 * The atomic_xchg() implies a full barrier.
 */
 smp_mb();

This gives a high-level view of what's happening but it can still be
mapped to the code by following the poll_scheduled variable.

> If we remove smp_mb barriers then there are the following possible
> reordering cases:
> 
> Case1: reordering in psi_poll_work
> psi_poll_work                    psi_group_change
>   changed_states = collect_percpu_times()
>                                               record_times()
>                                               if xchg(poll_scheduled,
> 1) == 0 <-- false
>                                                   mod_timer()
>   poll_scheduled = 0
>   if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
>       mod_timer()
> 
> Case2: reordering in psi_group_change
> psi_poll_work                    psi_group_change
>                                               if xchg(poll_scheduled,
> 1) == 0 <-- false
>                                                   mod_timer()
>   poll_scheduled = 0
>   changed_states = collect_percpu_times()
>                                                   record_times()
>   if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
>       mod_timer()
> 
> In both cases mod_timer() is not called, poll update is missed. But
> describing this all in the comments would be an overkill IMHO.
> WDYT?

Yeah, I also think that's overkill. The failure cases can be derived
from the concurrency diagram and explanation.

Thanks


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-08 14:44     ` Johannes Weiner
@ 2021-07-08 15:54       ` Suren Baghdasaryan
  2021-07-08 18:38         ` Johannes Weiner
  0 siblings, 1 reply; 8+ messages in thread
From: Suren Baghdasaryan @ 2021-07-08 15:54 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen


On Thu, Jul 8, 2021 at 7:44 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Wed, Jul 07, 2021 at 03:43:48PM -0700, Suren Baghdasaryan wrote:
> > On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > >
> > > This looks good to me now code wise. Just a comment on the comments:
> > >
> > > On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > > > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> > > >       return now + group->poll_min_period;
> > > >  }
> > > >
> > > > -/* Schedule polling if it's not already scheduled. */
> > > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > > +/* Schedule polling if it's not already scheduled or forced. */
> > > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > > +                                bool force)
> > > >  {
> > > >       struct task_struct *task;
> > > >
> > > > -     /*
> > > > -      * Do not reschedule if already scheduled.
> > > > -      * Possible race with a timer scheduled after this check but before
> > > > -      * mod_timer below can be tolerated because group->polling_next_update
> > > > -      * will keep updates on schedule.
> > > > -      */
> > > > -     if (timer_pending(&group->poll_timer))
> > > > +     /* xchg should be called even when !force to set poll_scheduled */
> > > > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> > > >               return;
> > >
> > > This explains what the code does, but not why. It would be good to
> > > explain the ordering with poll_work, here or there. But both sides
> > > should mention each other.
> >
> > How about this:
> >
> > /*
> >  * atomic_xchg should be called even when !force to always set poll_scheduled
> >  * and to provide a memory barrier (see the comment inside psi_poll_work).
> >  */
>
> The memory barrier part makes sense, but the first part says what the
> code does and the message is unclear to me. Are you worried somebody
> might turn this around in the future and only conditionalize on
> poll_scheduled when !force? Essentially, I don't see the downside of
> dropping that. But maybe I'm missing something.

Actually you are right. Originally I was worried that there might be a
case when poll_scheduled==0 and force==true and if someone flips the
conditions we will reschedule the timer but will not set
poll_scheduled back to 1. However I don't think this condition is
possible. We set force=true only when we skipped resetting
poll_scheduled to 0 and on initial wakeup we always reset
poll_scheduled. How about changing the comment to this:

 /*
  * atomic_xchg should be called even when !force to provide a
  * full memory barrier (see the comment inside psi_poll_work).
  */

>         /*
>          * The xchg implies a full barrier that matches the one
>          * in psi_poll_work() (see corresponding comment there).
>          */
>
> > > > @@ -595,6 +595,28 @@ static void psi_poll_work(struct psi_group *group)
> > > >
> > > >       now = sched_clock();
> > > >
> > > > +     if (now > group->polling_until) {
> > > > +             /*
> > > > +              * We are either about to start or might stop polling if no
> > > > +              * state change was recorded. Resetting poll_scheduled leaves
> > > > +              * a small window for psi_group_change to sneak in and schedule
> > > > +              * an immediate poll_work before we get to rescheduling. One
> > > > +              * potential extra wakeup at the end of the polling window
> > > > +              * should be negligible and polling_next_update still keeps
> > > > +              * updates correctly on schedule.
> > > > +              */
> > > > +             atomic_set(&group->poll_scheduled, 0);
> > > > +             /*
> > > > +              * Ensure that operations of clearing group->poll_scheduled and
> > > > +              * obtaining changed_states are not reordered.
> > > > +              */
> > > > +             smp_mb();
> > >
> > > Same here, it would be good to explain that this is ordering the
> > > scheduler with the timer such that no events are missed. Feel free to
> > > reuse my race diagram from the other thread - those are better at
> > > conveying the situation than freeform text.
> >
> > I tried to make your diagram a bit less abstract by using the actual
> > names. How about this?
> >
> > /*
> >  * We need to enforce ordering between poll_scheduled and psi_group_cpu.times
> >  * reads and writes in psi_poll_work and psi_group_change functions.
> > Otherwise we
> >  * might fail to reschedule the timer when monitored states change:
> >  *
> >  * psi_poll_work:
> >  *     poll_scheduled = 0
> >  *     smp_mb()
> >  *     changed_states = collect_percpu_times()
> >  *     if changed_states && xchg(poll_scheduled, 1) == 0
> >  *         mod_timer()
>
> Those last two lines aren't relevant for the race, right? I'd leave
> those out to not distract from it.

They did help me illustrate the two failure cases but yeah, someone
who can read the code can derive the rest :)

>
> >  * psi_group_change:
> >  *     record_times()
> >  *     smp_mb()
> >  *     if xchg(poll_scheduled, 1) == 0
> >  *         mod_timer()
>
> The reason I tend to keep these more abstract is because 1) the names
> of the functions change (I had already sent out patches to rename half
> the variable and function names in this diagram), while the
> architecture (task change vs poll worker) likely won't, and 2) because
> it's easy to drown out what the reads, writes, and thus the race
> condition is with code details and function call indirections.

Got it.

>
> How about a compromise?
>
> /*
>  * A task change can race with the poll worker that is supposed to
>  * report on it. To avoid missing events, ensure ordering between
>  * poll_scheduled and the task state accesses, such that if the poll
>  * worker misses the state update, the task change is guaranteed to
>  * reschedule the poll worker:
>  *
>  * poll worker:
>  *   atomic_set(poll_scheduled, 0)
>  *   smp_mb()
>  *   LOAD states
>  *
>  * task change:
>  *   STORE states
>  *   if atomic_xchg(poll_scheduled, 1) == 0:
>  *     schedule poll worker
>  *
>  * The atomic_xchg() implies a full barrier.
>  */
>  smp_mb();
>
> This gives a high-level view of what's happening but it can still be
> mapped to the code by following the poll_scheduled variable.

This looks really good to me.
If you agree on the first comment modification, should I respin the
next version?

>
> > If we remove smp_mb barriers then there are the following possible
> > reordering cases:
> >
> > Case1: reordering in psi_poll_work
> > psi_poll_work                    psi_group_change
> >   changed_states = collect_percpu_times()
> >                                               record_times()
> >                                               if xchg(poll_scheduled,
> > 1) == 0 <-- false
> >                                                   mod_timer()
> >   poll_scheduled = 0
> >   if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
> >       mod_timer()
> >
> > Case2: reordering in psi_group_change
> > psi_poll_work                    psi_group_change
> >                                               if xchg(poll_scheduled,
> > 1) == 0 <-- false
> >                                                   mod_timer()
> >   poll_scheduled = 0
> >   changed_states = collect_percpu_times()
> >                                                   record_times()
> >   if changed_states && xchg(poll_scheduled, 1) == 0 <-- changed_states is false
> >       mod_timer()
> >
> > In both cases mod_timer() is not called, poll update is missed. But
> > describing this all in the comments would be an overkill IMHO.
> > WDYT?
>
> Yeah, I also think that's overkill. The failure cases can be derived
> from the concurrency diagram and explanation.
>
> Thanks


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-08 15:54       ` Suren Baghdasaryan
@ 2021-07-08 18:38         ` Johannes Weiner
  2021-07-08 19:55           ` Suren Baghdasaryan
  0 siblings, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2021-07-08 18:38 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

On Thu, Jul 08, 2021 at 08:54:56AM -0700, Suren Baghdasaryan wrote:
> On Thu, Jul 8, 2021 at 7:44 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > On Wed, Jul 07, 2021 at 03:43:48PM -0700, Suren Baghdasaryan wrote:
> > > On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > This looks good to me now code wise. Just a comment on the comments:
> > > >
> > > > On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > > > > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> > > > >       return now + group->poll_min_period;
> > > > >  }
> > > > >
> > > > > -/* Schedule polling if it's not already scheduled. */
> > > > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > > > +/* Schedule polling if it's not already scheduled or forced. */
> > > > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > > > +                                bool force)
> > > > >  {
> > > > >       struct task_struct *task;
> > > > >
> > > > > -     /*
> > > > > -      * Do not reschedule if already scheduled.
> > > > > -      * Possible race with a timer scheduled after this check but before
> > > > > -      * mod_timer below can be tolerated because group->polling_next_update
> > > > > -      * will keep updates on schedule.
> > > > > -      */
> > > > > -     if (timer_pending(&group->poll_timer))
> > > > > +     /* xchg should be called even when !force to set poll_scheduled */
> > > > > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> > > > >               return;
> > > >
> > > > This explains what the code does, but not why. It would be good to
> > > > explain the ordering with poll_work, here or there. But both sides
> > > > should mention each other.
> > >
> > > How about this:
> > >
> > > /*
> > >  * atomic_xchg should be called even when !force to always set poll_scheduled
> > >  * and to provide a memory barrier (see the comment inside psi_poll_work).
> > >  */
> >
> > The memory barrier part makes sense, but the first part says what the
> > code does and the message is unclear to me. Are you worried somebody
> > might turn this around in the future and only conditionalize on
> > poll_scheduled when !force? Essentially, I don't see the downside of
> > dropping that. But maybe I'm missing something.
> 
> Actually you are right. Originally I was worried that there might be a
> case when poll_scheduled==0 and force==true and if someone flips the
> conditions we will reschedule the timer but will not set
> poll_scheduled back to 1.

Oh I see.

Right, flipping the condition doesn't make sense because we need
poll_scheduled to be set when we go ahead - whether we're forcing or
not. I.e. if we were in a locked section, we'd write it like this:

	if (poll_scheduled) {
		if (!force)
			return;
	} else {
		poll_scheduled = 1;
	}

> However I don't think this condition is possible. We set force=true
> only when we skipped resetting poll_scheduled to 0 and on initial
> wakeup we always reset poll_scheduled. How about changing the comment
> to this:
> 
>  /*
>   * atomic_xchg should be called even when !force to provide a
>   * full memory barrier (see the comment inside psi_poll_work).
>   */

Personally, I still find this more confusing than no comment on
!force, because when you read it it sort of raises the question what
the alternatives would be. And the alternatives appear to be
nonsensical code rather than legitimate options.

But I won't insist if you prefer to leave it in. Your call.

> > /*
> >  * A task change can race with the poll worker that is supposed to
> >  * report on it. To avoid missing events, ensure ordering between
> >  * poll_scheduled and the task state accesses, such that if the poll
> >  * worker misses the state update, the task change is guaranteed to
> >  * reschedule the poll worker:
> >  *
> >  * poll worker:
> >  *   atomic_set(poll_scheduled, 0)
> >  *   smp_mb()
> >  *   LOAD states
> >  *
> >  * task change:
> >  *   STORE states
> >  *   if atomic_xchg(poll_scheduled, 1) == 0:
> >  *     schedule poll worker
> >  *
> >  * The atomic_xchg() implies a full barrier.
> >  */
> >  smp_mb();
> >
> > This gives a high-level view of what's happening but it can still be
> > mapped to the code by following the poll_scheduled variable.
> 
> This looks really good to me.
> If you agree on the first comment modification, should I respin the
> next version?

Yeah, sounds good to me!


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-08 18:38         ` Johannes Weiner
@ 2021-07-08 19:55           ` Suren Baghdasaryan
  2021-07-08 20:37             ` Suren Baghdasaryan
  0 siblings, 1 reply; 8+ messages in thread
From: Suren Baghdasaryan @ 2021-07-08 19:55 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

On Thu, Jul 8, 2021 at 11:38 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Jul 08, 2021 at 08:54:56AM -0700, Suren Baghdasaryan wrote:
> > On Thu, Jul 8, 2021 at 7:44 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > On Wed, Jul 07, 2021 at 03:43:48PM -0700, Suren Baghdasaryan wrote:
> > > > On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > > This looks good to me now code wise. Just a comment on the comments:
> > > > >
> > > > > On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > > > > > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> > > > > >       return now + group->poll_min_period;
> > > > > >  }
> > > > > >
> > > > > > -/* Schedule polling if it's not already scheduled. */
> > > > > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > > > > +/* Schedule polling if it's not already scheduled or forced. */
> > > > > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > > > > +                                bool force)
> > > > > >  {
> > > > > >       struct task_struct *task;
> > > > > >
> > > > > > -     /*
> > > > > > -      * Do not reschedule if already scheduled.
> > > > > > -      * Possible race with a timer scheduled after this check but before
> > > > > > -      * mod_timer below can be tolerated because group->polling_next_update
> > > > > > -      * will keep updates on schedule.
> > > > > > -      */
> > > > > > -     if (timer_pending(&group->poll_timer))
> > > > > > +     /* xchg should be called even when !force to set poll_scheduled */
> > > > > > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> > > > > >               return;
> > > > >
> > > > > This explains what the code does, but not why. It would be good to
> > > > > explain the ordering with poll_work, here or there. But both sides
> > > > > should mention each other.
> > > >
> > > > How about this:
> > > >
> > > > /*
> > > >  * atomic_xchg should be called even when !force to always set poll_scheduled
> > > >  * and to provide a memory barrier (see the comment inside psi_poll_work).
> > > >  */
> > >
> > > The memory barrier part makes sense, but the first part says what the
> > > code does and the message is unclear to me. Are you worried somebody
> > > might turn this around in the future and only conditionalize on
> > > poll_scheduled when !force? Essentially, I don't see the downside of
> > > dropping that. But maybe I'm missing something.
> >
> > Actually you are right. Originally I was worried that there might be a
> > case when poll_scheduled==0 and force==true and if someone flips the
> > conditions we will reschedule the timer but will not set
> > poll_scheduled back to 1.
>
> Oh I see.
>
> Right, flipping the condition doesn't make sense because we need
> poll_scheduled to be set when we go ahead - whether we're forcing or
> not. I.e. if we were in a locked section, we'd write it like this:
>
>         if (poll_scheduled) {
>                 if (!force)
>                         return;
>         } else {
>                 poll_scheduled = 1;
>         }
>
> > However I don't think this condition is possible. We set force=true
> > only when we skipped resetting poll_scheduled to 0 and on initial
> > wakeup we always reset poll_scheduled. How about changing the comment
> > to this:
> >
> >  /*
> >   * atomic_xchg should be called even when !force to provide a
> >   * full memory barrier (see the comment inside psi_poll_work).
> >   */
>
> Personally, I still find this more confusing than no comment on
> !force, because when you read it it sort of raises the question what
> the alternatives would be. And the alternatives appear to be
> nonsensical code rather than legitimate options.
>
> But I won't insist if you prefer to leave it in. Your call.

I would like to keep it as a precaution, if you don't mind. In case
someone in the future thinks about "optimizing" this by flipping the
condition, hopefully the comment will give them a pause to think about
it :)

>
> > > /*
> > >  * A task change can race with the poll worker that is supposed to
> > >  * report on it. To avoid missing events, ensure ordering between
> > >  * poll_scheduled and the task state accesses, such that if the poll
> > >  * worker misses the state update, the task change is guaranteed to
> > >  * reschedule the poll worker:
> > >  *
> > >  * poll worker:
> > >  *   atomic_set(poll_scheduled, 0)
> > >  *   smp_mb()
> > >  *   LOAD states
> > >  *
> > >  * task change:
> > >  *   STORE states
> > >  *   if atomic_xchg(poll_scheduled, 1) == 0:
> > >  *     schedule poll worker
> > >  *
> > >  * The atomic_xchg() implies a full barrier.
> > >  */
> > >  smp_mb();
> > >
> > > This gives a high-level view of what's happening but it can still be
> > > mapped to the code by following the poll_scheduled variable.
> >
> > This looks really good to me.
> > If you agree on the first comment modification, should I respin the
> > next version?
>
> Yeah, sounds good to me!

Thanks! I'll post an update shortly.


* Re: [PATCH v3 1/1] psi: stop relying on timer_pending for poll_work rescheduling
  2021-07-08 19:55           ` Suren Baghdasaryan
@ 2021-07-08 20:37             ` Suren Baghdasaryan
  0 siblings, 0 replies; 8+ messages in thread
From: Suren Baghdasaryan @ 2021-07-08 20:37 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Benjamin Segall, Mel Gorman,
	Daniel Bristot de Oliveira, matthias.bgg, Minchan Kim,
	Tim Murray, YT Chang, Wenju Xu (许文举),
	Jonathan JMChen (陳家明),
	LKML, linux-arm-kernel, linux-mediatek, kernel-team, SH Chen

On Thu, Jul 8, 2021 at 12:55 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Thu, Jul 8, 2021 at 11:38 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Thu, Jul 08, 2021 at 08:54:56AM -0700, Suren Baghdasaryan wrote:
> > > On Thu, Jul 8, 2021 at 7:44 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > On Wed, Jul 07, 2021 at 03:43:48PM -0700, Suren Baghdasaryan wrote:
> > > > > On Wed, Jul 7, 2021 at 6:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > > > This looks good to me now code wise. Just a comment on the comments:
> > > > > >
> > > > > > On Tue, Jul 06, 2021 at 07:39:33PM -0700, Suren Baghdasaryan wrote:
> > > > > > > @@ -559,18 +560,14 @@ static u64 update_triggers(struct psi_group *group, u64 now)
> > > > > > >       return now + group->poll_min_period;
> > > > > > >  }
> > > > > > >
> > > > > > > -/* Schedule polling if it's not already scheduled. */
> > > > > > > -static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay)
> > > > > > > +/* Schedule polling if it's not already scheduled or forced. */
> > > > > > > +static void psi_schedule_poll_work(struct psi_group *group, unsigned long delay,
> > > > > > > +                                bool force)
> > > > > > >  {
> > > > > > >       struct task_struct *task;
> > > > > > >
> > > > > > > -     /*
> > > > > > > -      * Do not reschedule if already scheduled.
> > > > > > > -      * Possible race with a timer scheduled after this check but before
> > > > > > > -      * mod_timer below can be tolerated because group->polling_next_update
> > > > > > > -      * will keep updates on schedule.
> > > > > > > -      */
> > > > > > > -     if (timer_pending(&group->poll_timer))
> > > > > > > +     /* xchg should be called even when !force to set poll_scheduled */
> > > > > > > +     if (atomic_xchg(&group->poll_scheduled, 1) && !force)
> > > > > > >               return;
> > > > > >
> > > > > > This explains what the code does, but not why. It would be good to
> > > > > > explain the ordering with poll_work, here or there. But both sides
> > > > > > should mention each other.
> > > > >
> > > > > How about this:
> > > > >
> > > > > /*
> > > > >  * atomic_xchg should be called even when !force to always set poll_scheduled
> > > > >  * and to provide a memory barrier (see the comment inside psi_poll_work).
> > > > >  */
> > > >
> > > > The memory barrier part makes sense, but the first part says what the
> > > > code does and the message is unclear to me. Are you worried somebody
> > > > might turn this around in the future and only conditionalize on
> > > > poll_scheduled when !force? Essentially, I don't see the downside of
> > > > dropping that. But maybe I'm missing something.
> > >
> > > Actually you are right. Originally I was worried that there might be a
> > > case when poll_scheduled==0 and force==true and if someone flips the
> > > conditions we will reschedule the timer but will not set
> > > poll_scheduled back to 1.
> >
> > Oh I see.
> >
> > Right, flipping the condition doesn't make sense because we need
> > poll_scheduled to be set when we go ahead - whether we're forcing or
> > not. I.e. if we were in a locked section, we'd write it like this:
> >
> >         if (poll_scheduled) {
> >                 if (!force)
> >                         return;
> >         } else {
> >                 poll_scheduled = 1;
> >         }
> >
> > > However I don't think this condition is possible. We set force=true
> > > only when we skipped resetting poll_scheduled to 0 and on initial
> > > wakeup we always reset poll_scheduled. How about changing the comment
> > > to this:
> > >
> > >  /*
> > >   * atomic_xchg should be called even when !force to provide a
> > >   * full memory barrier (see the comment inside psi_poll_work).
> > >   */
> >
> > Personally, I still find this more confusing than no comment on
> > !force, because when you read it it sort of raises the question what
> > the alternatives would be. And the alternatives appear to be
> > nonsensical code rather than legitimate options.
> >
> > But I won't insist if you prefer to leave it in. Your call.
>
> I would like to keep it as a precaution, if you don't mind. In case
> someone in the future thinks about "optimizing" this by flipping the
> condition, hopefully the comment will give them a pause to think about
> it :)
>
> >
> > > > /*
> > > >  * A task change can race with the poll worker that is supposed to
> > > >  * report on it. To avoid missing events, ensure ordering between
> > > >  * poll_scheduled and the task state accesses, such that if the poll
> > > >  * worker misses the state update, the task change is guaranteed to
> > > >  * reschedule the poll worker:
> > > >  *
> > > >  * poll worker:
> > > >  *   atomic_set(poll_scheduled, 0)
> > > >  *   smp_mb()
> > > >  *   LOAD states
> > > >  *
> > > >  * task change:
> > > >  *   STORE states
> > > >  *   if atomic_xchg(poll_scheduled, 1) == 0:
> > > >  *     schedule poll worker
> > > >  *
> > > >  * The atomic_xchg() implies a full barrier.
> > > >  */
> > > >  smp_mb();
> > > >
> > > > This gives a high-level view of what's happening but it can still be
> > > > mapped to the code by following the poll_scheduled variable.
> > >
> > > This looks really good to me.
> > > If you agree on the first comment modification, should I respin the
> > > next version?
> >
> > Yeah, sounds good to me!
>
> Thanks! I'll post an update shortly.

v4 is posted at https://lore.kernel.org/patchwork/patch/1455172/

