* [RFC PATCH] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too
@ 2022-02-03 21:43 Aaron Tomlin
  2022-02-03 22:22 ` Phil Auld
  2022-02-17 12:47 ` Frederic Weisbecker
  0 siblings, 2 replies; 21+ messages in thread
From: Aaron Tomlin @ 2022-02-03 21:43 UTC (permalink / raw)
  To: frederic; +Cc: tglx, mingo, atomlin, linux-kernel, linux-mm

Hi Frederic,

If I understand correctly, in the context of the idle task and a nohz_full
CPU, quiet_vmstat() can be called before stopping the idle tick, when
entering an idle state and on exit. In particular, for the latter case,
when the idle task is required to reschedule, the idle tick can remain
stopped and the timer expiration time can remain endless i.e., KTIME_MAX.
Now, indeed before a nohz_full CPU enters an idle state, CPU-specific
vmstat counters should be processed to ensure the respective values have
been reset and folded into the zone-specific vm_stat[]. That being said,
the fold can be missed, and this can only happen when the idle tick was
previously stopped and reprogramming of the timer is not required.
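
To illustrate what I mean by "reset and folded", here is a minimal
user-space sketch (plain C, not kernel code; NR_CPUS_MODEL,
cpu_free_pages() and fold_diffs() are made-up names for illustration
only): each CPU accumulates deltas in a small per-CPU diff counter, and
a fold resets those diffs and adds them into the shared zone counter,
which is roughly what quiet_vmstat()/refresh_cpu_vm_stats(false) is
expected to achieve here:

  #include <stdio.h>

  #define NR_CPUS_MODEL 4

  static long zone_nr_free_pages;          /* models zone vm_stat[NR_FREE_PAGES] */
  static long vm_stat_diff[NR_CPUS_MODEL]; /* models per-CPU vm_stat_diff[]      */

  /* A CPU frees pages: only its per-CPU diff is touched until the
   * CPU-specific stat threshold forces an update (threshold handling
   * is omitted in this model). */
  static void cpu_free_pages(int cpu, long nr)
  {
          vm_stat_diff[cpu] += nr;
  }

  /* Models the effect of the fold: reset the per-CPU diffs and add
   * them into the shared counter. */
  static void fold_diffs(void)
  {
          for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
                  zone_nr_free_pages += vm_stat_diff[cpu];
                  vm_stat_diff[cpu] = 0;
          }
  }

  int main(void)
  {
          cpu_free_pages(1, 32);
          printf("before fold: zone=%ld diff[1]=%ld\n",
                 zone_nr_free_pages, vm_stat_diff[1]);
          fold_diffs();
          printf("after fold:  zone=%ld diff[1]=%ld\n",
                 zone_nr_free_pages, vm_stat_diff[1]);
          return 0;
  }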

A customer provided some evidence which indicates that the idle tick was
stopped, yet CPU-specific vmstat counters still remained populated.
Thus one can only assume that quiet_vmstat() was not invoked on return to
the idle loop.

Unfortunately, I suspect this divergence might erroneously prevent a
reclaim attempt by kswapd. If the number of zone-specific free pages is
below its per-CPU drift value then zone_page_state_snapshot() is used to
compute a more accurate view of the aforementioned statistic.
Thus any task blocked on the NUMA-node-specific pfmemalloc_wait queue will
be unable to make significant progress via direct reclaim unless it is
killed after being woken up by kswapd (see throttle_direct_reclaim()).
That being said, eventually reclaim should give up if the conditions are
correct, no?
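
To make the suspected divergence concrete, here is another minimal
user-space sketch (again plain C, not kernel code; read_plain() and
read_snapshot() are made-up names): a plain read only looks at the shared
zone counter, whereas a snapshot-style read, in the spirit of
zone_page_state_snapshot(), also sums the still-unfolded per-CPU diffs,
so the two views disagree for as long as the fold is skipped:

  #include <stdio.h>

  #define NR_CPUS_MODEL 4

  static long zone_nr_free_pages = 100;    /* shared zone counter       */
  static long vm_stat_diff[NR_CPUS_MODEL]; /* unfolded per-CPU deltas   */

  /* Plain read: only looks at the shared counter, ignores the diffs. */
  static long read_plain(void)
  {
          return zone_nr_free_pages;
  }

  /* Snapshot read: also sums the per-CPU diffs, clamped at zero. */
  static long read_snapshot(void)
  {
          long x = zone_nr_free_pages;

          for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++)
                  x += vm_stat_diff[cpu];
          return x < 0 ? 0 : x;
  }

  int main(void)
  {
          vm_stat_diff[2] = 64;   /* CPU 2 freed pages but never folded */
          printf("plain=%ld snapshot=%ld\n", read_plain(), read_snapshot());
          return 0;
  }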

Consider the following theoretical scenario:

        1.      CPU Y migrated running task A to CPU X that was
                in an idle state i.e. waiting for an IRQ - not
                polling; it marked the current task on CPU X as
                needing a reschedule i.e., set TIF_NEED_RESCHED
                and sent a reschedule IPI to CPU X
                (see sched_move_task())

        2.      CPU X acknowledged the reschedule IPI from CPU Y;
                the generic idle loop code noticed the
                TIF_NEED_RESCHED flag against the idle task,
                attempted to exit the loop and called the main
                scheduler function i.e. __schedule().

                Since the idle tick was previously stopped, no
                scheduling-clock tick would occur, so no deferred
                timers would be handled.

        3.      After the transition to kernel execution, task A
                running on CPU X indirectly released a few pages
                (e.g. see __free_one_page()); CPU X's
                vm_stat_diff[NR_FREE_PAGES] was updated and the
                zone-specific vm_stat[] update was deferred as
                per the CPU-specific stat threshold

        4.      Task A invoked exit(2) and the kernel removed the
                task from the run-queue; the idle task was
                selected to execute next since there were no
                other runnable tasks assigned to the given CPU
                (see pick_next_task() and pick_next_task_idle())

        5.      On return to the idle loop, since the idle tick
                was already stopped and could remain so (see (1)
                below) e.g. no pending soft IRQs, no attempt was
                made to zero and fold CPU X's vmstat counters
                since reprogramming of the scheduling-clock tick
                was not required (see (2))

              ...
                do_idle
                {

                  __current_set_polling()
                  tick_nohz_idle_enter()

                  while (!need_resched()) {

                    local_irq_disable()

                    ...

                    /* No polling or broadcast event */
                    cpuidle_idle_call()
                    {

                      if (cpuidle_not_available(drv, dev)) {
                        tick_nohz_idle_stop_tick()
                          __tick_nohz_idle_stop_tick(this_cpu_ptr(&tick_cpu_sched))
                          {
                            int cpu = smp_processor_id()

                            if (ts->timer_expires_base)
                              expires = ts->timer_expires
                            else if (can_stop_idle_tick(cpu, ts))
          (1) ------->        expires = tick_nohz_next_event(ts, cpu)
                            else
                              return

                            ts->idle_calls++

                            if (expires > 0LL) {

                              tick_nohz_stop_tick(ts, cpu)
                              {

                                if (ts->tick_stopped && (expires == ts->next_tick)) {
          (2) ------->            if (tick == KTIME_MAX || ts->next_tick ==
                                    hrtimer_get_expires(&ts->sched_timer))
                                    return
                                }
                                ...
                              }


The idea with this patch is to ensure refresh_cpu_vm_stats(false) is called
on return to the idle loop when the idle tick was previously stopped.
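
If I read the diff below correctly, the change effectively moves the fold
above the "skip reprogram" early return in tick_nohz_stop_tick(). A minimal
user-space model of that ordering (plain C, not kernel code;
stop_tick_old()/stop_tick_new() are made-up names that only mimic the
control flow shown in the trace above and in the diff below):

  #include <stdbool.h>
  #include <stdio.h>

  static bool folded;

  /* Pre-patch ordering: the fold only happens on the !tick_stopped path,
   * so an already-stopped tick with an unchanged expiry returns early
   * without ever folding. */
  static void stop_tick_old(bool tick_stopped, bool expiry_unchanged)
  {
          if (tick_stopped && expiry_unchanged)
                  return;                 /* "skip reprogram" early return */
          if (!tick_stopped)
                  folded = true;          /* quiet_vmstat() lived here     */
  }

  /* Patched ordering: fold before the early return. */
  static void stop_tick_new(bool tick_stopped, bool expiry_unchanged)
  {
          folded = true;                  /* quiet_vmstat() moved up       */
          if (tick_stopped && expiry_unchanged)
                  return;
  }

  int main(void)
  {
          folded = false;
          stop_tick_old(true, true);
          printf("old ordering, tick already stopped: folded=%d\n", folded);

          folded = false;
          stop_tick_new(true, true);
          printf("new ordering, tick already stopped: folded=%d\n", folded);
          return 0;
  }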

Any feedback or testing would be appreciated.


Signed-off-by: Aaron Tomlin <atomlin@redhat.com>
---
 kernel/time/tick-sched.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 17a283ce2b20..61874df064b6 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -876,6 +876,9 @@ static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
 		ts->do_timer_last = 0;
 	}
 
+	/* Attempt to fold when the idle tick is stopped or not */
+	quiet_vmstat();
+
 	/* Skip reprogram of event if its not changed */
 	if (ts->tick_stopped && (expires == ts->next_tick)) {
 		/* Sanity check: make sure clockevent is actually programmed */
@@ -897,7 +900,6 @@ static void tick_nohz_stop_tick(struct tick_sched *ts, int cpu)
 	 */
 	if (!ts->tick_stopped) {
 		calc_load_nohz_start();
-		quiet_vmstat();
 
 		ts->last_tick = hrtimer_get_expires(&ts->sched_timer);
 		ts->tick_stopped = 1;
-- 
2.34.1




Thread overview: 21+ messages
2022-02-03 21:43 [RFC PATCH] tick/sched: Ensure quiet_vmstat() is called when the idle tick was stopped too Aaron Tomlin
2022-02-03 22:22 ` Phil Auld
2022-02-16 14:34   ` Aaron Tomlin
2022-02-16 21:20     ` Phil Auld
2022-02-17 12:57     ` Frederic Weisbecker
2022-02-17 14:45       ` Aaron Tomlin
2022-02-17 12:47 ` Frederic Weisbecker
2022-02-17 14:26   ` Aaron Tomlin
2022-02-17 16:32     ` Frederic Weisbecker
2022-02-18 12:54       ` Aaron Tomlin
2022-02-19 15:46         ` Aaron Tomlin
2022-02-24 12:27           ` Marcelo Tosatti
2022-02-24 12:30             ` Marcelo Tosatti
2022-02-24 13:01               ` Aaron Tomlin
2022-02-24 12:37             ` Marcelo Tosatti
2022-02-24 13:00             ` Aaron Tomlin
2022-02-24 13:14               ` Marcelo Tosatti
2022-02-24 13:28                 ` Aaron Tomlin
2022-02-24 13:40                   ` Marcelo Tosatti
2022-02-24 13:44                     ` Aaron Tomlin
2022-03-31 14:33       ` Aaron Tomlin
