* [PATCH v2 0/3] vmstat bug fixes for nohz_full and isolated CPUs
@ 2023-06-02 18:57 Marcelo Tosatti
  2023-06-02 18:57 ` [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 18:57 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko

This patch series addresses the following two problems:

1. A customer provided evidence indicating that a process
   was stalled in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   It called wait_event_killable() to wait for the condition
   allow_direct_reclaim(pgdat) to become true for the current node.
   allow_direct_reclaim(pgdat) examined the number of free pages
   on the node via zone_page_state(), which just returns the value
   in zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the percpu
   vmstat counters on nohz_full cpus. Every increment/decrement of
   a vmstat counter is performed on a percpu counter first, and the
   pooled diffs are then folded into the zone's vmstat counter in a
   timely manner. However, on nohz_full cpus (48 of the 52 cpus on
   this customer's system) these pooled diffs were no longer folded
   once a cpu had no events on it and went idle indefinitely.
   I checked the percpu vmstat counters and found a total of 69
   counts not yet folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   includes the pending percpu counts (an abridged sketch of the two
   readers follows this list).
   Therefore kswapd correctly saw 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because
   69 was above the high watermark of 32.
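
For reference, the difference between the two readers, abridged from
include/linux/vmstat.h (a sketch; the clamping and CONFIG_SMP details
are trimmed, and field names may differ slightly across versions):

static inline unsigned long zone_page_state(struct zone *zone,
					enum zone_stat_item item)
{
	/* global counter only: misses whatever sits in the pcp diffs */
	return atomic_long_read(&zone->vm_stat[item]);
}

static inline unsigned long zone_page_state_snapshot(struct zone *zone,
					enum zone_stat_item item)
{
	long x = atomic_long_read(&zone->vm_stat[item]);
	int cpu;

	/* fold in the pending per-CPU diffs (the "69 counts" above) */
	for_each_online_cpu(cpu)
		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];

	return x < 0 ? 0 : x;
}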

 2. With a task that busy loops on a given CPU,
    the kworker interruption to execute vmstat_update
    is undesired and may exceed latency thresholds
    for certain applications.

The first issue is solved by using the _snapshot version of
the counters in allow_direct_reclaim.

The second issue is fixed by disabling periodic vmstat
updates for nohz_full CPUs.

Thanks to Aaron Tomlin for diagnosing issue 1 and writing
the initial patch series.

Performance details for the kworker interruption:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
 
The example above shows an additional 7us for the

        oslat -> kworker -> oslat

switches. When the CPU is virtualized and the vmstat_update
interruption happens on the host (interrupting a qemu-kvm vcpu), the
latency penalty observed in the guest is higher than 50us, violating
the acceptable latency threshold for certain applications.

v2: use cpu_is_isolated         (Michal Hocko)
    opencode schedule_on_each_cpu (Michal Hocko)

 mm/vmscan.c |    2 +-
 mm/vmstat.c |   39 +++++++++++++++++++++++++++++++++++----
 2 files changed, 36 insertions(+), 5 deletions(-)





* [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot
  2023-06-02 18:57 [PATCH v2 0/3] vmstat bug fixes for nohz_full and isolated CPUs Marcelo Tosatti
@ 2023-06-02 18:57 ` Marcelo Tosatti
  2023-06-02 18:57 ` [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
  2023-06-02 18:58 ` [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
  2 siblings, 0 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 18:57 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

A customer provided evidence indicating that a process
was stalled in direct reclaim:

 - The process was trapped in throttle_direct_reclaim().
   It called wait_event_killable() to wait for the condition
   allow_direct_reclaim(pgdat) to become true for the current node.
   allow_direct_reclaim(pgdat) examined the number of free pages
   on the node via zone_page_state(), which just returns the value
   in zone->vm_stat[NR_FREE_PAGES].

 - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
   However, the freelist on this node was not empty.

 - This inconsistency in the vmstat value was caused by the percpu
   vmstat counters on nohz_full cpus. Every increment/decrement of
   a vmstat counter is performed on a percpu counter first, and the
   pooled diffs are then folded into the zone's vmstat counter in a
   timely manner. However, on nohz_full cpus (48 of the 52 cpus on
   this customer's system) these pooled diffs were no longer folded
   once a cpu had no events on it and went idle indefinitely.
   I checked the percpu vmstat counters and found a total of 69
   counts not yet folded into the zone's vmstat counter.

 - In this situation, kswapd did not help the trapped process.
   In pgdat_balanced(), zone_watermark_ok_safe() examined the number
   of free pages on the node via zone_page_state_snapshot(), which
   includes the pending percpu counts.
   Therefore kswapd correctly saw 69 free pages.
   Since zone->_watermark = {8, 20, 32}, kswapd did not run because
   69 was above the high watermark of 32.

Change allow_direct_reclaim to use zone_page_state_snapshot, which
provides a more precise view of the vmstat counters.

allow_direct_reclaim will only be called from try_to_free_pages,
which is not a hot path.

Testing: Due to difficulties accessing the system, it has not been
possible for the reproducer to test the patch (however, it is clear
from the available data and analysis that it should fix the problem).

Reviewed-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

Index: linux-vmstat-remote/mm/vmscan.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmscan.c
+++ linux-vmstat-remote/mm/vmscan.c
@@ -6887,7 +6887,7 @@ static bool allow_direct_reclaim(pg_data
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);
 	}
 
 	/* If there are no reserves (unexpected config) then do not throttle */




* [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-02 18:57 [PATCH v2 0/3] vmstat bug fixes for nohz_full and isolated CPUs Marcelo Tosatti
  2023-06-02 18:57 ` [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
@ 2023-06-02 18:57 ` Marcelo Tosatti
  2023-06-05  7:55   ` Michal Hocko
  2023-06-02 18:58 ` [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
  2 siblings, 1 reply; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 18:57 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

The interruption caused by vmstat_update is undesirable
for certain applications:

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the

       	oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update  
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold.

Skip periodic updates for nohz full CPUs. Any callers who
need precise values should use a snapshot of the per-CPU
counters, or use the global counters with measures to 
handle errors up to thresholds (see calculate_normal_threshold).

Suggested by Michal Hocko.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---

v2: use cpu_is_isolated		(Michal Hocko)
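
For context, cpu_is_isolated() at the time of this series covers both
domain-isolated and nohz_full CPUs; a sketch, abridged from
include/linux/sched/isolation.h (the exact housekeeping types may
differ by kernel version):

static inline bool cpu_is_isolated(int cpu)
{
	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}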

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -28,6 +28,7 @@
 #include <linux/mm_inline.h>
 #include <linux/page_ext.h>
 #include <linux/page_owner.h>
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
 	for_each_online_cpu(cpu) {
 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
 
+		/*
+		 * Skip periodic updates for isolated CPUs.
+		 * Any callers who need precise values should use
+		 * a snapshot of the per-CPU counters, or use the global
+		 * counters with measures to handle errors up to
+		 * thresholds (see calculate_normal_threshold).
+		 */
+		if (cpu_is_isolated(cpu))
+			continue;
+
 		if (!delayed_work_pending(dw) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 




* [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-02 18:57 [PATCH v2 0/3] vmstat bug fixes for nohz_full and isolated CPUs Marcelo Tosatti
  2023-06-02 18:57 ` [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
  2023-06-02 18:57 ` [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
@ 2023-06-02 18:58 ` Marcelo Tosatti
  2023-06-05  7:59   ` Michal Hocko
  2 siblings, 1 reply; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-02 18:58 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel,
	linux-mm, Vlastimil Babka, Michal Hocko, Marcelo Tosatti

The interruption caused by queueing work on nohz_full CPUs
is undesirable for certain applications.

Fix by not refreshing per-CPU stats of nohz_full CPUs.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

---
v2: opencode schedule_on_each_cpu (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
 		   void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
-	int err;
 	int i;
+	int cpu;
+	struct work_struct __percpu *works;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
-	if (err)
-		return err;
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		struct work_struct *work;
+
+		if (cpu_is_isolated(cpu))
+			continue;
+		work = per_cpu_ptr(works, cpu);
+		INIT_WORK(work, refresh_vm_stats);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu) {
+		if (cpu_is_isolated(cpu))
+			continue;
+		flush_work(per_cpu_ptr(works, cpu));
+	}
+	cpus_read_unlock();
+	free_percpu(works);
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 		/*
 		 * Skip checking stats known to go negative occasionally.




* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-02 18:57 ` [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
@ 2023-06-05  7:55   ` Michal Hocko
  2023-06-05 14:53     ` Marcelo Tosatti
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05  7:55 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> The interruption caused by vmstat_update is undesirable 
> for certain applications:
> 
> oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> 
> The example above shows an additional 7us for the
> 
>        	oslat -> kworker -> oslat
> 
> switches. In the case of a virtualized CPU, and the vmstat_update  
> interruption in the host (of a qemu-kvm vcpu), the latency penalty
> observed in the guest is higher than 50us, violating the acceptable
> latency threshold.

I personally find the above problem description insufficient. I have
asked several times and only got piece-by-piece information each time.
Maybe there is a reason to be secretive but it would be great to get at
least some basic expectations described and what they are based on.

E.g. workloads are running on isolated cpus with nohz full mode to
shield off any kernel interruption. Yet there are operations that update
counters (like mlock, but not mlock alone) that update per cpu counters
that will eventually get flushed and that will cause some interference.
Now the host/guest transition and interference: how does that happen
when the guest is running on an isolated and dedicated cpu?

> Skip periodic updates for nohz full CPUs. Any callers who
> need precise values should use a snapshot of the per-CPU
> counters, or use the global counters with measures to 
> handle errors up to thresholds (see calculate_normal_threshold).

I would rephrase this paragraph. 
In-kernel users of vmstat counters either require the precise value,
in which case they use the zone_page_state_snapshot interface, or they
can live with an imprecision, as the regular flushing can happen at an
arbitrary time and the cumulative error can grow (see
calculate_normal_threshold, sketched below).

From that POV the regular flushing can be postponed for CPUs that have
been isolated from the kernel interference without critical
infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
for all isolated CPUs to avoid interference with the isolated workload.
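
For reference, an abridged sketch of calculate_normal_threshold() from
mm/vmstat.c (comments trimmed): the drift each CPU may accumulate
before folding scales with the number of CPUs and the zone size:

static int calculate_normal_threshold(struct zone *zone)
{
	int threshold;
	int mem;	/* memory in 128 MB units */

	mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT);
	threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));

	/* maximum threshold is 125 */
	return min(125, threshold);
}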

> Suggested by Michal Hocko.
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> 
> ---
> 
> v2: use cpu_is_isolated		(Michal Hocko)
> 
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -28,6 +28,7 @@
>  #include <linux/mm_inline.h>
>  #include <linux/page_ext.h>
>  #include <linux/page_owner.h>
> +#include <linux/sched/isolation.h>
>  
>  #include "internal.h"
>  
> @@ -2022,6 +2023,16 @@ static void vmstat_shepherd(struct work_
>  	for_each_online_cpu(cpu) {
>  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
>  
> +		/*
> +		 * Skip periodic updates for isolated CPUs.
> +		 * Any callers who need precise values should use
> +		 * a snapshot of the per-CPU counters, or use the global
> +		 * counters with measures to handle errors up to
> +		 * thresholds (see calculate_normal_threshold).
> +		 */
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +
>  		if (!delayed_work_pending(dw) && need_update(cpu))
>  			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>  
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-02 18:58 ` [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
@ 2023-06-05  7:59   ` Michal Hocko
  2023-06-05 15:43     ` Marcelo Tosatti
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05  7:59 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> The interruption caused by queueing work on nohz_full CPUs 
> is undesirable for certain applications.

This is not a proper changelog. I am not going to write a changelog for
you this time. Please explain why this is really needed and why this
approach is desired. E.g. why don't you prevent userspace from
refreshing stats if interference is not desirable. Also would it make
some sense to reduce flushing to cpumask of the calling process?
(certainly a daring thought but have you even considered it?)

> Fix by not refreshing per-CPU stats of nohz_full CPUs. 
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> ---
> v2: opencode schedule_on_each_cpu (Michal Hocko)
> 
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
>  		   void *buffer, size_t *lenp, loff_t *ppos)
>  {
>  	long val;
> -	int err;
>  	int i;
> +	int cpu;
> +	struct work_struct __percpu *works;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
>  
>  	/*
>  	 * The regular update, every sysctl_stat_interval, may come later
> @@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
>  	 * transiently negative values, report an error here if any of
>  	 * the stats is negative, so we know to go looking for imbalance.
>  	 */
> -	err = schedule_on_each_cpu(refresh_vm_stats);
> -	if (err)
> -		return err;
> +	cpus_read_lock();
> +	for_each_online_cpu(cpu) {
> +		struct work_struct *work;
> +
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +		work = per_cpu_ptr(works, cpu);
> +		INIT_WORK(work, refresh_vm_stats);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu) {
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +		flush_work(per_cpu_ptr(works, cpu));
> +	}
> +	cpus_read_unlock();
> +	free_percpu(works);
>  	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
>  		/*
>  		 * Skip checking stats known to go negative occasionally.
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-05  7:55   ` Michal Hocko
@ 2023-06-05 14:53     ` Marcelo Tosatti
  2023-06-05 15:55       ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 14:53 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 09:55:57AM +0200, Michal Hocko wrote:
> On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> > The interruption caused by vmstat_update is undesirable 
> > for certain applications:
> > 
> > oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> > oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> > oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> > kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> > 
> > The example above shows an additional 7us for the
> > 
> >        	oslat -> kworker -> oslat
> > 
> > switches. In the case of a virtualized CPU, and the vmstat_update  
> > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > observed in the guest is higher than 50us, violating the acceptable
> > latency threshold.
> 
> I personally find the above problem description insufficient. I have
> asked several times and only got piece by piece information each time.
> Maybe there is a reason to be secretive but it would be great to get at
> least some basic expectations described  and what they are based on.

There is no reason to be secretive. 

> 
> E.g. workloads are running on isolated cpus with nohz full mode to
> shield off any kernel interruption. Yet there are operations that update
> counters (like mlock, but not mlock alone) that update per cpu counters
> that will eventually get flushed and that will cause some interference.
> Now the host/guest transition and interference. How that happens when
> the guest is running on an isolated and dedicated cpu?

The updated changelog follows. Does it contain the requested
information?

----

Performance details for the kworker interruption:

Consider workloads running on isolated cpus with nohz full mode, to
shield off any kernel interruption: for example, a VM running a
time-sensitive application with a 50us maximum acceptable interruption
(use case: soft PLC).

oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...

The example above shows an additional 7us for the

        oslat -> kworker -> oslat

switches. In the case of a virtualized CPU, and the vmstat_update
interruption in the host (of a qemu-kvm vcpu), the latency penalty
observed in the guest is higher than 50us, violating the acceptable
latency threshold.

The isolated vCPU can perform operations that modify per-CPU page counters,
for example to complete I/O operations:

      CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
      CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
 => 0xffffffffc042b083
 => mod_zone_page_state
 => __folio_end_writeback
 => folio_end_writeback
 => iomap_finish_ioend
 => blk_mq_end_request_batch
 => nvme_irq
 => __handle_irq_event_percpu
 => handle_irq_event
 => handle_edge_irq
 => __common_interrupt
 => common_interrupt
 => asm_common_interrupt
 => vmx_do_interrupt_nmi_irqoff
 => vmx_handle_exit_irqoff
 => vcpu_enter_guest
 => vcpu_run
 => kvm_arch_vcpu_ioctl_run
 => kvm_vcpu_ioctl
 => __x64_sys_ioctl
 => do_syscall_64
 => entry_SYSCALL_64_after_hwframe

> > Skip periodic updates for nohz full CPUs. Any callers who
> > need precise values should use a snapshot of the per-CPU
> > counters, or use the global counters with measures to 
> > handle errors up to thresholds (see calculate_normal_threshold).
> 
> I would rephrase this paragraph. 
> In kernel users of vmstat counters either require the precise value and
> they are using zone_page_state_snapshot interface or they can live with
> an imprecision as the regular flushing can happen at arbitrary time and
> cumulative error can grow (see calculate_normal_threshold).

> From that POV the regular flushing can be postponed for CPUs that have
> been isolated from the kernel interference without critical
> infrastructure ever noticing. Skip regular flushing from vmstat_shepherd
> for all isolated CPUs to avoid interference with the isolated workload.
> 
> > Suggested by Michal Hocko.
> > 
> > Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
> 
> Acked-by: Michal Hocko <mhocko@suse.com>

OK, updated comment, thanks.



* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05  7:59   ` Michal Hocko
@ 2023-06-05 15:43     ` Marcelo Tosatti
  2023-06-05 16:10       ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 15:43 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > The interruption caused by queueing work on nohz_full CPUs 
> > is undesirable for certain applications.
> 
> This is not a proper changelog. I am not going to write a changelog for
> you this time. Please explain why this is really needed and why this
> approach is desired. 
> E.g. why don't you prevent userspace from
> refreshing stats if interference is not desirable.

Michal,

Can you please check if the following looks better as
a changelog? Thanks.

---

The schedule_work_on API uses the workqueue mechanism to
queue a work item to run on a given CPU. A kernel thread
(kworker), which runs on the target CPU, executes those
work items.

Therefore, when using the schedule_work_on API, the kworker
kernel thread must be scheduled in on the target CPU for the
work function to be executed.
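
(A minimal, purely illustrative sketch of that pattern; the names
my_work, my_work_fn and run_work_on are hypothetical:)

#include <linux/printk.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

static struct work_struct my_work;

static void my_work_fn(struct work_struct *work)
{
	/* runs inside a kworker thread bound to the chosen CPU */
	pr_info("work ran on CPU %d\n", smp_processor_id());
}

static void run_work_on(int cpu)
{
	INIT_WORK(&my_work, my_work_fn);
	schedule_work_on(cpu, &my_work);	/* queue on that CPU */
	flush_work(&my_work);	/* wait: the kworker must get to run */
}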

Time-sensitive applications such as SoftPLCs
(https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf)
have their response times affected by such interruptions.

The /proc/sys/vm/stat_refresh file was originally introduced by

commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
Author: Hugh Dickins <hughd@google.com>
Date:   Thu May 19 17:12:50 2016 -0700

    mm: /proc/sys/vm/stat_refresh to force vmstat update

    Provide /proc/sys/vm/stat_refresh to force an immediate update of
    per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
    before checking counts when testing.  Originally added to work around a
    bug which left counts stranded indefinitely on a cpu going idle (an
    inaccuracy magnified when small below-batch numbers represent "huge"
    amounts of memory), but I believe that bug is now fixed: nonetheless,
    this is still a useful knob.

Other than the potential interruption to a time-sensitive application,
if a task runs with SCHED_FIFO or SCHED_RR priority on the isolated
CPU, system hangs can occur:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688

To avoid the problems above, do not schedule the work to synchronize
per-CPU mm counters on isolated CPUs. Given the possibility of
breaking existing userspace applications, avoid changing the
behaviour of access to /proc/sys/vm/stat_refresh, such as
returning errors to userspace.

---

> Also would it make some sense to reduce flushing to cpumask 
> of the calling process? (certainly a daring thought but have
> you even considered it?)

I fail to see the point here.




* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-05 14:53     ` Marcelo Tosatti
@ 2023-06-05 15:55       ` Michal Hocko
  2023-06-05 17:35         ` Marcelo Tosatti
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05 15:55 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon 05-06-23 11:53:56, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 09:55:57AM +0200, Michal Hocko wrote:
> > On Fri 02-06-23 15:57:59, Marcelo Tosatti wrote:
> > > The interruption caused by vmstat_update is undesirable 
> > > for certain applications:
> > > 
> > > oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> > > oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> > > oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> > > kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> > > 
> > > The example above shows an additional 7us for the
> > > 
> > >        	oslat -> kworker -> oslat
> > > 
> > > switches. In the case of a virtualized CPU, and the vmstat_update  
> > > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > > observed in the guest is higher than 50us, violating the acceptable
> > > latency threshold.
> > 
> > I personally find the above problem description insufficient. I have
> > asked several times and only got piece by piece information each time.
> > Maybe there is a reason to be secretive but it would be great to get at
> > least some basic expectations described  and what they are based on.
> 
> There is no reason to be secretive. 
> 
> > 
> > E.g. workloads are running on isolated cpus with nohz full mode to
> > shield off any kernel interruption. Yet there are operations that update
> > counters (like mlock, but not mlock alone) that update per cpu counters
> > that will eventually get flushed and that will cause some interference.
> > Now the host/guest transition and interference. How that happens when
> > the guest is running on an isolated and dedicated cpu?
> 
> The updated changelog follows. Does it contain the requested
> information?
> 
> ----
> 
> Performance details for the kworker interruption:
> 
> With workloads that are running on isolated cpus with nohz full mode to
> shield off any kernel interruption. For example, a VM running a
> time sensitive application with a 50us maximum acceptable interruption
> (use case: soft PLC).
> 
> oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> 
> The example above shows an additional 7us for the
> 
>         oslat -> kworker -> oslat
> 
> switches. In the case of a virtualized CPU, and the vmstat_update
> interruption in the host (of a qemu-kvm vcpu), the latency penalty
> observed in the guest is higher than 50us, violating the acceptable
> latency threshold.
> 
> The isolated vCPU can perform operations that modify per-CPU page counters,
> for example to complete I/O operations:
> 
>       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
>       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
>  => 0xffffffffc042b083
>  => mod_zone_page_state
>  => __folio_end_writeback
>  => folio_end_writeback
>  => iomap_finish_ioend
>  => blk_mq_end_request_batch
>  => nvme_irq
>  => __handle_irq_event_percpu
>  => handle_irq_event
>  => handle_edge_irq
>  => __common_interrupt
>  => common_interrupt
>  => asm_common_interrupt
>  => vmx_do_interrupt_nmi_irqoff
>  => vmx_handle_exit_irqoff
>  => vcpu_enter_guest
>  => vcpu_run
>  => kvm_arch_vcpu_ioctl_run
>  => kvm_vcpu_ioctl
>  => __x64_sys_ioctl
>  => do_syscall_64
>  => entry_SYSCALL_64_after_hwframe

OK, this is really useful. It is just not really clear whether the IO
triggered here is from the guest or is a host activity.

overall this is much better!
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05 15:43     ` Marcelo Tosatti
@ 2023-06-05 16:10       ` Michal Hocko
  2023-06-05 18:14         ` Marcelo Tosatti
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05 16:10 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > The interruption caused by queueing work on nohz_full CPUs 
> > > is undesirable for certain applications.
> > 
> > This is not a proper changelog. I am not going to write a changelog for
> > you this time. Please explain why this is really needed and why this
> > approach is desired. 
> > E.g. why don't you prevent userspace from
> > refreshing stats if interference is not desirable.
> 
> Michal,
> 
> Can you please check if the following looks better, as
> a changelog? thanks
> 
> ---
> 
> schedule_work_on API uses the workqueue mechanism to
> queue a work item on a queue. A kernel thread, which
> runs on the target CPU, executes those work items.
> 
> Therefore, when using the schedule_work_on API,
> it is necessary for the kworker kernel thread to
> be scheduled in, for the work function to be executed.
> 
> Time sensitive applications such as SoftPLCs
> (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> have their response times affected by such interruptions.
> 
> The /proc/sys/vm/stat_refresh file was originally introduced by
> 
> commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> Author: Hugh Dickins <hughd@google.com>
> Date:   Thu May 19 17:12:50 2016 -0700
> 
>     mm: /proc/sys/vm/stat_refresh to force vmstat update
> 
>     Provide /proc/sys/vm/stat_refresh to force an immediate update of
>     per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
>     before checking counts when testing.  Originally added to work around a
>     bug which left counts stranded indefinitely on a cpu going idle (an
>     inaccuracy magnified when small below-batch numbers represent "huge"
>     amounts of memory), but I believe that bug is now fixed: nonetheless,
>     this is still a useful knob.

No need to quote the full changelog.
 
> Other than the potential interruption to a time sensitive application,
> if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> system hangs can occur:
> 
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688

Confused... This report says that accessing the file (i.e. to force the
refresh) can get stalled because high priority tasks will not allow
kworkers to run. No?

There is simply no way around that unless those kworkers inherit the
priority. It certainly is unfortunate that the call is not killable
but being stuck behind real time busy looping processes is nothing
really uncommon. One has to be really careful when using real time
priorities.

> To avoid the problems above, do not schedule the work to synchronize
> per-CPU mm counters on isolated CPUs. Given the possibility for
> breaking existing userspace applications, avoid changing
> behaviour of access to /proc/sys/vm/stat_refresh, such as
> returning errors to userspace.

You are changing the behavior. The preexisting behavior was to flush
everything. This is clearly changing that.

> ---
> 
> > Also would it make some sense to reduce flushing to cpumask 
> > of the calling process? (certainly a daring thought but have
> > you even considered it?)
> 
> I fail to see the point here.

I mean that, if you already want to change the semantics of the call,
then it would likely be safer to change it in a more robust way and only
flush pcp vmstat caches that are in the process' effective cpu mask. This
way one can control which pcp caches to flush (e.g. those that are not
on isolated CPUs or, conversely, those that are isolated but you can
afford to flush at the specific moment). See?
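
(A sketch of that idea, purely illustrative and not what this patch
does: restrict the flushing loop in vmstat_refresh to the caller's
affinity mask via current->cpus_ptr:)

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		struct work_struct *work;

		/* hypothetical: only flush CPUs the caller may run on */
		if (!cpumask_test_cpu(cpu, current->cpus_ptr))
			continue;
		work = per_cpu_ptr(works, cpu);
		INIT_WORK(work, refresh_vm_stats);
		schedule_work_on(cpu, work);
	}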

Now I am not saying this is the right way to go because there is still a
slim chance this will break userspace expectations. Therefore I have
asked why you simply do not stop any random application accessing
stat_refresh in the first place. These highly specialized setups with
isolated resources shouldn't run arbitrary crap, should they? What if I
just start allocating memory and get the system close to OOM? I am
pretty sure a small latency induced by the vmstat refreshes is the least
problem you will have.

So please step back and try to think whether this is actually fixing
anything real before trying to change a user visible interface.

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-05 15:55       ` Michal Hocko
@ 2023-06-05 17:35         ` Marcelo Tosatti
  2023-06-05 18:57           ` Michal Hocko
  0 siblings, 1 reply; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 17:35 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 05:55:49PM +0200, Michal Hocko wrote:
> > The example above shows an additional 7us for the
> > 
> >         oslat -> kworker -> oslat
> > 
> > switches. In the case of a virtualized CPU, and the vmstat_update
> > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > observed in the guest is higher than 50us, violating the acceptable
> > latency threshold.
> > 
> > The isolated vCPU can perform operations that modify per-CPU page counters,
> > for example to complete I/O operations:
> > 
> >       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
> >       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
> >  => 0xffffffffc042b083
> >  => mod_zone_page_state
> >  => __folio_end_writeback
> >  => folio_end_writeback
> >  => iomap_finish_ioend
> >  => blk_mq_end_request_batch
> >  => nvme_irq
> >  => __handle_irq_event_percpu
> >  => handle_irq_event
> >  => handle_edge_irq
> >  => __common_interrupt
> >  => common_interrupt
> >  => asm_common_interrupt
> >  => vmx_do_interrupt_nmi_irqoff
> >  => vmx_handle_exit_irqoff
> >  => vcpu_enter_guest
> >  => vcpu_run
> >  => kvm_arch_vcpu_ioctl_run
> >  => kvm_vcpu_ioctl
> >  => __x64_sys_ioctl
> >  => do_syscall_64
> >  => entry_SYSCALL_64_after_hwframe
> 
> OK, this is really useful. It is just not really clear whether the IO
> triggered here is from the guest or it a host activity.

Guest initiated I/O, since the host CPU is isolated.



* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05 16:10       ` Michal Hocko
@ 2023-06-05 18:14         ` Marcelo Tosatti
  2023-06-05 18:25           ` Marcelo Tosatti
  2023-06-05 19:12           ` Michal Hocko
  0 siblings, 2 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 18:14 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > The interruption caused by queueing work on nohz_full CPUs 
> > > > is undesirable for certain applications.
> > > 
> > > This is not a proper changelog. I am not going to write a changelog for
> > > you this time. Please explain why this is really needed and why this
> > > approach is desired. 
> > > E.g. why don't you prevent userspace from
> > > refreshing stats if interference is not desirable.
> > 
> > Michal,
> > 
> > Can you please check if the following looks better, as
> > a changelog? thanks
> > 
> > ---
> > 
> > schedule_work_on API uses the workqueue mechanism to
> > queue a work item on a queue. A kernel thread, which
> > runs on the target CPU, executes those work items.
> > 
> > Therefore, when using the schedule_work_on API,
> > it is necessary for the kworker kernel thread to
> > be scheduled in, for the work function to be executed.
> > 
> > Time sensitive applications such as SoftPLCs
> > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > have their response times affected by such interruptions.
> > 
> > The /proc/sys/vm/stat_refresh file was originally introduced by
> > 
> > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > Author: Hugh Dickins <hughd@google.com>
> > Date:   Thu May 19 17:12:50 2016 -0700
> > 
> >     mm: /proc/sys/vm/stat_refresh to force vmstat update
> > 
> >     Provide /proc/sys/vm/stat_refresh to force an immediate update of
> >     per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> >     before checking counts when testing.  Originally added to work around a
> >     bug which left counts stranded indefinitely on a cpu going idle (an
> >     inaccuracy magnified when small below-batch numbers represent "huge"
> >     amounts of memory), but I believe that bug is now fixed: nonetheless,
> >     this is still a useful knob.
> 
> No need to quote the full changelog.
>  
> > Other than the potential interruption to a time sensitive application,
> > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > system hangs can occur:
> > 
> > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> 
> Confused... This report says that accessing the file (i.e. to force the
> refresh) can get stalled because high priority tasks will not allow
> kworkers to run. No?

Yes.

> There is simply no way around that unless those kworkers inherit the
> priority.

stalld is an attempt to work around the situation by allowing the
lower priority processes to execute for a small amount of time
(for example 20us every 1s). https://github.com/bristot/stalld:

"The stalld program (which stands for 'stall daemon') is a mechanism to
prevent the starvation of operating system threads in a Linux system.
The premise is to start up on a housekeeping cpu (one that is not used
for real-application purposes) and to periodically monitor the state of
each thread in the system, looking for a thread that has been on a run
queue (i.e. ready to run) for a specified length of time without being
run. This condition is usually hit when the thread is on the same cpu
as a high-priority cpu-intensive task and therefore is being given no
opportunity to run.

When a thread is judged to be starving, stalld changes that thread to
use the SCHED_DEADLINE policy and gives the thread a small slice of time
for that cpu (specified on the command line). The thread then runs and
when that timeslice is used, the thread is then returned to its original
scheduling policy and stalld then continues to monitor thread states."
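
(Illustration only: the SCHED_DEADLINE boost stalld applies could look
roughly like the following from userspace; boost_deadline is a
hypothetical name, and the raw syscall is used because glibc does not
wrap sched_setattr:)

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

struct sched_attr {			/* layout per sched_setattr(2) */
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;		/* ns */
	uint64_t sched_deadline;	/* ns */
	uint64_t sched_period;		/* ns */
};

#define SCHED_DEADLINE 6

/* give tid runtime_us of CPU time every period_us, stalld-style */
static int boost_deadline(pid_t tid, uint64_t runtime_us, uint64_t period_us)
{
	struct sched_attr attr = {
		.size = sizeof(attr),
		.sched_policy = SCHED_DEADLINE,
		.sched_runtime = runtime_us * 1000,
		.sched_deadline = period_us * 1000,
		.sched_period = period_us * 1000,
	};

	return syscall(SYS_sched_setattr, tid, &attr, 0);
}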

Unfortunately, if you allow that, then the latency-sensitive
application might be interrupted for longer than acceptable
(which is the case for a certain class of applications, for example
a SoftPLC inside a VM).

> It certainly is unfortunate that the call is not killable
> but being stuck behind real time busy looping processes is nothing
> really uncommon. One has to be really careful when using real time
> priorities.

Yes.

> > To avoid the problems above, do not schedule the work to synchronize
> > per-CPU mm counters on isolated CPUs. Given the possibility for
> > breaking existing userspace applications, avoid changing
> > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > returning errors to userspace.
> 
> You are changing the behavior. The preexisting behavior was to flush
> everything. This is clearly changing that.

I meant that this patch does not cause read/write to the procfs file 
to return errors.

I believe returning errors has a higher potential for regressions
than not flushing per-CPU VM counters of isolated CPUs (which are
bounded).

> > ---
> > 
> > > Also would it make some sense to reduce flushing to cpumask 
> > > of the calling process? (certainly a daring thought but have
> > > you even considered it?)
> > 
> > > I fail to see the point here.
> 
> I mean that, if you already want to change the semantic of the call then
> it would likely be safer to change it in a more robust way and only
> flush pcp vmstat caches that are in the process effective cpu mask. 

That would change behaviour for systems without isolated CPUs.

> This
> way one can control which pcp caches to flush (e.g. those that are not
> on isolated CPUs or contrary those that are isolated but you can afford 
> to flush at the specific moment). See?

Yes, but I am not sure what to think of this idea.

> Now I am not saying this is the right way to go because there is still a
> slim chance this will break userspace expectations. Therefore I have
> asked why you simply do not stop any random application accessing
> stat_refresh in the first place.

I think this is what should be done, but not on the current patchset.

https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html

Regarding housekeeping flags, it is usually the case that initialization
might require code execution on interference-blocked CPUs (for example
MTRR initialization, resctrlfs initialization, MSR writes, ...). Therefore
tagging the CPUs after system initialization is necessary, which
is not possible with the current housekeeping flags infrastructure.

>  These highly specialized setups with
> isolated resources shouldn't run arbitrary crap, should they?

The problem is that it's hard to control what people run on a system.

> What if I just start allocating memory and get the system close to OOM?

Sure, or "poweroff".

> I am
> pretty sure a small latency induced by the vmstat refreshes is the least
> problem you will have.

If the OOM codepath neither sends IPIs nor queues work on isolated
CPUs, then OOM should be fine.

> So please step back and try to think whether this is actually fixing
> anything real before trying to change a user visible interface.

It is fixing either a latency violation or a hang on a system where some
user or piece of software happens to run "sysctl -a" (or reads
vmstat_refresh).

If one is using CPU isolation, the latency violation has a higher
priority than vmstat_refresh returning precise counters.



* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05 18:14         ` Marcelo Tosatti
@ 2023-06-05 18:25           ` Marcelo Tosatti
  2023-06-05 19:12           ` Michal Hocko
  1 sibling, 0 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 18:25 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 03:14:25PM -0300, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > The interruption caused by queueing work on nohz_full CPUs 
> > > > > is undesirable for certain applications.
> > > > 
> > > > This is not a proper changelog. I am not going to write a changelog for
> > > > you this time. Please explain why this is really needed and why this
> > > > approach is desired. 
> > > > E.g. why don't you prevent userspace from
> > > > refreshing stats if interference is not desirable.
> > > 
> > > Michal,
> > > 
> > > Can you please check if the following looks better, as
> > > a changelog? thanks
> > > 
> > > ---
> > > 
> > > schedule_work_on API uses the workqueue mechanism to
> > > queue a work item on a queue. A kernel thread, which
> > > runs on the target CPU, executes those work items.
> > > 
> > > Therefore, when using the schedule_work_on API,
> > > it is necessary for the kworker kernel thread to
> > > be scheduled in, for the work function to be executed.
> > > 
> > > Time sensitive applications such as SoftPLCs
> > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf),
> > > have their response times affected by such interruptions.
> > > 
> > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > > 
> > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > Author: Hugh Dickins <hughd@google.com>
> > > Date:   Thu May 19 17:12:50 2016 -0700
> > > 
> > >     mm: /proc/sys/vm/stat_refresh to force vmstat update
> > > 
> > >     Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > >     per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > >     before checking counts when testing.  Originally added to work around a
> > >     bug which left counts stranded indefinitely on a cpu going idle (an
> > >     inaccuracy magnified when small below-batch numbers represent "huge"
> > >     amounts of memory), but I believe that bug is now fixed: nonetheless,
> > >     this is still a useful knob.
> > 
> > No need to quote the full changelog.

I think it's useful to put things in perspective.

> > > Other than the potential interruption to a time sensitive application,
> > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > system hangs can occur:
> > > 
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> > 
> > Confused... This report says that accessing the file (i.e. to force the
> > refresh) can get stalled because high priority tasks will not allow
> > kworkers to run. No?
> 
> Yes.
> 
> > There is simply no way around that unless those kworkers inherit the
> > priority.
> 
> stalld is an attempt to work around the situation by allowing the
> lower priority processes to execute for a small amount of time
> (for example 20us every 1s). https://github.com/bristot/stalld:
> 
> "The stalld program (which stands for 'stall daemon') is a mechanism to
> prevent the starvation of operating system threads in a Linux system.
> The premise is to start up on a housekeeping cpu (one that is not used
> for real-application purposes) and to periodically monitor the state of
> each thread in the system, looking for a thread that has been on a run
> queue (i.e. ready to run) for a specified length of time without being
> run. This condition is usually hit when the thread is on the same cpu
> as a high-priority cpu-intensive task and therefore is being given no
> opportunity to run.
> 
> When a thread is judged to be starving, stalld changes that thread to
> use the SCHED_DEADLINE policy and gives the thread a small slice of time
> for that cpu (specified on the command line). The thread then runs and
> when that timeslice is used, the thread is then returned to its original
> scheduling policy and stalld then continues to monitor thread states."
> 
> Unfortunately, if you allow that, then the latency sensitive
> application might be interrupted for longer than acceptable
> (which is the case for a certain class of applications, for example
> SoftPLC inside a VM).
> 
> > It certainly is unfortunate that the call is not killable
> > but being stuck behind real time busy looping processes is nothing
> > really uncommon. One has to be really careful when using real time
> > priorities.
> 
> Yes.
> 
> > > To avoid the problems above, do not schedule the work to synchronize
> > > per-CPU mm counters on isolated CPUs. Given the possibility for
> > > breaking existing userspace applications, avoid changing
> > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > returning errors to userspace.
> > 
> > You are changing the behavior. The preexisting behavior was to flush
> > everything. This is clearly changing that.
> 
> I meant that this patch does not cause read/write to the procfs file 
> to return errors.
> 
> I believe returning errors has a higher potential for regressions
> than not flushing per-CPU VM counters of isolated CPUs (which are
> bounded).
> 
> > > ---
> > > 
> > > > Also would it make some sense to reduce flushing to cpumask 
> > > > of the calling process? (certainly a daring thought but have
> > > > you even considered it?)
> > > 
> > > I fail to see the point here.
> > 
> > I mean that, if you already want to change the semantic of the call then
> > it would likely be safer to change it in a more robust way and only
> > flush pcp vmstat caches that are in the process effective cpu mask. 
> 
> That would change behaviour for systems without isolated CPUs.
> 
> > This
> > way one can control which pcp caches to flush (e.g. those that are not
> > on isolated CPUs or contrary those that are isolated but you can afford 
> > to flush at the specific moment). See?
> 
> Yes, but I am not sure what to think of this idea.
> 
> > Now I am not saying this is the right way to go because there is still a
> > slim chance this will break userspace expectations. Therefore I have
> > asked why you simply do not stop any random application accessing
> > stat_refresh in the first place.
> 
> I think this is what should be done, but not on the current patchset.
> 
> https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
> 
> Regarding housekeeping flags, it is usually the case that initialization might
> require code execution on interference blocked CPUs (for example MTRR
> initialization, resctrlfs initialization, MSR writes, ...). Therefore
> tagging the CPUs after system initialization is necessary, which
> is not possible with current housekeeping flags infrastructure.
> 
> >  These highly specialized setups with
> > isolated resources shouldn't run arbitrary crap, should they?
> 
> The problem is that it's hard to control what people run on a system.
> 
> > What if I just start allocating memory and get the system close to OOM?
> 
> Sure, or "poweroff".
> 
> > I am
> > pretty sure a small latency induced by the vmstat refreshes is the least
> > problem you will have.
> 
> If the OOM codepath neither sends IPIs nor queues work on isolated
> CPUs, then OOM should be fine.
> 
> > So please step back and try to think whether this is actually fixing
> > anything real before trying to change a user visible interface.
> 
> It is fixing either a latency violation or a hang on a system where some user or
> piece of software happens to run "sysctl -a" (or read vmstat_refresh).
> 
> If one is using CPU isolation, the latency violation has higher 
> priority than vmstat_refresh returning proper counters.

OK, so this patch is not going to include the per-CPU vmstat counters
(up to the threshold) in the synchronization step of reading/writing to the
vmstat_refresh file.

This is a tradeoff: one prefers less accurate counters (for a procfs
file whose value is going to be interpreted, and whose accuracy might
or might not matter) over interrupting an isolated CPU.



* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-05 17:35         ` Marcelo Tosatti
@ 2023-06-05 18:57           ` Michal Hocko
  2023-06-05 19:14             ` Marcelo Tosatti
  0 siblings, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05 18:57 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon 05-06-23 14:35:56, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 05:55:49PM +0200, Michal Hocko wrote:
> > > The example above shows an additional 7us for the
> > > 
> > >         oslat -> kworker -> oslat
> > > 
> > > switches. In the case of a virtualized CPU, and the vmstat_update
> > > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > > observed in the guest is higher than 50us, violating the acceptable
> > > latency threshold.
> > > 
> > > The isolated vCPU can perform operations that modify per-CPU page counters,
> > > for example to complete I/O operations:
> > > 
> > >       CPU 11/KVM-9540    [001] dNh1.  2314.248584: mod_zone_page_state <-__folio_end_writeback
> > >       CPU 11/KVM-9540    [001] dNh1.  2314.248585: <stack trace>
> > >  => 0xffffffffc042b083
> > >  => mod_zone_page_state
> > >  => __folio_end_writeback
> > >  => folio_end_writeback
> > >  => iomap_finish_ioend
> > >  => blk_mq_end_request_batch
> > >  => nvme_irq
> > >  => __handle_irq_event_percpu
> > >  => handle_irq_event
> > >  => handle_edge_irq
> > >  => __common_interrupt
> > >  => common_interrupt
> > >  => asm_common_interrupt
> > >  => vmx_do_interrupt_nmi_irqoff
> > >  => vmx_handle_exit_irqoff
> > >  => vcpu_enter_guest
> > >  => vcpu_run
> > >  => kvm_arch_vcpu_ioctl_run
> > >  => kvm_vcpu_ioctl
> > >  => __x64_sys_ioctl
> > >  => do_syscall_64
> > >  => entry_SYSCALL_64_after_hwframe
> > 
> > OK, this is really useful. It is just not really clear whether the IO
> > triggered here is from the guest or is a host activity.
> 
> Guest initiated I/O, since the host CPU is isolated.

Make it explicit in the changelog.

I am just wondering how you can achieve your strict deadlines when IO is
involved, but that is another story, I guess.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05 18:14         ` Marcelo Tosatti
  2023-06-05 18:25           ` Marcelo Tosatti
@ 2023-06-05 19:12           ` Michal Hocko
  2023-06-05 19:44             ` Marcelo Tosatti
  1 sibling, 1 reply; 17+ messages in thread
From: Michal Hocko @ 2023-06-05 19:12 UTC (permalink / raw)
  To: Marcelo Tosatti
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon 05-06-23 15:14:25, Marcelo Tosatti wrote:
> On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > The interruption caused by queueing work on nohz_full CPUs 
> > > > > is undesirable for certain applications.
> > > > 
> > > > This is not a proper changelog. I am not going to write a changelog for
> > > > you this time. Please explain why this is really needed and why this
> > > > approach is desired. 
> > > > E.g. why don't you prevent userspace from
> > > > refreshing stats if interference is not desirable.
> > > 
> > > Michal,
> > > 
> > > Can you please check if the following looks better, as
> > > a changelog? thanks
> > > 
> > > ---
> > > 
> > > The schedule_work_on API uses the workqueue mechanism to
> > > queue a work item for execution on a given CPU. A kernel
> > > thread, which runs on the target CPU, executes those work items.
> > > 
> > > Therefore, when using the schedule_work_on API,
> > > it is necessary for the kworker kernel thread to
> > > be scheduled in, for the work function to be executed.
> > > 
> > > Time-sensitive applications such as SoftPLCs
> > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf)
> > > have their response times affected by such interruptions.
> > > 
> > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > > 
> > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > Author: Hugh Dickins <hughd@google.com>
> > > Date:   Thu May 19 17:12:50 2016 -0700
> > > 
> > >     mm: /proc/sys/vm/stat_refresh to force vmstat update
> > > 
> > >     Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > >     per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > >     before checking counts when testing.  Originally added to work around a
> > >     bug which left counts stranded indefinitely on a cpu going idle (an
> > >     inaccuracy magnified when small below-batch numbers represent "huge"
> > >     amounts of memory), but I believe that bug is now fixed: nonetheless,
> > >     this is still a useful knob.
> > 
> > No need to quote the full changelog.
> >  
> > > Other than the potential interruption to a time sensitive application,
> > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > system hangs can occur:
> > > 
> > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> > 
> > Confused... This report says that accessing the file (i.e. to force the
> > refresh) can get stalled because high priority tasks will not allow
> > kworkers to run. No?
> 
> Yes.
> 
> > There is simply no way around that unless those kworkers inherit the
> > priority.
> 
> stalld is an attempt to work around the situation by allowing the 
> lower priority processes to execute for a small amount of time
> (for example 20us every 1s). https://github.com/bristot/stalld:
> 
> "The stalld program (which stands for 'stall daemon') is a mechanism to
> prevent the starvation of operating system threads in a Linux system.
> The premise is to start up on a housekeeping cpu (one that is not used
> for real-application purposes) and to periodically monitor the state of
> each thread in the system, looking for a thread that has been on a run
> queue (i.e. ready to run) for a specified length of time without being
> run. This condition is usually hit when the thread is on the same cpu
> as a high-priority cpu-intensive task and therefore is being given no
> opportunity to run.
>
> When a thread is judged to be starving, stalld changes that thread to
> use the SCHED_DEADLINE policy and gives the thread a small slice of time
> for that cpu (specified on the command line). The thread then runs and
> when that timeslice is used, the thread is then returned to its original
> scheduling policy and stalld then continues to monitor thread states."

But your task is not on the runqueue. It is sleeping and waiting for
completion, so I fail to see how this all is related. The problem is that
userspace depends on a kernel workqueue item completing. There are quite
a few operations that behave like that.
 
> Unfortunately, if you allow that, then the latency sensitive
> application might be interrupted for longer than acceptable
> (which is the case for a certain class of applications, for example
> SoftPLC inside a VM).

I am losing you again. You can either have top-priority processes
running uninterrupted, or latency-sensitive ones running within your SLAs.
Even if you apply this patch you cannot protect your sensitive
application from CPU top prio hogs. You either have your process
priorities configured properly or not. I really fail to follow your line
of arguments here.

> > It certainly is unfortunate that the call is not killable
> > but being stuck behind real time busy looping processes is nothing
> > really uncommon. One has to be really careful when using real time
> > priorities.
> 
> Yes.
> 
> > > To avoid the problems above, do not schedule the work to synchronize
> > > per-CPU mm counters on isolated CPUs. Given the possibility of
> > > breaking existing userspace applications, avoid changing
> > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > returning errors to userspace.
> > 
> > You are changing the behavior. The preexisting behavior was to flush
> > everything. This is clearly changing that.
> 
> I meant that this patch does not cause read/write to the procfs file 
> to return errors.
> 
> I believe returning errors has a higher potential for regressions
> than not flushing per-CPU VM counters of isolated CPUs (which are
> bounded).

A silent change of behavior is even worse because you cannot really tell
the difference.

> > > > Also would it make some sense to reduce flushing to cpumask 
> > > > of the calling process? (certainly a daring thought but have
> > > > you even considered it?)
> > > 
> > > I fail to see the point here.
> > 
> > I mean that, if you already want to change the semantic of the call then
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > it would likely be safer to change it in a more robust way and only
> > flush pcp vmstat caches that are in the process effective cpu mask. 
> 
> That would change behaviour for systems without isolated CPUs.

yes, see above.

> > This
> > way one can control which pcp caches to flush (e.g. those that are not
> > on isolated CPUs or, on the contrary, those that are isolated but which
> > you can afford to flush at that specific moment). See?
> 
> Yes, but not sure what to think of this idea. 

If you want to break something, at least make the change more generic
rather than magically single-purpose (isolcpu/nohz in this case) oriented.

> > Now I am not saying this is the right way to go because there is still a
> > slim chance this will break userspace expectations. Therefore I have
> > asked why you simply do not stop any random application accessing
> > stat_refresh in the first place.
> 
> I think this is what should be done, but not on the current patchset.
> 
> https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
> 
> Regarding housekeeping flags, it is usually the case that initialization might
> require code execution on interference-blocked CPUs (for example MTRR
> initialization, resctrlfs initialization, MSR writes, ...). Therefore
> tagging the CPUs after system initialization is necessary, which
> is not possible with the current housekeeping flags infrastructure.
> 
> >  These highly specialized setups with
> > isolated resources shouldn't run arbitrary crap, should they?
> 
> Problem is that it's hard to control what people run on a system.
> 
> > What if I just start allocating memory and get the system close to OOM. 
> 
> Sure, or "poweroff".
> 
> > I am
> > pretty sure a small latency induced by the vmstat refreshes is the least
> > of the problems you will have.
> 
> If the OOM codepath neither sends IPIs nor queues work on isolated CPUs,
> then OOM should be fine.

You are missing the big picture, I am afraid. IPIs are the least of the
problems in that case (just imagine all the indirect dependencies through
locking - taking a lock held by somebody requesting memory).

> > So please step back and try to think whether this is actually fixing
> > anything real before trying to change a user visible interface.
> 
> It is fixing either a latency violation or a hang on a system where some user or
> piece of software happens to run "sysctl -a" (or read vmstat_refresh).

I believe we have established the "hang problem" as described above is
not fixable by this patch. And I still argue that you should simply not
allow abuse of the interface if you want to have any latency guarantees.
Same as with any other kernel activity where you can compete for
resources (directly or indirectly).
 
> If one is using CPU isolation, the latency violation has higher 
> priority than vmstat_refresh returning proper counters.

This is a strong claim without any actual argument other than you would
like to have it this way.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs
  2023-06-05 18:57           ` Michal Hocko
@ 2023-06-05 19:14             ` Marcelo Tosatti
  0 siblings, 0 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 19:14 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 08:57:15PM +0200, Michal Hocko wrote:
> > Guest initiated I/O, since the host CPU is isolated.
> 
> Make it explicit in the changelog.

I think a better use of our time would be to focus on

https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html

1) Operate in terms of add CPU, remove CPU on sysfs (to avoid races).
2) Don't allow all CPUs to be marked as "block_interf".
3) Remove percpu rwsem lock.

> I am just wondering how you can achieve your strict deadlines when IO is
> involved, but that is another story I guess.

IO can be submitted when the ELF binary and libraries are read
from the virtual disk, for example. 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
  2023-06-05 19:12           ` Michal Hocko
@ 2023-06-05 19:44             ` Marcelo Tosatti
  0 siblings, 0 replies; 17+ messages in thread
From: Marcelo Tosatti @ 2023-06-05 19:44 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker,
	Andrew Morton, linux-kernel, linux-mm, Vlastimil Babka

On Mon, Jun 05, 2023 at 09:12:01PM +0200, Michal Hocko wrote:
> On Mon 05-06-23 15:14:25, Marcelo Tosatti wrote:
> > On Mon, Jun 05, 2023 at 06:10:57PM +0200, Michal Hocko wrote:
> > > On Mon 05-06-23 12:43:24, Marcelo Tosatti wrote:
> > > > On Mon, Jun 05, 2023 at 09:59:57AM +0200, Michal Hocko wrote:
> > > > > On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> > > > > > The interruption caused by queueing work on nohz_full CPUs 
> > > > > > is undesirable for certain applications.
> > > > > 
> > > > > This is not a proper changelog. I am not going to write a changelog for
> > > > > you this time. Please explain why this is really needed and why this
> > > > > approach is desired. 
> > > > > E.g. why don't you prevent userspace from
> > > > > refreshing stats if interference is not desirable.
> > > > 
> > > > Michal,
> > > > 
> > > > Can you please check if the following looks better, as
> > > > a changelog? thanks
> > > > 
> > > > ---
> > > > 
> > > > The schedule_work_on API uses the workqueue mechanism to
> > > > queue a work item for execution on a given CPU. A kernel
> > > > thread, which runs on the target CPU, executes those work items.
> > > > 
> > > > Therefore, when using the schedule_work_on API,
> > > > it is necessary for the kworker kernel thread to
> > > > be scheduled in, for the work function to be executed.
> > > > 
> > > > Time-sensitive applications such as SoftPLCs
> > > > (https://tum-esi.github.io/publications-list/PDF/2022-ETFA-How_Real_Time_Are_Virtual_PLCs.pdf)
> > > > have their response times affected by such interruptions.
> > > > 
> > > > The /proc/sys/vm/stat_refresh file was originally introduced by
> > > > 
> > > > commit 52b6f46bc163eef17ecba4cd552beeafe2b24453
> > > > Author: Hugh Dickins <hughd@google.com>
> > > > Date:   Thu May 19 17:12:50 2016 -0700
> > > > 
> > > >     mm: /proc/sys/vm/stat_refresh to force vmstat update
> > > > 
> > > >     Provide /proc/sys/vm/stat_refresh to force an immediate update of
> > > >     per-cpu into global vmstats: useful to avoid a sleep(2) or whatever
> > > >     before checking counts when testing.  Originally added to work around a
> > > >     bug which left counts stranded indefinitely on a cpu going idle (an
> > > >     inaccuracy magnified when small below-batch numbers represent "huge"
> > > >     amounts of memory), but I believe that bug is now fixed: nonetheless,
> > > >     this is still a useful knob.
> > > 
> > > No need to quote the full changelog.
> > >  
> > > > Other than the potential interruption to a time sensitive application,
> > > > if using SCHED_FIFO or SCHED_RR priority on the isolated CPU, then
> > > > system hangs can occur:
> > > > 
> > > > https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978688
> > > 
> > > Confused... This report says that accessing the file (i.e. to force the
> > > refresh) can get stalled because high priority tasks will not allow
> > > kworkers to run. No?
> > 
> > Yes.
> > 
> > > There is simply no way around that unless those kworkers inherit the
> > > priority.
> > 
> > stalld is an attempt to work around the situation by allowing the 
> > lower priority processes to execute for a small amount of time
> > (for example 20us every 1s). https://github.com/bristot/stalld:
> > 
> > "The stalld program (which stands for 'stall daemon') is a mechanism to
> > prevent the starvation of operating system threads in a Linux system.
> > The premise is to start up on a housekeeping cpu (one that is not used
> > for real-application purposes) and to periodically monitor the state of
> > each thread in the system, looking for a thread that has been on a run
> > queue (i.e. ready to run) for a specified length of time without being
> > run. This condition is usually hit when the thread is on the same cpu
> > as a high-priority cpu-intensive task and therefore is being given no
> > opportunity to run.
> >
> > When a thread is judged to be starving, stalld changes that thread to
> > use the SCHED_DEADLINE policy and gives the thread a small slice of time
> > for that cpu (specified on the command line). The thread then runs and
> > when that timeslice is used, the thread is then returned to its original
> > scheduling policy and stalld then continues to monitor thread states."
> 
> But your task is not on the runqueue. It is sleeping and waiting for
> completion, so I fail to see how this all is related. The problem is that
> userspace depends on a kernel workqueue item completing. There are quite
> a few operations that behave like that.

Yes. In more detail, there are two cases with the current kernel code:

Case-1) No stalld daemon running. A SCHED_FIFO task on an isolated CPU does:

	/* busy-poll loop: never blocks, never yields the CPU */
	do {
		pkt = read_network_packet();
		if (pkt)
			process_pkt(pkt);
	} while (!stop_request);

Someone else runs "echo 1 > /proc/sys/vm/stat_refresh" or
"sysctl -a".

flush_work() hangs, because the SCHED_FIFO task never yields the processor
for the kworker to finish executing the vmstat_refresh work.
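
Schematically, the write side blocks like this (a simplified sketch of
the schedule_on_each_cpu() pattern the stat_refresh handler relies on;
allocation failure handling omitted):

	struct work_struct __percpu *works = alloc_percpu(struct work_struct);
	int cpu;

	for_each_online_cpu(cpu) {
		INIT_WORK(per_cpu_ptr(works, cpu), refresh_vm_stats);
		schedule_work_on(cpu, per_cpu_ptr(works, cpu));
	}
	for_each_online_cpu(cpu)
		/* waits forever if that CPU's kworker never gets to run */
		flush_work(per_cpu_ptr(works, cpu));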

Case-2) stalld daemon running. A SCHED_FIFO task on an isolated CPU, with
the same condition as above. The stalld daemon detects the kworker
"starving" (not being able to execute), changes its priority to
SCHED_DEADLINE for 20us, then changes it back to SCHED_OTHER.

This causes the packet processing thread to be interrupted for 20us, 
which is not what is expected from the system.
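
For reference, the boost stalld applies is essentially a temporary
SCHED_DEADLINE window set via sched_setattr(2). A minimal userspace
sketch of the idea (the helper name and the 20us/1s parameters are
illustrative, not stalld's actual code; the struct layout mirrors the
sched uapi):

	#include <stdint.h>
	#include <unistd.h>
	#include <sys/syscall.h>

	struct sched_attr {
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;
		uint32_t sched_priority;
		uint64_t sched_runtime;		/* ns */
		uint64_t sched_deadline;	/* ns */
		uint64_t sched_period;		/* ns */
	};

	#define SCHED_DEADLINE	6	/* from the sched uapi headers */

	/* give @pid 20us of guaranteed runtime per 1s period;
	 * needs privileges (CAP_SYS_NICE) */
	static int boost_starving_thread(pid_t pid)
	{
		struct sched_attr attr = {
			.size           = sizeof(attr),
			.sched_policy   = SCHED_DEADLINE,
			.sched_runtime  = 20 * 1000ULL,		/* 20us */
			.sched_deadline = 1000000000ULL,	/* 1s */
			.sched_period   = 1000000000ULL,	/* 1s */
		};

		return syscall(SYS_sched_setattr, pid, &attr, 0);
	}

SCHED_DEADLINE preempts SCHED_FIFO, so the kworker runs for its slice;
stalld then restores the thread's original policy.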

---

Now with this patch integrated:

Case-1B) 

"echo 1 > /proc/sys/vm/stat_refresh" or "sysctl -a" returns, no system
hang.

Case-2B) 

No work is scheduled to run on the isolated CPU, packet processing
is not interrupted for 20us, and everyone is happy.

---

In both B cases, kernel statistics were not fully synchronized from per-CPU 
counters to global counters (but hopefully nobody cares).
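
With the patch, the synchronization becomes roughly an opencoded
schedule_on_each_cpu() that skips isolated CPUs; a sketch of the shape of
that change (function name and exact structure illustrative, not the
literal patch):

	static int refresh_all_vm_stats(void)
	{
		int cpu;
		struct work_struct __percpu *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		cpus_read_lock();
		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			/* never queue work on isolated/nohz_full CPUs */
			if (cpu_is_isolated(cpu))
				continue;
			INIT_WORK(work, refresh_vm_stats);
			schedule_work_on(cpu, work);
		}
		for_each_online_cpu(cpu) {
			if (cpu_is_isolated(cpu))
				continue;
			flush_work(per_cpu_ptr(works, cpu));
		}
		cpus_read_unlock();

		free_percpu(works);
		return 0;
	}

flush_work() is only called for CPUs whose kworkers can actually run, so
the writer no longer blocks behind a busy-looping SCHED_FIFO task.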

> > Unfortunately, if you allow that, then the latency sensitive
> > application might be interrupted for longer than acceptable
> > (which is the case for a certain class of applications, for example
> > SoftPLC inside a VM).
> 
> I am losing you again. You can either have top-priority processes
> running uninterrupted, or latency-sensitive ones running within your SLAs.
> Even if you apply this patch you cannot protect your sensitive
> application from CPU top prio hogs. You either have your process
> priorities configured properly or not. I really fail to follow your line
> of arguments here.

Hopefully what is written above helps.

> > > It certainly is unfortunate that the call is not killable
> > > but being stuck behind real time busy looping processes is nothing
> > > really uncommon. One has to be really careful when using real time
> > > priorities.
> > 
> > Yes.
> > 
> > > > To avoid the problems above, do not schedule the work to synchronize
> > > > per-CPU mm counters on isolated CPUs. Given the possibility of
> > > > breaking existing userspace applications, avoid changing
> > > > behaviour of access to /proc/sys/vm/stat_refresh, such as
> > > > returning errors to userspace.
> > > 
> > > You are changing the behavior. The preexisting behavior was to flush
> > > everything. This is clearly changing that.
> > 
> > I meant that this patch does not cause read/write to the procfs file 
> > to return errors.
> > 
> > I believe returning errors has a higher potential for regressions
> > than not flushing per-CPU VM counters of isolated CPUs (which are
> > bounded).
> 
> A silent change of behavior is even worse because you cannot really tell
> the difference.

What do you suggest?

> > > > > Also would it make some sense to reduce flushing to cpumask 
> > > > > of the calling process? (certainly a daring thought but have
> > > > > you even considered it?)
> > > > 
> > > > I fail to see the point here.
> > > 
> > > I mean that, if you already want to change the semantic of the call then
>                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > it would likely be safer to change it in a more robust way and only
> > > flush pcp vmstat caches that are in the process effective cpu mask. 
> > 
> > That would change behaviour for systems without isolated CPUs.
> 
> yes, see above.
> 
> > > This
> > > way one can control which pcp caches to flush (e.g. those that are not
> > > on isolated CPUs or, on the contrary, those that are isolated but which
> > > you can afford to flush at that specific moment). See?
> > 
> > Yes, but not sure what to think of this idea. 
> 
> If you want to break something, at least make the change more generic
> rather than magically single-purpose (isolcpu/nohz in this case) oriented.

I hope nobody cares about stat_refresh being off by at most the per-CPU
stat threshold (which is a small percentage).

The alternative was to synchronize the per-CPU counters remotely, but that
was deemed too complex.
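
As a rough bound, assuming the usual maximum per-CPU stat threshold of
125 pages: with, say, 32 isolated CPUs, a given counter can be stale by
at most 32 * 125 = 4000 pages, i.e. about 15.6 MiB with 4KiB pages.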

> > > Now I am not saying this is the right way to go because there is still a
> > > slim chance this will break userspace expectations. Therefore I have
> > > asked why you simply do not stop any random application accessing
> > > stat_refresh in the first place.
> > 
> > I think this is what should be done, but not on the current patchset.
> > 
> > https://lkml.iu.edu/hypermail/linux/kernel/2209.1/01263.html
> > 
> > Regarding housekeeping flags, it is usually the case that initialization might
> > require code execution on interference-blocked CPUs (for example MTRR
> > initialization, resctrlfs initialization, MSR writes, ...). Therefore
> > tagging the CPUs after system initialization is necessary, which
> > is not possible with the current housekeeping flags infrastructure.
> > 
> > >  These highly specialized setups with
> > > isolated resources shouldn't run arbitrary crap, should they?
> > 
> > Problem is that it's hard to control what people run on a system.
> > 
> > > What if I just start allocating memory and get the system close to OOM. 
> > 
> > Sure, or "poweroff".
> > 
> > > I am
> > > pretty sure a small latency induced by the vmstat refreshes is the least
> > > of the problems you will have.
> > 
> > If the OOM codepath neither sends IPIs nor queues work on isolated CPUs,
> > then OOM should be fine.
> 
> You are missing the big picture, I am afraid. IPIs are the least of the
> problems in that case (just imagine all the indirect dependencies through
> locking - taking a lock held by somebody requesting memory).
> 
> > > So please step back and try to think whether this is actually fixing
> > > anything real before trying to change a user visible interface.
> > 
> > It is fixing either a latency violation or a hang on a system where some user or
> > piece of software happens to run "sysctl -a" (or read vmstat_refresh).
> 
> I believe we have established the "hang problem" as described above is
> not fixable by this patch. And I still argue that you should simply not
> allow abuse of the interface if you want to have any latency guarantees.
> Same as with any other kernel activity where you can compete for
> resources (directly or indirectly).
>  
> > If one is using CPU isolation, the latency violation has higher 
> > priority than vmstat_refresh returning proper counters.
> 
> This is a strong claim without any actual argument other than you would
> like to have it this way.

Well, it's the least bad option that I can see.

Do you think returning an error from the procfs file handler, if an
isolated CPU is encountered, is not a potential source of regressions?

Again, it seemed to me that returning an error only if a different
flag is enabled (a flag which is disabled by default), similar to what
is suggested in the "block interference" patchset, would be more
resilient against regressions. But I don't have a strong preference.



^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2023-06-05 19:45 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-02 18:57 [PATCH v2 0/3] vmstat bug fixes for nohz_full and isolated CPUs Marcelo Tosatti
2023-06-02 18:57 ` [PATCH v2 1/3] vmstat: allow_direct_reclaim should use zone_page_state_snapshot Marcelo Tosatti
2023-06-02 18:57 ` [PATCH v2 2/3] vmstat: skip periodic vmstat update for nohz full CPUs Marcelo Tosatti
2023-06-05  7:55   ` Michal Hocko
2023-06-05 14:53     ` Marcelo Tosatti
2023-06-05 15:55       ` Michal Hocko
2023-06-05 17:35         ` Marcelo Tosatti
2023-06-05 18:57           ` Michal Hocko
2023-06-05 19:14             ` Marcelo Tosatti
2023-06-02 18:58 ` [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs Marcelo Tosatti
2023-06-05  7:59   ` Michal Hocko
2023-06-05 15:43     ` Marcelo Tosatti
2023-06-05 16:10       ` Michal Hocko
2023-06-05 18:14         ` Marcelo Tosatti
2023-06-05 18:25           ` Marcelo Tosatti
2023-06-05 19:12           ` Michal Hocko
2023-06-05 19:44             ` Marcelo Tosatti
