* [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
@ 2022-05-18 22:38 Vaibhav Jain
  2022-05-18 22:46 ` Yosry Ahmed
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Vaibhav Jain @ 2022-05-18 22:38 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-mm
  Cc: Vaibhav Jain, Tejun Heo, Zefan Li, Johannes Weiner,
	Jonathan Corbet, Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed

[1] provides a way for user-space to trigger proactive reclaim by introducing
a write-only memcg file 'memory.reclaim'. However, reclaim stats such as the
number of pages scanned and reclaimed are still not directly available to
user-space.

This patch proposes to extend [1] by making the memcg file 'memory.reclaim'
readable: reading it returns the number of pages scanned / reclaimed during
the reclaim process, taken from the 'struct vmpressure' associated with each
memcg. This should let user-space assess how successful the proactive reclaim
triggered via 'memory.reclaim' was.

With the patch, the following command flow is expected:

 # echo "1M" > memory.reclaim

 # cat memory.reclaim
   scanned 76
   reclaimed 32

[1]:  https://lore.kernel.org/r/20220425190040.2475377-1-yosryahmed@google.com

Cc: Shakeel Butt <shakeelb@google.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 15 ++++++++++++---
 mm/memcontrol.c                         | 14 ++++++++++++++
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 27ebef2485a3..44610165261d 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1209,18 +1209,27 @@ PAGE_SIZE multiple when read back.
 	utility is limited to providing the final safety net.
 
   memory.reclaim
-	A write-only nested-keyed file which exists for all cgroups.
+	A nested-keyed file which exists for all cgroups.
 
-	This is a simple interface to trigger memory reclaim in the
-	target cgroup.
+	This is a simple interface to trigger memory reclaim and retrieve
+	reclaim stats in the target cgroup.
 
 	This file accepts a single key, the number of bytes to reclaim.
 	No nested keys are currently supported.
 
+	Reading the file returns the number of pages scanned and the
+	number of pages reclaimed from the memcg. This information is
+	fetched from the vmpressure info associated with each cgroup.
+
 	Example::
 
 	  echo "1G" > memory.reclaim
 
+	  cat memory.reclaim
+
+	  scanned 78
+	  reclaimed 30
+
 	The interface can be later extended with nested keys to
 	configure the reclaim behavior. For example, specify the
 	type of memory to reclaim from (anon, file, ..).
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2e2bfbed4717..9e43580a8726 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6423,6 +6423,19 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
 	return nbytes;
 }
 
+static int memory_reclaim_show(struct seq_file *m, void *v)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+
+	spin_lock(&vmpr->sr_lock);
+	seq_printf(m, "scanned %lu\nreclaimed %lu\n",
+		   vmpr->scanned, vmpr->reclaimed);
+	spin_unlock(&vmpr->sr_lock);
+
+	return 0;
+}
+
 static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
 			      size_t nbytes, loff_t off)
 {
@@ -6525,6 +6538,7 @@ static struct cftype memory_files[] = {
 		.name = "reclaim",
 		.flags = CFTYPE_NS_DELEGATABLE,
 		.write = memory_reclaim,
+		.seq_show  = memory_reclaim_show,
 	},
 	{ }	/* terminate */
 };
-- 
2.35.1



* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-18 22:38 [PATCH] memcg: provide reclaim stats via 'memory.reclaim' Vaibhav Jain
@ 2022-05-18 22:46 ` Yosry Ahmed
  2022-05-19  8:50   ` Vaibhav Jain
  2022-05-19  5:08 ` Shakeel Butt
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Yosry Ahmed @ 2022-05-18 22:46 UTC (permalink / raw)
  To: Vaibhav Jain
  Cc: cgroups, linux-doc, Linux Kernel Mailing List, Linux-MM,
	Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet,
	Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt

On Wed, May 18, 2022 at 3:38 PM Vaibhav Jain <vaibhav@linux.ibm.com> wrote:
>
> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> of pages scanned and reclaimed is still not directly available to the
> user-space.
>
> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> readable which returns the number of pages scanned / reclaimed during the
> reclaim process from 'struct vmpressure' associated with each memcg. This should
> let user-space asses how successful proactive reclaim triggered from memcg
> 'memory.reclaim' was ?

Isn't this a racy read? struct vmpressure can be changed between the
write and read by other reclaim operations, right?

I was actually planning to send a patch that does not update
vmpressure for user-controlled reclaim, similar to how PSI is handled.

The interface currently returns -EBUSY if the entire amount was not
reclaimed, so isn't this enough to figure out whether it was successful or
not? If not, we can store the scanned / reclaimed counts of the last
memory.reclaim invocation for the sole purpose of memory.reclaim
reads. Maybe it is actually more intuitive to users to just read the
amount of memory reclaimed, in a format similar to the one written?

i.e.
echo "10M" > memory.reclaim
cat memory.reclaim
9M

>
> With the patch following command flow is expected:
>
>  # echo "1M" > memory.reclaim
>
>  # cat memory.reclaim
>    scanned 76
>    reclaimed 32
>
> [1]:  https://lore.kernel.org/r/20220425190040.2475377-1-yosryahmed@google.com
>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
> ---
>  Documentation/admin-guide/cgroup-v2.rst | 15 ++++++++++++---
>  mm/memcontrol.c                         | 14 ++++++++++++++
>  2 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 27ebef2485a3..44610165261d 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1209,18 +1209,27 @@ PAGE_SIZE multiple when read back.
>         utility is limited to providing the final safety net.
>
>    memory.reclaim
> -       A write-only nested-keyed file which exists for all cgroups.
> +       A nested-keyed file which exists for all cgroups.
>
> -       This is a simple interface to trigger memory reclaim in the
> -       target cgroup.
> +       This is a simple interface to trigger memory reclaim and retrieve
> +       reclaim stats in the target cgroup.
>
>         This file accepts a single key, the number of bytes to reclaim.
>         No nested keys are currently supported.
>
> +       Reading the file returns number of pages scanned and number of
> +       pages reclaimed from the memcg. This information fetched from
> +       vmpressure info associated with each cgroup.
> +
>         Example::
>
>           echo "1G" > memory.reclaim
>
> +         cat memory.reclaim
> +
> +         scanned 78
> +         reclaimed 30
> +
>         The interface can be later extended with nested keys to
>         configure the reclaim behavior. For example, specify the
>         type of memory to reclaim from (anon, file, ..).
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2e2bfbed4717..9e43580a8726 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6423,6 +6423,19 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
>         return nbytes;
>  }
>
> +static int memory_reclaim_show(struct seq_file *m, void *v)
> +{
> +       struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> +       struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +
> +       spin_lock(&vmpr->sr_lock);
> +       seq_printf(m, "scanned %lu\nreclaimed %lu\n",
> +                  vmpr->scanned, vmpr->reclaimed);
> +       spin_unlock(&vmpr->sr_lock);
> +
> +       return 0;
> +}
> +
>  static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>                               size_t nbytes, loff_t off)
>  {
> @@ -6525,6 +6538,7 @@ static struct cftype memory_files[] = {
>                 .name = "reclaim",
>                 .flags = CFTYPE_NS_DELEGATABLE,
>                 .write = memory_reclaim,
> +               .seq_show  = memory_reclaim_show,
>         },
>         { }     /* terminate */
>  };
> --
> 2.35.1
>


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-18 22:38 [PATCH] memcg: provide reclaim stats via 'memory.reclaim' Vaibhav Jain
  2022-05-18 22:46 ` Yosry Ahmed
@ 2022-05-19  5:08 ` Shakeel Butt
  2022-05-19  9:41   ` Vaibhav Jain
  2022-05-19  7:59 ` Greg Thelen
  2022-05-19 11:02 ` Michal Hocko
  3 siblings, 1 reply; 16+ messages in thread
From: Shakeel Butt @ 2022-05-19  5:08 UTC (permalink / raw)
  To: Vaibhav Jain
  Cc: cgroups, linux-doc, linux-kernel, linux-mm, Tejun Heo, Zefan Li,
	Johannes Weiner, Jonathan Corbet, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Aneesh Kumar K . V, Yosry Ahmed

On Thu, May 19, 2022 at 04:08:15AM +0530, Vaibhav Jain wrote:
> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> of pages scanned and reclaimed is still not directly available to the
> user-space.
> 
> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> readable which returns the number of pages scanned / reclaimed during the
> reclaim process from 'struct vmpressure' associated with each memcg. This should
> let user-space asses how successful proactive reclaim triggered from memcg
> 'memory.reclaim' was ?
> 
> With the patch following command flow is expected:
> 
>  # echo "1M" > memory.reclaim
> 
>  # cat memory.reclaim
>    scanned 76
>    reclaimed 32
> 

Yosry already mentioned the race issue with the implementation, and I
would prefer we don't create any new dependency on vmpressure, which I
think we should deprecate.

Anyway, my question is: how are you planning to use these metrics, i.e.
scanned & reclaimed? I wonder if the data you are interested in can be
extracted without a stable interface. Have you tried the BPF way to get
these metrics? We already have tracepoints in vmscan tracing the pages
scanned and reclaimed.
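
For example (illustrative only, assuming tracefs is mounted at
/sys/kernel/tracing and that this kernel exposes the
mm_vmscan_memcg_reclaim_end event, which reports nr_reclaimed for each
memcg reclaim invocation; nr_scanned is similarly available from
mm_vmscan_lru_shrink_inactive):

 # cd /sys/kernel/tracing
 # echo 1 > events/vmscan/mm_vmscan_memcg_reclaim_end/enable
 # echo "1M" > /sys/fs/cgroup/test/memory.reclaim
 # grep mm_vmscan_memcg_reclaim_end trace
 # echo 0 > events/vmscan/mm_vmscan_memcg_reclaim_end/enable

The cgroup path here is just a placeholder; a BPF program attached to the
same event could aggregate the numbers per memcg instead of parsing the
trace buffer.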


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-18 22:38 [PATCH] memcg: provide reclaim stats via 'memory.reclaim' Vaibhav Jain
  2022-05-18 22:46 ` Yosry Ahmed
  2022-05-19  5:08 ` Shakeel Butt
@ 2022-05-19  7:59 ` Greg Thelen
  2022-05-19  9:56   ` Vaibhav Jain
  2022-05-19 11:02 ` Michal Hocko
  3 siblings, 1 reply; 16+ messages in thread
From: Greg Thelen @ 2022-05-19  7:59 UTC (permalink / raw)
  To: Vaibhav Jain, cgroups, linux-doc, linux-kernel, linux-mm
  Cc: Vaibhav Jain, Tejun Heo, Zefan Li, Johannes Weiner,
	Jonathan Corbet, Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed

Vaibhav Jain <vaibhav@linux.ibm.com> wrote:

> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> of pages scanned and reclaimed is still not directly available to the
> user-space.
>
> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> readable which returns the number of pages scanned / reclaimed during the
> reclaim process from 'struct vmpressure' associated with each memcg. This should
> let user-space asses how successful proactive reclaim triggered from memcg
> 'memory.reclaim' was ?
>
> With the patch following command flow is expected:
>
>  # echo "1M" > memory.reclaim
>
>  # cat memory.reclaim
>    scanned 76
>    reclaimed 32

I certainly appreciate the ability for shell scripts to demonstrate
cgroup operations with textual interfaces, but such interfaces seem like
they are optimized for ease of use by developers.

I wonder if, for runtime production use, an ioctl or netlink interface has
been considered for cgroup? I don't think there are any yet, but such
approaches seem like a more straightforward way to get nontrivial
inputs/outputs from a single call (e.g. like this proposal). And they
have the benefit of not requiring ASCII serialization/parsing overhead.

> [1]:  https://lore.kernel.org/r/20220425190040.2475377-1-yosryahmed@google.com
>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Yosry Ahmed <yosryahmed@google.com>
> Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
> ---
>  Documentation/admin-guide/cgroup-v2.rst | 15 ++++++++++++---
>  mm/memcontrol.c                         | 14 ++++++++++++++
>  2 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 27ebef2485a3..44610165261d 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1209,18 +1209,27 @@ PAGE_SIZE multiple when read back.
>  	utility is limited to providing the final safety net.
>  
>    memory.reclaim
> -	A write-only nested-keyed file which exists for all cgroups.
> +	A nested-keyed file which exists for all cgroups.
>  
> -	This is a simple interface to trigger memory reclaim in the
> -	target cgroup.
> +	This is a simple interface to trigger memory reclaim and retrieve
> +	reclaim stats in the target cgroup.
>  
>  	This file accepts a single key, the number of bytes to reclaim.
>  	No nested keys are currently supported.
>  
> +	Reading the file returns number of pages scanned and number of
> +	pages reclaimed from the memcg. This information fetched from
> +	vmpressure info associated with each cgroup.
> +
>  	Example::
>  
>  	  echo "1G" > memory.reclaim
>  
> +	  cat memory.reclaim
> +
> +	  scanned 78
> +	  reclaimed 30
> +
>  	The interface can be later extended with nested keys to
>  	configure the reclaim behavior. For example, specify the
>  	type of memory to reclaim from (anon, file, ..).
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2e2bfbed4717..9e43580a8726 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6423,6 +6423,19 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
>  	return nbytes;
>  }
>  
> +static int memory_reclaim_show(struct seq_file *m, void *v)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> +	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +
> +	spin_lock(&vmpr->sr_lock);
> +	seq_printf(m, "scanned %lu\nreclaimed %lu\n",
> +		   vmpr->scanned, vmpr->reclaimed);
> +	spin_unlock(&vmpr->sr_lock);
> +
> +	return 0;
> +}
> +
>  static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>  			      size_t nbytes, loff_t off)
>  {
> @@ -6525,6 +6538,7 @@ static struct cftype memory_files[] = {
>  		.name = "reclaim",
>  		.flags = CFTYPE_NS_DELEGATABLE,
>  		.write = memory_reclaim,
> +		.seq_show  = memory_reclaim_show,
>  	},
>  	{ }	/* terminate */
>  };


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-18 22:46 ` Yosry Ahmed
@ 2022-05-19  8:50   ` Vaibhav Jain
  2022-05-19 18:22     ` Yosry Ahmed
  0 siblings, 1 reply; 16+ messages in thread
From: Vaibhav Jain @ 2022-05-19  8:50 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: cgroups, linux-doc, Linux Kernel Mailing List, Linux-MM,
	Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet,
	Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt

Hi,

Thanks for looking into this patch,

Yosry Ahmed <yosryahmed@google.com> writes:

> On Wed, May 18, 2022 at 3:38 PM Vaibhav Jain <vaibhav@linux.ibm.com> wrote:
>>
>> [1] Provides a way for user-space to trigger proactive reclaim by introducing
>> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
>> of pages scanned and reclaimed is still not directly available to the
>> user-space.
>>
>> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
>> readable which returns the number of pages scanned / reclaimed during the
>> reclaim process from 'struct vmpressure' associated with each memcg. This should
>> let user-space asses how successful proactive reclaim triggered from memcg
>> 'memory.reclaim' was ?
>
> Isn't this a racy read? struct vmpressure can be changed between the
> write and read by other reclaim operations, right?
Reads/writes of the vmpr stats are always done under vmpr->sr_lock,
which is also the case for this patch, so I am not sure how the read is
racy?

>
> I was actually planning to send a patch that does not updated
> vmpressure for user-controller reclaim, similar to how PSI is handled.
>
Ok, I am not sure I am inferring correctly how that would be
useful. Can you please provide some more context?

The primary motivation for this patch was to expose the vmpressure stats
to user space, which are available with cgroup-v1 but not with cgroup-v2,
AFAIK.

> The interface currently returns -EBUSY if the entire amount was not
> reclaimed, so isn't this enough to figure out if it was successful or
> not?
Userspace may very well want to know the amount of memory that was
partially reclaimed even though the write to "memory.reclaim" returned
'-EBUSY'. This feedback can be useful for implementing a retry loop.

> If not, we can store the scanned / reclaim counts of the last
> memory.reclaim invocation for the sole purpose of memory.reclaim
> reads.
Sure, sounds reasonable to me.

> Maybe it is actually more intuitive to users to just read the
> amount of memory read? In a format that is similar to the one written?
>
> i.e
> echo "10M" > memory.reclaim
> cat memory.reclaim
> 9M
>
Agree, I will address that in v2.

<snip>

-- 
Cheers
~ Vaibhav


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-19  5:08 ` Shakeel Butt
@ 2022-05-19  9:41   ` Vaibhav Jain
  0 siblings, 0 replies; 16+ messages in thread
From: Vaibhav Jain @ 2022-05-19  9:41 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: cgroups, linux-doc, linux-kernel, linux-mm, Tejun Heo, Zefan Li,
	Johannes Weiner, Jonathan Corbet, Michal Hocko, Vladimir Davydov,
	Andrew Morton, Aneesh Kumar K . V, Yosry Ahmed

Hi,

Thanks for looking into this patch,

Shakeel Butt <shakeelb@google.com> writes:

> On Thu, May 19, 2022 at 04:08:15AM +0530, Vaibhav Jain wrote:
>> [1] Provides a way for user-space to trigger proactive reclaim by introducing
>> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
>> of pages scanned and reclaimed is still not directly available to the
>> user-space.
>> 
>> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
>> readable which returns the number of pages scanned / reclaimed during the
>> reclaim process from 'struct vmpressure' associated with each memcg. This should
>> let user-space asses how successful proactive reclaim triggered from memcg
>> 'memory.reclaim' was ?
>> 
>> With the patch following command flow is expected:
>> 
>>  # echo "1M" > memory.reclaim
>> 
>>  # cat memory.reclaim
>>    scanned 76
>>    reclaimed 32
>> 
>
> Yosry already mentioned the race issue with the implementation and I
> would prefer we don't create any new dependency on vmpressure which I
> think we should deprecate.
Ok,

>
> Anyways my question is how are you planning to use these metrics i.e.
> scanned & reclaimed? I wonder if the data you are interested in can be
> extracted without a stable interface. Have you tried BPF way to get
> these metrics? We already have a tracepoint in vmscan tracing the
> scanned and reclaimed. 
>
Agree that there are enough static trace_mm_vmscan_ tracepoints in
vmscan to get that info.

Also agree that exposing nr_scanned/nr_reclaimed directly to userspace may not
be a good idea, but knowing the amount of memory reclaimed might be
useful.

With user-space triggered proactive reclaim, user-space code can try to
write a certain value to "memory.reclaim" in a loop until it returns
'-EBUSY'.

Right now there is no direct way for it to get feedback on the progress
of the requested reclaim. Providing a stable interface to ascertain the
progress of reclaim would let that userspace issue smaller values for
subsequent proactive reclaim requests.
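
For illustration, such a retry loop might look roughly like the sketch
below (the cgroup path and chunk size are made up for the example; the
loop relies only on the write to 'memory.reclaim' failing with -EBUSY,
since there is no read-back interface yet):

  #!/bin/sh
  # Illustrative sketch: proactively reclaim from one cgroup in small
  # chunks, stopping when the kernel cannot reclaim a full chunk (the
  # write fails with EBUSY) or after a bounded number of attempts.
  CG=/sys/fs/cgroup/test   # hypothetical cgroup path
  CHUNK=16M
  TRIES=64
  i=0
  while [ "$i" -lt "$TRIES" ] &&
        echo "$CHUNK" > "$CG/memory.reclaim" 2>/dev/null; do
          i=$((i + 1))
  done
  echo "issued $i successful $CHUNK reclaim requests"

A real reclaimer would presumably also consult memory.stat or PSI between
iterations rather than relying on -EBUSY alone.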

-- 
Cheers
~ Vaibhav


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-19  7:59 ` Greg Thelen
@ 2022-05-19  9:56   ` Vaibhav Jain
  0 siblings, 0 replies; 16+ messages in thread
From: Vaibhav Jain @ 2022-05-19  9:56 UTC (permalink / raw)
  To: Greg Thelen, cgroups, linux-doc, linux-kernel, linux-mm
  Cc: Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet,
	Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed

Hi,

Thanks for looking into this patch,

Greg Thelen <gthelen@google.com> writes:

> Vaibhav Jain <vaibhav@linux.ibm.com> wrote:
>
>> [1] Provides a way for user-space to trigger proactive reclaim by introducing
>> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
>> of pages scanned and reclaimed is still not directly available to the
>> user-space.
>>
>> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
>> readable which returns the number of pages scanned / reclaimed during the
>> reclaim process from 'struct vmpressure' associated with each memcg. This should
>> let user-space asses how successful proactive reclaim triggered from memcg
>> 'memory.reclaim' was ?
>>
>> With the patch following command flow is expected:
>>
>>  # echo "1M" > memory.reclaim
>>
>>  # cat memory.reclaim
>>    scanned 76
>>    reclaimed 32
>
> I certainly appreciate the ability for shell scripts to demonstrate
> cgroup operations with textual interfaces, but such interface seem like
> they are optimized for ease of use by developers.
>
Agree that directly exposing nr_scanned/reclaimed might not be useful
for users and certainly looks like a dev interface.

> I wonder if for runtime production use an ioctl or netlink interface has
> been considered for cgroup? I don't think there are any yet, but such
> approaches seem like a more straightforward ways to get nontrivial
> input/outputs from a single call (e.g. like this proposal). And they
> have the benefit of not requiring ascii serialization/parsing overhead.

I think to a large degree eBPF and the existing static tracepoints in vmscan
can provide access to these metrics, as Shakeel Butt pointed out earlier.

<snip>

-- 
Cheers
~ Vaibhav


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-18 22:38 [PATCH] memcg: provide reclaim stats via 'memory.reclaim' Vaibhav Jain
                   ` (2 preceding siblings ...)
  2022-05-19  7:59 ` Greg Thelen
@ 2022-05-19 11:02 ` Michal Hocko
  2022-05-20  5:15   ` Vaibhav Jain
  3 siblings, 1 reply; 16+ messages in thread
From: Michal Hocko @ 2022-05-19 11:02 UTC (permalink / raw)
  To: Vaibhav Jain
  Cc: cgroups, linux-doc, linux-kernel, linux-mm, Tejun Heo, Zefan Li,
	Johannes Weiner, Jonathan Corbet, Vladimir Davydov,
	Andrew Morton, Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed

On Thu 19-05-22 04:08:15, Vaibhav Jain wrote:
> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> of pages scanned and reclaimed is still not directly available to the
> user-space.
> 
> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> readable which returns the number of pages scanned / reclaimed during the
> reclaim process from 'struct vmpressure' associated with each memcg. This should
> let user-space asses how successful proactive reclaim triggered from memcg
> 'memory.reclaim' was ?
> 
> With the patch following command flow is expected:
> 
>  # echo "1M" > memory.reclaim
> 
>  # cat memory.reclaim
>    scanned 76
>    reclaimed 32

Why can't you use memory.stat? Sure, it would require iterating over
the reclaimed hierarchy, but the information about scanned and reclaimed
pages, as well as other potentially useful stats, is there.
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-19  8:50   ` Vaibhav Jain
@ 2022-05-19 18:22     ` Yosry Ahmed
  0 siblings, 0 replies; 16+ messages in thread
From: Yosry Ahmed @ 2022-05-19 18:22 UTC (permalink / raw)
  To: Vaibhav Jain
  Cc: cgroups, linux-doc, Linux Kernel Mailing List, Linux-MM,
	Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet,
	Michal Hocko, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt

On Thu, May 19, 2022 at 1:51 AM Vaibhav Jain <vaibhav@linux.ibm.com> wrote:
>
> Hi,
>
> Thanks for looking into this patch,
>
> Yosry Ahmed <yosryahmed@google.com> writes:
>
> > On Wed, May 18, 2022 at 3:38 PM Vaibhav Jain <vaibhav@linux.ibm.com> wrote:
> >>
> >> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> >> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> >> of pages scanned and reclaimed is still not directly available to the
> >> user-space.
> >>
> >> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> >> readable which returns the number of pages scanned / reclaimed during the
> >> reclaim process from 'struct vmpressure' associated with each memcg. This should
> >> let user-space asses how successful proactive reclaim triggered from memcg
> >> 'memory.reclaim' was ?
> >
> > Isn't this a racy read? struct vmpressure can be changed between the
> > write and read by other reclaim operations, right?
> Read/write of vmpr stats is always done in context of vmpr->sr_lock
> which is also the case for this patch. So not sure how the read is racy
> ?.

I didn't mean that you can read the value while it is being changed. I
meant that between writing to memory.reclaim and reading from it,
another reclaim operation could modify memcg vmpressure. A sequence
like this:
1) Write to memory.reclaim
2) Kernel coincidentally runs reclaim on that memcg
3) Read from memory.reclaim

The result would be that you are reading the stats of another reclaim
operation, not the one invoked by writing to memory.reclaim.

>
> >
> > I was actually planning to send a patch that does not updated
> > vmpressure for user-controller reclaim, similar to how PSI is handled.
> >
> Ok, not sure if I am inferring correctly as to how how that would be
> useful. Can you please provide some more context.

IIUC vmpressure is used as an indicator for memory pressure. In my
opinion it makes sense if vmpressure is not changed on reclaim
operations directly invoked by the user, as they are not directly
related to whether the system is under memory pressure or not. PSI is
handled in a similar way. See e22c6ed90aa9 ("mm: memcontrol: don't
count limit-setting reclaim as
memory pressure").

>
> The primary motivation for this patch was to expose the vmpressure stats
> to user space that are available with cgroup-v1 but not with cgroup-v2
> AFAIK

If the main goal is exposing vmpressure, regardless of proactive
reclaim, this is something else. AFAIK vmpressure is not popular
anymore and PSI is the more recent/better indicator.

>
> > The interface currently returns -EBUSY if the entire amount was not
> > reclaimed, so isn't this enough to figure out if it was successful or
> > not?
> Userspace may very well want to know the amount of memory that was
> partially reclaimed even though write to "memory.reclaim" returned
> '-EBUSY'. This feedback can be useful info for implementing a retry
> loop.
>
> > If not, we can store the scanned / reclaim counts of the last
> > memory.reclaim invocation for the sole purpose of memory.reclaim
> > reads.
> Sure sounds reasonable to me.
>
> > Maybe it is actually more intuitive to users to just read the
> > amount of memory read? In a format that is similar to the one written?
> >
> > i.e
> > echo "10M" > memory.reclaim
> > cat memory.reclaim
> > 9M
> >
> Agree, I will address that in v2.
>
> <snip>
>
> --
> Cheers
> ~ Vaibhav


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-19 11:02 ` Michal Hocko
@ 2022-05-20  5:15   ` Vaibhav Jain
  2022-05-20  7:29     ` Michal Hocko
  0 siblings, 1 reply; 16+ messages in thread
From: Vaibhav Jain @ 2022-05-20  5:15 UTC (permalink / raw)
  To: Michal Hocko
  Cc: cgroups, linux-doc, linux-kernel, linux-mm, Tejun Heo, Zefan Li,
	Johannes Weiner, Jonathan Corbet, Vladimir Davydov,
	Andrew Morton, Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed


Thanks for looking into this patch Michal,

Michal Hocko <mhocko@suse.com> writes:

> On Thu 19-05-22 04:08:15, Vaibhav Jain wrote:
>> [1] Provides a way for user-space to trigger proactive reclaim by introducing
>> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
>> of pages scanned and reclaimed is still not directly available to the
>> user-space.
>> 
>> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
>> readable which returns the number of pages scanned / reclaimed during the
>> reclaim process from 'struct vmpressure' associated with each memcg. This should
>> let user-space asses how successful proactive reclaim triggered from memcg
>> 'memory.reclaim' was ?
>> 
>> With the patch following command flow is expected:
>> 
>>  # echo "1M" > memory.reclaim
>> 
>>  # cat memory.reclaim
>>    scanned 76
>>    reclaimed 32
>
> Why cannot you use memory.stat? Sure it would require to iterate over
> the reclaimed hierarchy but the information about scanned and reclaimed
> pages as well as other potentially useful stats is there.

Agree that "memory.stat" is more suitable for scanned/reclaimed stats as
it already exposes a bunch of other stats.

The discussion on this patch however seems to have split into two parts:

1. Is it a good idea to expose nr_scanned/nr_reclaimed to user-space,
and if yes, how?

IMHO, it will be better to expose this info via 'memory.stat', as it
can give useful insight into reclaim efficiency and vmpressure.


2. Will it be useful to provide feedback to userspace, when it writes to
'memory.reclaim', on how much memory has been reclaimed?

IMHO, this will be useful feedback that lets userspace better adjust future
proactive reclaim requests via 'memory.reclaim'.


-- 
> Michal Hocko
> SUSE Labs
>

-- 
Cheers
~ Vaibhav


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-20  5:15   ` Vaibhav Jain
@ 2022-05-20  7:29     ` Michal Hocko
  2022-05-23 22:50       ` Yosry Ahmed
  0 siblings, 1 reply; 16+ messages in thread
From: Michal Hocko @ 2022-05-20  7:29 UTC (permalink / raw)
  To: Vaibhav Jain
  Cc: cgroups, linux-doc, linux-kernel, linux-mm, Tejun Heo, Zefan Li,
	Johannes Weiner, Jonathan Corbet, Vladimir Davydov,
	Andrew Morton, Aneesh Kumar K . V, Shakeel Butt, Yosry Ahmed

On Fri 20-05-22 10:45:43, Vaibhav Jain wrote:
> 
> Thanks for looking into this patch Michal,
> 
> Michal Hocko <mhocko@suse.com> writes:
> 
> > On Thu 19-05-22 04:08:15, Vaibhav Jain wrote:
> >> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> >> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> >> of pages scanned and reclaimed is still not directly available to the
> >> user-space.
> >> 
> >> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> >> readable which returns the number of pages scanned / reclaimed during the
> >> reclaim process from 'struct vmpressure' associated with each memcg. This should
> >> let user-space asses how successful proactive reclaim triggered from memcg
> >> 'memory.reclaim' was ?
> >> 
> >> With the patch following command flow is expected:
> >> 
> >>  # echo "1M" > memory.reclaim
> >> 
> >>  # cat memory.reclaim
> >>    scanned 76
> >>    reclaimed 32
> >
> > Why cannot you use memory.stat? Sure it would require to iterate over
> > the reclaimed hierarchy but the information about scanned and reclaimed
> > pages as well as other potentially useful stats is there.
> 
> Agree that "memory.stat" is more suitable for scanned/reclaimed stats as
> it already is exposing bunch of other stats.
> 
> The discussion on this patch however seems to have split into two parts:
> 
> 1. Is it a good idea to expose nr_scanned/nr_reclaimed to users-space
> and if yes how ?
> 
> IMHO, I think it will be better to expose this info via 'memory.stat' as it
> can be useful insight into the reclaim efficiency  and vmpressure.

We already do that with some more metrics
pgrefill 9801926
pgscan 27329762
pgsteal 22715987
pgactivate 250691267
pgdeactivate 9521843
pglazyfree 0
pglazyfreed 0
 
> 2. Will it be useful to provide feedback to userspace when it writes to
> 'memory.reclaim' on how much memory has been reclaimed ?
> 
> IMHO, this will be a useful feeback to userspace to better adjust future
> proactive reclaim requests via 'memory.reclaim'

How precise should this information be? A very simplistic approach would
be:
cp memory.stat stats.before
echo $WHATEVER > memory.reclaim
cp memory.stat stats.after

This will obviously also contain activity outside of the explicitly
triggered reclaim (racing background/direct reclaim), but isn't that what
actually matters? Are there any cases where the only metric you care
about is the triggered reclaim in isolation?
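
Spelled out a bit more (the cgroup path is illustrative; pgscan and
pgsteal are the per-cgroup counters already reported in memory.stat, as
listed above, so the deltas will also include any concurrent reclaim):

  CG=/sys/fs/cgroup/test
  scan0=$(awk '$1 == "pgscan" {print $2}' "$CG/memory.stat")
  steal0=$(awk '$1 == "pgsteal" {print $2}' "$CG/memory.stat")
  echo "1G" > "$CG/memory.reclaim"
  scan1=$(awk '$1 == "pgscan" {print $2}' "$CG/memory.stat")
  steal1=$(awk '$1 == "pgsteal" {print $2}' "$CG/memory.stat")
  echo "scanned $((scan1 - scan0)) reclaimed $((steal1 - steal0))"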

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-20  7:29     ` Michal Hocko
@ 2022-05-23 22:50       ` Yosry Ahmed
  2022-05-24 11:45         ` Johannes Weiner
  0 siblings, 1 reply; 16+ messages in thread
From: Yosry Ahmed @ 2022-05-23 22:50 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Vaibhav Jain, Cgroups, linux-doc, Linux Kernel Mailing List,
	Linux-MM, Tejun Heo, Zefan Li, Johannes Weiner, Jonathan Corbet,
	Vladimir Davydov, Andrew Morton, Aneesh Kumar K . V,
	Shakeel Butt

On Fri, May 20, 2022 at 12:29 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 20-05-22 10:45:43, Vaibhav Jain wrote:
> >
> > Thanks for looking into this patch Michal,
> >
> > Michal Hocko <mhocko@suse.com> writes:
> >
> > > On Thu 19-05-22 04:08:15, Vaibhav Jain wrote:
> > >> [1] Provides a way for user-space to trigger proactive reclaim by introducing
> > >> a write-only memcg file 'memory.reclaim'. However reclaim stats like number
> > >> of pages scanned and reclaimed is still not directly available to the
> > >> user-space.
> > >>
> > >> This patch proposes to extend [1] to make the memcg file 'memory.reclaim'
> > >> readable which returns the number of pages scanned / reclaimed during the
> > >> reclaim process from 'struct vmpressure' associated with each memcg. This should
> > >> let user-space asses how successful proactive reclaim triggered from memcg
> > >> 'memory.reclaim' was ?
> > >>
> > >> With the patch following command flow is expected:
> > >>
> > >>  # echo "1M" > memory.reclaim
> > >>
> > >>  # cat memory.reclaim
> > >>    scanned 76
> > >>    reclaimed 32
> > >
> > > Why cannot you use memory.stat? Sure it would require to iterate over
> > > the reclaimed hierarchy but the information about scanned and reclaimed
> > > pages as well as other potentially useful stats is there.
> >
> > Agree that "memory.stat" is more suitable for scanned/reclaimed stats as
> > it already is exposing bunch of other stats.
> >
> > The discussion on this patch however seems to have split into two parts:
> >
> > 1. Is it a good idea to expose nr_scanned/nr_reclaimed to users-space
> > and if yes how ?
> >
> > IMHO, I think it will be better to expose this info via 'memory.stat' as it
> > can be useful insight into the reclaim efficiency  and vmpressure.
>
> We already do that with some more metrics
> pgrefill 9801926
> pgscan 27329762
> pgsteal 22715987
> pgactivate 250691267
> pgdeactivate 9521843
> pglazyfree 0
> pglazyfreed 0
>
> > 2. Will it be useful to provide feedback to userspace when it writes to
> > 'memory.reclaim' on how much memory has been reclaimed ?
> >
> > IMHO, this will be a useful feeback to userspace to better adjust future
> > proactive reclaim requests via 'memory.reclaim'
>
> How precise this information should be? A very simplistic approach would
> be
> cp memory.stat stats.before
> echo $WHATEVER > memory.reclaim
> cp memory.stat stats.after
>
> This will obviously contain also activity outside of the explicitly
> triggered reclaim (racing background/direct reclaim) but isn't that what
> actually matters? Are there any cases where the only metric you care
> about is the triggered reclaim in isolation?

I think it might be useful to have a dedicated entry in memory.stat
for proactively reclaimed memory. A case where this would be useful is
tuning and evaluating userspace proactive reclaimers. For instance, if
a userspace agent is asking the kernel to reclaim 100M, but it could
only reclaim 10M, then most probably the proactive reclaimer is not
using a good methodology to figure out how much memory needs to be
reclaimed.

IMO this is more useful, and a superset of just reading the last
reclaim request status through memory.reclaim (read stat before and
after).

Additionally, things get complicated if the userspace agent is
multi-threaded. For a cumulative entry in memory.stat, it shouldn't
matter much, as we are looking at the total for all threads
cumulatively anyway. If we are only reading the memory reclaimed in
the last request (through memory.reclaim), then we can easily get the
results of a request that happened on a different thread.

>
> --
> Michal Hocko
> SUSE Labs


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-23 22:50       ` Yosry Ahmed
@ 2022-05-24 11:45         ` Johannes Weiner
  2022-05-24 19:01           ` Yosry Ahmed
  0 siblings, 1 reply; 16+ messages in thread
From: Johannes Weiner @ 2022-05-24 11:45 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Michal Hocko, Vaibhav Jain, Cgroups, linux-doc,
	Linux Kernel Mailing List, Linux-MM, Tejun Heo, Zefan Li,
	Jonathan Corbet, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt

On Mon, May 23, 2022 at 03:50:34PM -0700, Yosry Ahmed wrote:
> I think it might be useful to have a dedicated entry in memory.stat
> for proactively reclaimed memory. A case where this would be useful is
> tuning and evaluating userspace proactive reclaimers. For instance, if
> a userspace agent is asking the kernel to reclaim 100M, but it could
> only reclaim 10M, then most probably the proactive reclaimer is not
> using a good methodology to figure out how much memory do we need to
> reclaim.
> 
> IMO this is more useful, and a superset of just reading the last
> reclaim request status through memory.reclaim (read stat before and
> after).

+1


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-24 11:45         ` Johannes Weiner
@ 2022-05-24 19:01           ` Yosry Ahmed
  2022-05-25  8:59             ` Michal Hocko
  0 siblings, 1 reply; 16+ messages in thread
From: Yosry Ahmed @ 2022-05-24 19:01 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, Vaibhav Jain, Cgroups, linux-doc,
	Linux Kernel Mailing List, Linux-MM, Tejun Heo, Zefan Li,
	Jonathan Corbet, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, David Rientjes

On Tue, May 24, 2022 at 4:45 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Mon, May 23, 2022 at 03:50:34PM -0700, Yosry Ahmed wrote:
> > I think it might be useful to have a dedicated entry in memory.stat
> > for proactively reclaimed memory. A case where this would be useful is
> > tuning and evaluating userspace proactive reclaimers. For instance, if
> > a userspace agent is asking the kernel to reclaim 100M, but it could
> > only reclaim 10M, then most probably the proactive reclaimer is not
> > using a good methodology to figure out how much memory do we need to
> > reclaim.
> >
> > IMO this is more useful, and a superset of just reading the last
> > reclaim request status through memory.reclaim (read stat before and
> > after).
>
> +1

It might also be useful to have a breakdown of this by memory type:
file, anon, or shrinkers.

It would also fit in nicely with a potential type=file/anon/shrinker
argument to memory.reclaim. Thoughts on this?


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-24 19:01           ` Yosry Ahmed
@ 2022-05-25  8:59             ` Michal Hocko
  2022-05-25 20:31               ` Yosry Ahmed
  0 siblings, 1 reply; 16+ messages in thread
From: Michal Hocko @ 2022-05-25  8:59 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Johannes Weiner, Vaibhav Jain, Cgroups, linux-doc,
	Linux Kernel Mailing List, Linux-MM, Tejun Heo, Zefan Li,
	Jonathan Corbet, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, David Rientjes

On Tue 24-05-22 12:01:01, Yosry Ahmed wrote:
> On Tue, May 24, 2022 at 4:45 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Mon, May 23, 2022 at 03:50:34PM -0700, Yosry Ahmed wrote:
> > > I think it might be useful to have a dedicated entry in memory.stat
> > > for proactively reclaimed memory. A case where this would be useful is
> > > tuning and evaluating userspace proactive reclaimers. For instance, if
> > > a userspace agent is asking the kernel to reclaim 100M, but it could
> > > only reclaim 10M, then most probably the proactive reclaimer is not
> > > using a good methodology to figure out how much memory do we need to
> > > reclaim.
> > >
> > > IMO this is more useful, and a superset of just reading the last
> > > reclaim request status through memory.reclaim (read stat before and
> > > after).
> >
> > +1
> 
> It might also be useful to have a breakdown of this by memory type:
> file, anon, or shrinkers.
> 
> It would also fit in nicely with a potential type=file/anon/shrinker
> argument to memory.reclaim. Thoughts on this?

Can we start simple and see what real usecases actually will need? 
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
  2022-05-25  8:59             ` Michal Hocko
@ 2022-05-25 20:31               ` Yosry Ahmed
  0 siblings, 0 replies; 16+ messages in thread
From: Yosry Ahmed @ 2022-05-25 20:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, Vaibhav Jain, Cgroups, linux-doc,
	Linux Kernel Mailing List, Linux-MM, Tejun Heo, Zefan Li,
	Jonathan Corbet, Vladimir Davydov, Andrew Morton,
	Aneesh Kumar K . V, Shakeel Butt, David Rientjes

On Wed, May 25, 2022 at 1:59 AM Michal Hocko <mhocko@suse.com> wrote:
>
> On Tue 24-05-22 12:01:01, Yosry Ahmed wrote:
> > On Tue, May 24, 2022 at 4:45 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > >
> > > On Mon, May 23, 2022 at 03:50:34PM -0700, Yosry Ahmed wrote:
> > > > I think it might be useful to have a dedicated entry in memory.stat
> > > > for proactively reclaimed memory. A case where this would be useful is
> > > > tuning and evaluating userspace proactive reclaimers. For instance, if
> > > > a userspace agent is asking the kernel to reclaim 100M, but it could
> > > > only reclaim 10M, then most probably the proactive reclaimer is not
> > > > using a good methodology to figure out how much memory do we need to
> > > > reclaim.
> > > >
> > > > IMO this is more useful, and a superset of just reading the last
> > > > reclaim request status through memory.reclaim (read stat before and
> > > > after).
> > >
> > > +1
> >
> > It might also be useful to have a breakdown of this by memory type:
> > file, anon, or shrinkers.
> >
> > It would also fit in nicely with a potential type=file/anon/shrinker
> > argument to memory.reclaim. Thoughts on this?
>
> Can we start simple and see what real usecases actually will need?

Agreed. Let's start with a single proactively reclaimed memory stat
and then add subcategories if/when needed.

> --
> Michal Hocko
> SUSE Labs
