* [PATCH v2] memcg: add per-memcg vmalloc stat
@ 2021-12-22  5:24 Shakeel Butt
  2021-12-22  5:50 ` Muchun Song
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Shakeel Butt @ 2021-12-22  5:24 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
  Cc: Andrew Morton, linux-mm, linux-kernel, Shakeel Butt

The kvmalloc* allocation functions can fall back to vmalloc allocations,
and this happens more often on long running machines. In addition, the
kernel does have __GFP_ACCOUNT kvmalloc* calls. So, on long running
machines, memory.stat does not give a complete picture of which types of
memory are charged to the memcg. Add a per-memcg vmalloc stat to fill
that gap.

Signed-off-by: Shakeel Butt <shakeelb@google.com>

---
Changelog since v1:
- page_memcg() within rcu lock as suggested by Muchun.

 Documentation/admin-guide/cgroup-v2.rst |  3 +++
 include/linux/memcontrol.h              | 21 +++++++++++++++++++++
 mm/memcontrol.c                         |  1 +
 mm/vmalloc.c                            |  5 +++++
 4 files changed, 30 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 82c8dc91b2be..5aa368d165da 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1314,6 +1314,9 @@ PAGE_SIZE multiple when read back.
 	  sock (npn)
 		Amount of memory used in network transmission buffers
 
+	  vmalloc (npn)
+		Amount of memory used for vmap backed memory.
+
 	  shmem
 		Amount of cached filesystem data that is swap-backed,
 		such as tmpfs, shm segments, shared anonymous mmap()s
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d76dad703580..b72d75141e12 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -33,6 +33,7 @@ enum memcg_stat_item {
 	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
 	MEMCG_SOCK,
 	MEMCG_PERCPU_B,
+	MEMCG_VMALLOC,
 	MEMCG_NR_STAT,
 };
 
@@ -944,6 +945,21 @@ static inline void mod_memcg_state(struct mem_cgroup *memcg,
 	local_irq_restore(flags);
 }
 
+static inline void mod_memcg_page_state(struct page *page,
+					int idx, int val)
+{
+	struct mem_cgroup *memcg;
+
+	if (mem_cgroup_disabled())
+		return;
+
+	rcu_read_lock();
+	memcg = page_memcg(page);
+	if (memcg)
+		mod_memcg_state(memcg, idx, val);
+	rcu_read_unlock();
+}
+
 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	return READ_ONCE(memcg->vmstats.state[idx]);
@@ -1399,6 +1415,11 @@ static inline void mod_memcg_state(struct mem_cgroup *memcg,
 {
 }
 
+static inline void mod_memcg_page_state(struct page *page,
+					int idx, int val)
+{
+}
+
 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx)
 {
 	return 0;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7ae77608847e..7027a3cc416f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1375,6 +1375,7 @@ static const struct memory_stat memory_stats[] = {
 	{ "pagetables",			NR_PAGETABLE			},
 	{ "percpu",			MEMCG_PERCPU_B			},
 	{ "sock",			MEMCG_SOCK			},
+	{ "vmalloc",			MEMCG_VMALLOC			},
 	{ "shmem",			NR_SHMEM			},
 	{ "file_mapped",		NR_FILE_MAPPED			},
 	{ "file_dirty",			NR_FILE_DIRTY			},
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index eb6e527a6b77..af67ce4fd402 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -39,6 +39,7 @@
 #include <linux/uaccess.h>
 #include <linux/hugetlb.h>
 #include <linux/sched/mm.h>
+#include <linux/memcontrol.h>
 #include <asm/tlbflush.h>
 #include <asm/shmparam.h>
 
@@ -2626,6 +2627,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
 		unsigned int page_order = vm_area_page_order(area);
 		int i;
 
+		mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC,
+				     -(int)area->nr_pages);
+
 		for (i = 0; i < area->nr_pages; i += 1U << page_order) {
 			struct page *page = area->pages[i];
 
@@ -2964,6 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
+	mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC, area->nr_pages);
 
 	/*
 	 * If not enough pages were obtained to accomplish an
-- 
2.34.1.307.g9b7440fafd-goog


* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-22  5:24 [PATCH v2] memcg: add per-memcg vmalloc stat Shakeel Butt
@ 2021-12-22  5:50 ` Muchun Song
  2021-12-22  6:06   ` Shakeel Butt
  2021-12-23  2:02 ` Roman Gushchin
  2021-12-27 10:48 ` Michal Hocko
  2 siblings, 1 reply; 10+ messages in thread
From: Muchun Song @ 2021-12-22  5:50 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Andrew Morton,
	Linux Memory Management List, LKML

On Wed, Dec 22, 2021 at 1:25 PM Shakeel Butt <shakeelb@google.com> wrote:
>
[...]
> @@ -2626,6 +2627,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
>                 unsigned int page_order = vm_area_page_order(area);
>                 int i;
>
> +               mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC,
> +                                    -(int)area->nr_pages);

The cast can go away since the type of the 3rd parameter of
mod_memcg_page_state() is int, not long. I suggest removing it. Anyway,

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-22  5:50 ` Muchun Song
@ 2021-12-22  6:06   ` Shakeel Butt
  0 siblings, 0 replies; 10+ messages in thread
From: Shakeel Butt @ 2021-12-22  6:06 UTC (permalink / raw)
  To: Muchun Song
  Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Andrew Morton,
	Linux Memory Management List, LKML

On Tue, Dec 21, 2021 at 9:50 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Wed, Dec 22, 2021 at 1:25 PM Shakeel Butt <shakeelb@google.com> wrote:
> >
> [...]
> > @@ -2626,6 +2627,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
> >                 unsigned int page_order = vm_area_page_order(area);
> >                 int i;
> >
> > +               mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC,
> > +                                    -(int)area->nr_pages);
>
> The cast can go away since the type of the 3rd parameter of
> mod_memcg_page_state() is int, not long. I suggest removing it. Anyway,

Andrew, please remove the cast of the third param when you add this
patch to the mm tree.

>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>

Thanks.

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-22  5:24 [PATCH v2] memcg: add per-memcg vmalloc stat Shakeel Butt
  2021-12-22  5:50 ` Muchun Song
@ 2021-12-23  2:02 ` Roman Gushchin
  2021-12-23  2:32   ` Matthew Wilcox
  2021-12-23 17:44   ` Shakeel Butt
  2021-12-27 10:48 ` Michal Hocko
  2 siblings, 2 replies; 10+ messages in thread
From: Roman Gushchin @ 2021-12-23  2:02 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Michal Hocko, Muchun Song, Andrew Morton,
	linux-mm, linux-kernel

On Tue, Dec 21, 2021 at 09:24:57PM -0800, Shakeel Butt wrote:
> The kvmalloc* allocation functions can fall back to vmalloc allocations,
> and this happens more often on long running machines. In addition, the
> kernel does have __GFP_ACCOUNT kvmalloc* calls. So, on long running
> machines, memory.stat does not give a complete picture of which types of
> memory are charged to the memcg. Add a per-memcg vmalloc stat to fill
> that gap.
> 
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> 
> ---
> Changelog since v1:
> - page_memcg() within rcu lock as suggested by Muchun.
> 
>  Documentation/admin-guide/cgroup-v2.rst |  3 +++
>  include/linux/memcontrol.h              | 21 +++++++++++++++++++++
>  mm/memcontrol.c                         |  1 +
>  mm/vmalloc.c                            |  5 +++++
>  4 files changed, 30 insertions(+)
> 
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 82c8dc91b2be..5aa368d165da 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1314,6 +1314,9 @@ PAGE_SIZE multiple when read back.
>  	  sock (npn)
>  		Amount of memory used in network transmission buffers
>  
> +	  vmalloc (npn)
> +		Amount of memory used for vmap backed memory.
> +

It's a bit sad that this counter will partially intersect with others
(e.g. percpu and stack), but I don't see how it can be easily fixed.

But I agree that the counter is useful and worth adding.

Acked-by: Roman Gushchin <guro@fb.com>

Thanks!

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-23  2:02 ` Roman Gushchin
@ 2021-12-23  2:32   ` Matthew Wilcox
  2021-12-23 17:44   ` Shakeel Butt
  1 sibling, 0 replies; 10+ messages in thread
From: Matthew Wilcox @ 2021-12-23  2:32 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Shakeel Butt, Johannes Weiner, Michal Hocko, Muchun Song,
	Andrew Morton, linux-mm, linux-kernel

On Wed, Dec 22, 2021 at 06:02:55PM -0800, Roman Gushchin wrote:
> It's a bit sad that this counter will partially intersect with others
> (e.g. percpu and stack), but I don't see how it can be easily fixed.

If it's worth fixing, we could add a VM_NO_ACCT flag?

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-23  2:02 ` Roman Gushchin
  2021-12-23  2:32   ` Matthew Wilcox
@ 2021-12-23 17:44   ` Shakeel Butt
  2021-12-23 20:27     ` Roman Gushchin
  1 sibling, 1 reply; 10+ messages in thread
From: Shakeel Butt @ 2021-12-23 17:44 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Johannes Weiner, Michal Hocko, Muchun Song, Andrew Morton,
	Linux MM, LKML

On Wed, Dec 22, 2021 at 6:03 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Tue, Dec 21, 2021 at 09:24:57PM -0800, Shakeel Butt wrote:
> > The kvmalloc* allocation functions can fall back to vmalloc allocations,
> > and this happens more often on long running machines. In addition, the
> > kernel does have __GFP_ACCOUNT kvmalloc* calls. So, on long running
> > machines, memory.stat does not give a complete picture of which types of
> > memory are charged to the memcg. Add a per-memcg vmalloc stat to fill
> > that gap.
> >
> > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> >
> > ---
> > Changelog since v1:
> > - page_memcg() within rcu lock as suggested by Muchun.
> >
> >  Documentation/admin-guide/cgroup-v2.rst |  3 +++
> >  include/linux/memcontrol.h              | 21 +++++++++++++++++++++
> >  mm/memcontrol.c                         |  1 +
> >  mm/vmalloc.c                            |  5 +++++
> >  4 files changed, 30 insertions(+)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index 82c8dc91b2be..5aa368d165da 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1314,6 +1314,9 @@ PAGE_SIZE multiple when read back.
> >         sock (npn)
> >               Amount of memory used in network transmission buffers
> >
> > +       vmalloc (npn)
> > +             Amount of memory used for vmap backed memory.
> > +
>
> It's a bit sad that this counter will partially intersect with others
> (e.g. percpu and stack), but I don't see how it can be easily fixed.

I checked those again. For vmap-based stacks we do vmalloc() without
__GFP_ACCOUNT and charge the stack pages afterwards through the
memcg_kmem_charge_page() interface.

I think we do the same for percpu as well, i.e. we do not use
__GFP_ACCOUNT for the underlying memory but later use the objcg
infrastructure to charge at a finer grain.

So, I think at least percpu and stack should not intersect with the
per-memcg vmalloc stat.

>
> But I agree that the counter is useful and worth adding.
>
> Acked-by: Roman Gushchin <guro@fb.com>

Thanks.

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-23 17:44   ` Shakeel Butt
@ 2021-12-23 20:27     ` Roman Gushchin
  0 siblings, 0 replies; 10+ messages in thread
From: Roman Gushchin @ 2021-12-23 20:27 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Michal Hocko, Muchun Song, Andrew Morton,
	Linux MM, LKML

On Thu, Dec 23, 2021 at 09:44:26AM -0800, Shakeel Butt wrote:
> On Wed, Dec 22, 2021 at 6:03 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > On Tue, Dec 21, 2021 at 09:24:57PM -0800, Shakeel Butt wrote:
> > > The kvmalloc* allocation functions can fall back to vmalloc allocations,
> > > and this happens more often on long running machines. In addition, the
> > > kernel does have __GFP_ACCOUNT kvmalloc* calls. So, on long running
> > > machines, memory.stat does not give a complete picture of which types of
> > > memory are charged to the memcg. Add a per-memcg vmalloc stat to fill
> > > that gap.
> > >
> > > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > >
> > > ---
> > > Changelog since v1:
> > > - page_memcg() within rcu lock as suggested by Muchun.
> > >
> > >  Documentation/admin-guide/cgroup-v2.rst |  3 +++
> > >  include/linux/memcontrol.h              | 21 +++++++++++++++++++++
> > >  mm/memcontrol.c                         |  1 +
> > >  mm/vmalloc.c                            |  5 +++++
> > >  4 files changed, 30 insertions(+)
> > >
> > > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > > index 82c8dc91b2be..5aa368d165da 100644
> > > --- a/Documentation/admin-guide/cgroup-v2.rst
> > > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > > @@ -1314,6 +1314,9 @@ PAGE_SIZE multiple when read back.
> > >         sock (npn)
> > >               Amount of memory used in network transmission buffers
> > >
> > > +       vmalloc (npn)
> > > +             Amount of memory used for vmap backed memory.
> > > +
> >
> > It's a bit sad that this counter will partially intersect with others
> > (e.g. percpu and stack), but I don't see how it can be easily fixed.
> 
> I checked those again. For vmap-based stacks we do vmalloc() without
> __GFP_ACCOUNT and charge the stack pages afterwards through the
> memcg_kmem_charge_page() interface.
> 
> I think we do the same for percpu as well, i.e. we do not use
> __GFP_ACCOUNT for the underlying memory but later use the objcg
> infrastructure to charge at a finer grain.
> 
> So, I think at least percpu and stack should not intersect with the
> per-memcg vmalloc stat.

Ah, ok then, thanks for checking!

In general, it seems that if an allocation is backed by multiple layers
of mm code (e.g. percpu on top of vmalloc), we choose one layer to do
both memcg statistics and accounting. This makes total sense.

Thanks!

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-22  5:24 [PATCH v2] memcg: add per-memcg vmalloc stat Shakeel Butt
  2021-12-22  5:50 ` Muchun Song
  2021-12-23  2:02 ` Roman Gushchin
@ 2021-12-27 10:48 ` Michal Hocko
  2021-12-30 19:06   ` Shakeel Butt
  2 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2021-12-27 10:48 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Roman Gushchin, Muchun Song, Andrew Morton,
	linux-mm, linux-kernel

On Tue 21-12-21 21:24:57, Shakeel Butt wrote:
> The kvmalloc* allocation functions can fall back to vmalloc allocations,
> and this happens more often on long running machines. In addition, the
> kernel does have __GFP_ACCOUNT kvmalloc* calls. So, on long running
> machines, memory.stat does not give a complete picture of which types of
> memory are charged to the memcg. Add a per-memcg vmalloc stat to fill
> that gap.
> 
> Signed-off-by: Shakeel Butt <shakeelb@google.com>

The counter is useful IMHO. I just have one implementation specific
question.
[...]
> @@ -2626,6 +2627,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
>  		unsigned int page_order = vm_area_page_order(area);
>  		int i;
>  
> +		mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC,
> +				     -(int)area->nr_pages);
> +
>  		for (i = 0; i < area->nr_pages; i += 1U << page_order) {
>  			struct page *page = area->pages[i];
>  
> @@ -2964,6 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  		page_order, nr_small_pages, area->pages);
>  
>  	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
> +	mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC, area->nr_pages);
>  
>  	/*
>  	 * If not enough pages were obtained to accomplish an

Is it safe to assume that the whole area is always charged to the same
memcg? I am not really deeply familiar with vmalloc internals but is it
possible that an area could get resized/partially reused with a
different charging context?

A clarification comment would be really handy.
-- 
Michal Hocko
SUSE Labs

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-27 10:48 ` Michal Hocko
@ 2021-12-30 19:06   ` Shakeel Butt
  2022-01-03 11:58     ` Michal Hocko
  0 siblings, 1 reply; 10+ messages in thread
From: Shakeel Butt @ 2021-12-30 19:06 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, Roman Gushchin, Muchun Song, Andrew Morton,
	Linux MM, LKML

Hi Michal,

On Mon, Dec 27, 2021 at 2:48 AM Michal Hocko <mhocko@suse.com> wrote:
>
[...]
> >       atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
> > +     mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC, area->nr_pages);
> >
> >       /*
> >        * If not enough pages were obtained to accomplish an
>
> Is it safe to assume that the whole area is always charged to the same
> memcg? I am not really deeply familiar with vmalloc internals but is it
> possible that an area could get resized/partially reused with a
> different charging context?

From what I understand, vmalloc areas are not resized or partially
reused at the moment. There is some ongoing discussion on caching them,
but caching would also require updating the accounting.

Regarding the whole area being charged to the same memcg: the only way
it can get charged to different memcgs is if the process doing the
allocations is migrated to a different memcg while the allocation is in
progress. We could resolve this by traversing the pages in the
area->pages array (and using lruvec-based stats instead).

I did contemplate making this a lruvec stat but decided to start simple;
if we ever need per-node stats then we can easily move to lruvec-based
stats. Let me know what you think.

thanks,
Shakeel

* Re: [PATCH v2] memcg: add per-memcg vmalloc stat
  2021-12-30 19:06   ` Shakeel Butt
@ 2022-01-03 11:58     ` Michal Hocko
  0 siblings, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2022-01-03 11:58 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Roman Gushchin, Muchun Song, Andrew Morton,
	Linux MM, LKML

On Thu 30-12-21 11:06:14, Shakeel Butt wrote:
> Hi Michal,
> 
> On Mon, Dec 27, 2021 at 2:48 AM Michal Hocko <mhocko@suse.com> wrote:
> >
> [...]
> > >       atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
> > > +     mod_memcg_page_state(area->pages[0], MEMCG_VMALLOC, area->nr_pages);
> > >
> > >       /*
> > >        * If not enough pages were obtained to accomplish an
> >
> > Is it safe to assume that the whole area is always charged to the same
> > memcg? I am not really deeply familiar with vmalloc internals but is it
> > possible that an area could get resized/partially reused with a
> > different charging context?
> 
> From what I understand, vmalloc areas are not resized or partially
> reused at the moment. There is some ongoing discussion on caching them,
> but caching would also require updating the accounting.

OK.

> Regarding the whole area being charged to the same memcg: the only way
> it can get charged to different memcgs is if the process doing the
> allocations is migrated to a different memcg while the allocation is in
> progress. We could resolve this by traversing the pages in the
> area->pages array (and using lruvec-based stats instead).

I haven't even thought of task migration. I expect that this is not a
very likely scenario, but it would lead to weird numbers and I guess we
would like to prevent that. A loop over all pages in the area,
accounting each of them separately, should be good enough to cover that
as well as the existing problem that has already been observed by
syzbot.

> I did contemplate making this a lruvec stat but decided to start
> simple; if we ever need per-node stats then we can easily move to
> lruvec-based stats.

I am not really sure here. For now I would go with page-by-page stats
gathering.

Thanks!
-- 
Michal Hocko
SUSE Labs
