* memcg causes crashes in list_lru_add
@ 2019-04-29  8:16 Jiri Slaby
  2019-04-29  9:25 ` Jiri Slaby
  0 siblings, 1 reply; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29  8:16 UTC
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list

Hi,

with a new enough systemd, one of our systems crashes 100% of the time
during boot. All kernels I tried are affected: 5.1-rc7, 5.0.10 stable, 4.12.14.

The 5.1-rc7 crash:
> [   12.022637] systemd[1]: Starting Create list of required static device nodes for the current kernel...
> [   12.023353] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> [   12.041502] #PF error: [normal kernel read fault]
> [   12.041502] PGD 0 P4D 0 
> [   12.041502] Oops: 0000 [#1] SMP NOPTI
> [   12.041502] CPU: 0 PID: 208 Comm: (kmod) Not tainted 5.1.0-rc7-1.g04c1966-default #1 openSUSE Tumbleweed (unreleased)
> [   12.041502] Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011  06/30/2006
> [   12.041502] RIP: 0010:list_lru_add+0x94/0x170
> [   12.041502] Code: c6 07 00 66 66 66 90 31 c0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 49 8b 7c 24 20 49 8d 54 24 08 48 85 ff 74 07 e9 46 00 00 00 31 ff <48> 8b 42 08 4c 89 6a 08 49 89 55 00 49 89 45 08 4c 89 28 48 8b 42
> [   12.041502] RSP: 0018:ffffb11b8091be50 EFLAGS: 00010202
> [   12.041502] RAX: 0000000000000001 RBX: ffff930b35705a40 RCX: ffff9309cf21ade0
> [   12.041502] RDX: 0000000000000000 RSI: ffff930ab61bc587 RDI: ffff930a17711000
> [   12.041502] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
> [   12.041502] R10: 0000000000000000 R11: 0000000000000008 R12: ffff9309f5f86640
> [   12.041502] R13: ffff930ab5705a40 R14: 0000000000000001 R15: ffff930a171dc4e0
> [   12.041502] FS:  00007f42d6ea5940(0000) GS:ffff930ab7800000(0000) knlGS:0000000000000000
> [   12.041502] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [   12.041502] CR2: 0000000000000008 CR3: 0000000057dec000 CR4: 00000000000006f0
> [   12.041502] Call Trace:
> [   12.041502]  d_lru_add+0x44/0x50
> [   12.041502]  dput.part.34+0xfc/0x110
> [   12.041502]  __fput+0x108/0x230
> [   12.041502]  task_work_run+0x9f/0xc0
> [   12.041502]  exit_to_usermode_loop+0xf5/0x100
> [   12.041502]  do_syscall_64+0xe2/0x110
> [   12.041502]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> [   12.041502] RIP: 0033:0x7f42d77567b7
> [   12.041502] Code: ff ff ff ff c3 48 8b 15 df 96 0c 00 f7 d8 64 89 02 b8 ff ff ff ff eb c0 66 2e 0f 1f 84 00 00 00 00 00 90 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 b1 96 0c 00 f7 d8 64 89 02 b8
> [   12.041502] RSP: 002b:00007fffeb85c2c8 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
> [   12.041502] RAX: 0000000000000000 RBX: 000055dfb6222fd0 RCX: 00007f42d77567b7
> [   12.041502] RDX: 00007f42d78217c0 RSI: 000055dfb6223053 RDI: 0000000000000003
> [   12.041502] RBP: 00007f42d78223c0 R08: 000055dfb62230b0 R09: 00007fffeb85c0f5
> [   12.041502] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
> [   12.041502] R13: 000055dfb6225080 R14: 00007fffeb85c3aa R15: 0000000000000003
> [   12.041502] Modules linked in:
> [   12.041502] CR2: 0000000000000008
> [   12.491424] ---[ end trace 574d0c998e97d864 ]---

Enabling KASAN reveals a bit more:
> Allocated by task 1:
>  __kasan_kmalloc.constprop.13+0xc1/0xd0
>  __list_lru_init+0x3cd/0x5e0

This is kvmalloc in memcg_init_list_lru_node:
        memcg_lrus = kvmalloc(sizeof(*memcg_lrus) +
                              size * sizeof(void *), GFP_KERNEL);

>  sget_userns+0x65c/0xba0
>  kernfs_mount_ns+0x120/0x7f0
>  cgroup_do_mount+0x93/0x2e0
>  cgroup1_mount+0x335/0x925
>  cgroup_mount+0x14a/0x7b0
>  mount_fs+0xce/0x304
>  vfs_kern_mount.part.33+0x58/0x370
>  do_mount+0x390/0x2540
>  ksys_mount+0xb6/0xd0
...
>
> Freed by task 1:
>  __kasan_slab_free+0x125/0x170
>  kfree+0x90/0x1a0
>  acpi_ds_terminate_control_method+0x5a2/0x5c9

This is a different object (the address overflowed into acpi-allocated
memory), so this part is irrelevant.

> The buggy address belongs to the object at ffff8880d69a2e68
>  which belongs to the cache kmalloc-16 of size 16
> The buggy address is located 8 bytes to the right of
>  16-byte region [ffff8880d69a2e68, ffff8880d69a2e78)

Hmm, a 16-byte slab. 'memcg_lrus' allocated above is 'struct
list_lru_memcg' defined as:
        struct rcu_head         rcu;
        /* array of per cgroup lists, indexed by memcg_cache_id */
        struct list_lru_one     *lru[0];

sizeof(struct rcu_head) is 16. So it must mean that 'size' used in the
'kvmalloc' above in 'memcg_init_list_lru_node' is 0. That cannot be correct.
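
A quick sanity check of the arithmetic (a sketch; sizes assume x86_64):

	/* sizeof(struct rcu_head) == 16: two pointers on x86_64 */
	alloc = sizeof(struct list_lru_memcg)	/* 16 */
	      + size * sizeof(void *);		/* size == 0 => +0 */
	/*
	 * => a 16-byte allocation served from kmalloc-16, exactly as
	 * KASAN reports; any memcg_lrus->lru[i] access then reads at
	 * offset 16 + i * 8, i.e. past the end of the object.
	 */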

This confirms the theory:
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -366,8 +366,14 @@ static int memcg_init_list_lru_node(stru
        struct list_lru_memcg *memcg_lrus;
        int size = memcg_nr_cache_ids;

+       if (!size) {
+               pr_err("%s: XXXXXXXXX size is zero yet!\n", __func__);
+               size = 256;
+       }
+
        memcg_lrus = kvmalloc(sizeof(*memcg_lrus) +
                              size * sizeof(void *), GFP_KERNEL);
+       printk(KERN_DEBUG "%s:    a=%px\n", __func__, memcg_lrus);
        if (!memcg_lrus)
                return -ENOMEM;


and even makes the beast boot. memcg makes very wrong assumptions about
'memcg_nr_cache_ids': the code does not expect the value to change after
initialization, yet it does change.

This is a dump_stack from 'memcg_alloc_cache_id', which changes
'memcg_nr_cache_ids' later during boot:
CPU: 1 PID: 1 Comm: systemd Tainted: G            E
5.0.10-0.ge8fc1e9-default #1 openSUSE Tumbleweed (unreleased)
Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011  06/30/2006
Call Trace:
 dump_stack+0x9a/0xf0
 mem_cgroup_css_alloc+0xb16/0x16a0
 cgroup_apply_control_enable+0x2d7/0xb40
 cgroup_mkdir+0x594/0xc50
 kernfs_iop_mkdir+0x21a/0x2e0
 vfs_mkdir+0x37a/0x5d0
 do_mkdirat+0x1b1/0x200
 do_syscall_64+0xa5/0x290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
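
For reference, memcg_alloc_cache_id does roughly the following
(paraphrased from mm/memcontrol.c of this era; a sketch, not the exact
source):

	static int memcg_alloc_cache_id(void)
	{
		int id, size, err;

		id = ida_simple_get(&memcg_cache_ida, 0,
				    MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
		...
		if (id < memcg_nr_cache_ids)
			return id;

		/* no space for the new id: grow all the arrays */
		size = 2 * (id + 1);
		...
		err = memcg_update_all_caches(size);
		if (!err)
			err = memcg_update_all_list_lrus(size);
		if (!err)
			memcg_nr_cache_ids = size;
		...
	}

Any list_lru that memcg_update_all_list_lrus fails to resize here is left
behind with a stale, too-small array.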




I am not sure why this is machine-dependent. I cannot reproduce on any
other box.

Any idea how to fix this mess?

The report is in our bugzilla:
https://bugzilla.suse.com/show_bug.cgi?id=1133616

thanks,
-- 
js
suse labs



* Re: memcg causes crashes in list_lru_add
  2019-04-29  8:16 memcg causes crashes in list_lru_add Jiri Slaby
@ 2019-04-29  9:25 ` Jiri Slaby
  2019-04-29 10:09   ` Jiri Slaby
  2019-04-29 10:17   ` memcg causes crashes in list_lru_add Michal Hocko
  0 siblings, 2 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29  9:25 UTC
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list

On 29. 04. 19, 10:16, Jiri Slaby wrote:
> Hi,
> 
> with a new enough systemd, one of our systems crashes 100% of the time
> during boot. All kernels I tried are affected: 5.1-rc7, 5.0.10 stable, 4.12.14.
> 
> The 5.1-rc7 crash:
>> [   12.022637] systemd[1]: Starting Create list of required static device nodes for the current kernel...
>> [   12.023353] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>> [   12.041502] #PF error: [normal kernel read fault]
>> [   12.041502] PGD 0 P4D 0 
>> [   12.041502] Oops: 0000 [#1] SMP NOPTI
>> [   12.041502] CPU: 0 PID: 208 Comm: (kmod) Not tainted 5.1.0-rc7-1.g04c1966-default #1 openSUSE Tumbleweed (unreleased)
>> [   12.041502] Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011  06/30/2006
>> [   12.041502] RIP: 0010:list_lru_add+0x94/0x170
>> [   12.041502] Code: c6 07 00 66 66 66 90 31 c0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 49 8b 7c 24 20 49 8d 54 24 08 48 85 ff 74 07 e9 46 00 00 00 31 ff <48> 8b 42 08 4c 89 6a 08 49 89 55 00 49 89 45 08 4c 89 28 48 8b 42
>> [   12.041502] RSP: 0018:ffffb11b8091be50 EFLAGS: 00010202
>> [   12.041502] RAX: 0000000000000001 RBX: ffff930b35705a40 RCX: ffff9309cf21ade0
>> [   12.041502] RDX: 0000000000000000 RSI: ffff930ab61bc587 RDI: ffff930a17711000
>> [   12.041502] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
>> [   12.041502] R10: 0000000000000000 R11: 0000000000000008 R12: ffff9309f5f86640
>> [   12.041502] R13: ffff930ab5705a40 R14: 0000000000000001 R15: ffff930a171dc4e0
>> [   12.041502] FS:  00007f42d6ea5940(0000) GS:ffff930ab7800000(0000) knlGS:0000000000000000
>> [   12.041502] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [   12.041502] CR2: 0000000000000008 CR3: 0000000057dec000 CR4: 00000000000006f0
>> [   12.041502] Call Trace:
>> [   12.041502]  d_lru_add+0x44/0x50

...

> and even makes the beast boot. memcg makes very wrong assumptions about
> 'memcg_nr_cache_ids': the code does not expect the value to change after
> initialization, yet it does change.
...
> I am not sure why this is machine-dependent. I cannot reproduce on any
> other box.
> 
> Any idea how to fix this mess?

memcg_update_all_list_lrus should take care of resizing the array. So
it looks like list_lru_from_memcg_idx returns a stale pointer to
list_lru_from_kmem and then to list_lru_add. Still investigating.
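
For context, the lookup has no bounds check against the current array
size; paraphrased from mm/list_lru.c (a sketch, not the exact source):

	static inline struct list_lru_one *
	list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
	{
		struct list_lru_memcg *memcg_lrus;

		memcg_lrus = rcu_dereference_check(nlru->memcg_lrus,
					lockdep_is_held(&nlru->lock));
		/* if the array was never resized, idx can point past
		 * its end and the read returns garbage */
		if (memcg_lrus && idx >= 0)
			return memcg_lrus->lru[idx];
		return &nlru->lru;
	}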

thanks,
-- 
js
suse labs



* Re: memcg causes crashes in list_lru_add
  2019-04-29  9:25 ` Jiri Slaby
@ 2019-04-29 10:09   ` Jiri Slaby
  2019-04-29 10:40     ` Michal Hocko
  2019-04-29 10:59     ` [PATCH] memcg: make it work on sparse non-0-node systems Jiri Slaby
  2019-04-29 10:17   ` memcg causes crashes in list_lru_add Michal Hocko
  1 sibling, 2 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29 10:09 UTC
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list

On 29. 04. 19, 11:25, Jiri Slaby wrote:
> memcg_update_all_list_lrus should take care of resizing the array.

It should, but:
[    0.058362] Number of physical nodes 2
[    0.058366] Skipping disabled node 0
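
That breaks the resize because memcg_update_list_lru bails out early for
lrus it considers not memcg aware; paraphrased from mm/list_lru.c (a
sketch):

	static int memcg_update_list_lru(struct list_lru *lru,
					 int old_size, int new_size)
	{
		int i;

		/* with node 0 absent, node[0].memcg_lrus stays NULL,
		 * this check fails, and the per-node arrays of this
		 * lru are never grown */
		if (!list_lru_memcg_aware(lru))
			return 0;
		...
	}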

So this should be the real fix:
--- linux-5.0-stable1.orig/mm/list_lru.c
+++ linux-5.0-stable1/mm/list_lru.c
@@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l

 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-       /*
-        * This needs node 0 to be always present, even
-        * in the systems supporting sparse numa ids.
-        */
-       return !!lru->node[0].memcg_lrus;
+       int i;
+
+       for_each_online_node(i)
+               return !!lru->node[i].memcg_lrus;
+
+       return false;
 }

 static inline struct list_lru_one *





Opinions?

thanks,
-- 
js
suse labs



* Re: memcg causes crashes in list_lru_add
  2019-04-29  9:25 ` Jiri Slaby
  2019-04-29 10:09   ` Jiri Slaby
@ 2019-04-29 10:17   ` Michal Hocko
  1 sibling, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2019-04-29 10:17 UTC
  To: Jiri Slaby
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list

On Mon 29-04-19 11:25:48, Jiri Slaby wrote:
> On 29. 04. 19, 10:16, Jiri Slaby wrote:
[...]
> > Any idea how to fix this mess?
> 
> memcg_update_all_list_lrus should take care of resizing the array. So
> it looks like list_lru_from_memcg_idx returns a stale pointer to
> list_lru_from_kmem and then to list_lru_add. Still investigating.

I am traveling and at a conference this week. Please open a bug and, if
this affects the upstream kernel, report it upstream as well. Cc linux-mm
and the memcg maintainers. This doesn't ring a bell immediately; I do not
remember any large changes recently.
-- 
Michal Hocko
SUSE Labs



* Re: memcg causes crashes in list_lru_add
  2019-04-29 10:09   ` Jiri Slaby
@ 2019-04-29 10:40     ` Michal Hocko
  2019-04-29 10:43       ` Michal Hocko
  2019-04-29 10:59     ` [PATCH] memcg: make it work on sparse non-0-node systems Jiri Slaby
  1 sibling, 1 reply; 23+ messages in thread
From: Michal Hocko @ 2019-04-29 10:40 UTC
  To: Jiri Slaby
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list

On Mon 29-04-19 12:09:53, Jiri Slaby wrote:
> On 29. 04. 19, 11:25, Jiri Slaby wrote:
> > memcg_update_all_list_lrus should take care of resizing the array.
> 
> It should, but:
> [    0.058362] Number of physical nodes 2
> [    0.058366] Skipping disabled node 0
> 
> So this should be the real fix:
> --- linux-5.0-stable1.orig/mm/list_lru.c
> +++ linux-5.0-stable1/mm/list_lru.c
> @@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l
> 
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -       /*
> -        * This needs node 0 to be always present, even
> -        * in the systems supporting sparse numa ids.
> -        */
> -       return !!lru->node[0].memcg_lrus;
> +       int i;
> +
> +       for_each_online_node(i)
> +               return !!lru->node[i].memcg_lrus;
> +
> +       return false;
>  }
> 
>  static inline struct list_lru_one *
> 
> 
> 
> 
> 
> Opinions?

Please report upstream. This code has been there for quite some time.
I do not really remember why we have an assumption about node 0 or why
it hasn't been a problem until now.

Thanks!
-- 
Michal Hocko
SUSE Labs



* Re: memcg causes crashes in list_lru_add
  2019-04-29 10:40     ` Michal Hocko
@ 2019-04-29 10:43       ` Michal Hocko
  0 siblings, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2019-04-29 10:43 UTC
  To: Jiri Slaby
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, mm,
	Linux kernel mailing list, Raghavendra K T

On Mon 29-04-19 12:40:51, Michal Hocko wrote:
> On Mon 29-04-19 12:09:53, Jiri Slaby wrote:
> > On 29. 04. 19, 11:25, Jiri Slaby wrote:
> > > memcg_update_all_list_lrus should take care of resizing the array.
> > 
> > It should, but:
> > [    0.058362] Number of physical nodes 2
> > [    0.058366] Skipping disabled node 0
> > 
> > So this should be the real fix:
> > --- linux-5.0-stable1.orig/mm/list_lru.c
> > +++ linux-5.0-stable1/mm/list_lru.c
> > @@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l
> > 
> >  static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >  {
> > -       /*
> > -        * This needs node 0 to be always present, even
> > -        * in the systems supporting sparse numa ids.
> > -        */
> > -       return !!lru->node[0].memcg_lrus;
> > +       int i;
> > +
> > +       for_each_online_node(i)
> > +               return !!lru->node[i].memcg_lrus;
> > +
> > +       return false;
> >  }
> > 
> >  static inline struct list_lru_one *
> > 
> > 
> > 
> > 
> > 
> > Opinions?
> 
> Please report upstream. This code has been there for quite some time.
> I do not really remember why we have an assumption about node 0 or why
> it hasn't been a problem until now.

Hmm, I blame jet lag. I was convinced that this was an internal email.
Sorry about the confusion.

Anyway, time to revisit 145949a1387ba. CCed Raghavendra.
-- 
Michal Hocko
SUSE Labs



* [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 10:09   ` Jiri Slaby
  2019-04-29 10:40     ` Michal Hocko
@ 2019-04-29 10:59     ` Jiri Slaby
  2019-04-29 11:30       ` Michal Hocko
                         ` (2 more replies)
  1 sibling, 3 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29 10:59 UTC
  To: linux-mm
  Cc: linux-kernel, Jiri Slaby, Johannes Weiner, Michal Hocko,
	Vladimir Davydov, cgroups, Raghavendra K T

We have a single node system with node 0 disabled:
  Scanning NUMA topology in Northbridge 24
  Number of physical nodes 2
  Skipping disabled node 0
  Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
  NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when system boots:
  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  #PF error: [normal kernel read fault]
...
  RIP: 0010:list_lru_add+0x94/0x170
...
  Call Trace:
   d_lru_add+0x44/0x50
   dput.part.34+0xfc/0x110
   __fput+0x108/0x230
   task_work_run+0x9f/0xc0
   exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the exact reason is unknown
-- it was not investigated); it cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus and the reads go past the zero-sized array,
causing dereferences of random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, but that is not true on some
systems, as can be seen above.

So fix this by checking the first online node instead of node 0.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <cgroups@vger.kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 mm/list_lru.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..7689910f1a91 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return !!lru->node[first_online_node].memcg_lrus;
 }
 
 static inline struct list_lru_one *
-- 
2.21.0



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 10:59     ` [PATCH] memcg: make it work on sparse non-0-node systems Jiri Slaby
@ 2019-04-29 11:30       ` Michal Hocko
  2019-04-29 11:55         ` Jiri Slaby
  2019-05-09  7:21       ` Jiri Slaby
  2019-05-09 12:25       ` Vladimir Davydov
  2 siblings, 1 reply; 23+ messages in thread
From: Michal Hocko @ 2019-04-29 11:30 UTC
  To: Jiri Slaby
  Cc: linux-mm, linux-kernel, Johannes Weiner, Vladimir Davydov,
	cgroups, Raghavendra K T

On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
[...]
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -	/*
> -	 * This needs node 0 to be always present, even
> -	 * in the systems supporting sparse numa ids.
> -	 */
> -	return !!lru->node[0].memcg_lrus;
> +	return !!lru->node[first_online_node].memcg_lrus;
>  }
>  
>  static inline struct list_lru_one *

How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
path, which iterates over all existing nodes and thus includes node 0?
-- 
Michal Hocko
SUSE Labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 11:30       ` Michal Hocko
@ 2019-04-29 11:55         ` Jiri Slaby
  2019-04-29 12:11           ` Jiri Slaby
  2019-04-29 13:15           ` Michal Hocko
  0 siblings, 2 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29 11:55 UTC
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Johannes Weiner, Vladimir Davydov,
	cgroups, Raghavendra K T

On 29. 04. 19, 13:30, Michal Hocko wrote:
> On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
> [...]
>>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>>  {
>> -	/*
>> -	 * This needs node 0 to be always present, even
>> -	 * in the systems supporting sparse numa ids.
>> -	 */
>> -	return !!lru->node[0].memcg_lrus;
>> +	return !!lru->node[first_online_node].memcg_lrus;
>>  }
>>  
>>  static inline struct list_lru_one *
> 
> How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
> path, which iterates over all existing nodes and thus includes node 0?

If the node is not disabled (i.e. is N_POSSIBLE), lru->node is allocated
for that node too. It will also have memcg_lrus properly set.

If it is disabled, it will never be iterated.
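
For reference, memcg_init_list_lru walks all possible nodes; paraphrased
from mm/list_lru.c (a sketch):

	static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
	{
		int i;

		if (!memcg_aware)
			return 0;

		/* for_each_node() iterates node_possible_map, so every
		 * possible node gets memcg_lrus set up; a disabled node
		 * is never visited here nor anywhere else */
		for_each_node(i) {
			if (memcg_init_list_lru_node(&lru->node[i]))
				goto fail;
		}
		return 0;
	fail:
		...
	}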

Well, I could have used first_node. But I am not sure if the first
POSSIBLE node is also ONLINE during boot?

thanks,
-- 
js
suse labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 11:55         ` Jiri Slaby
@ 2019-04-29 12:11           ` Jiri Slaby
  2019-04-29 13:15           ` Michal Hocko
  1 sibling, 0 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-04-29 12:11 UTC
  To: Michal Hocko
  Cc: linux-mm, linux-kernel, Johannes Weiner, Vladimir Davydov,
	cgroups, Raghavendra K T

On 29. 04. 19, 13:55, Jiri Slaby wrote:
> Well, I could have used first_node. But I am not sure if the first
> POSSIBLE node is also ONLINE during boot?

Thinking about it, it does not matter, actually. The nodes given by both
first_node and first_online_node are allocated and set up, no matter
which one is ONLINE. So first_node should work as well as
first_online_node.

thanks,
-- 
js
suse labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 11:55         ` Jiri Slaby
  2019-04-29 12:11           ` Jiri Slaby
@ 2019-04-29 13:15           ` Michal Hocko
  1 sibling, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2019-04-29 13:15 UTC
  To: Jiri Slaby
  Cc: linux-mm, linux-kernel, Johannes Weiner, Vladimir Davydov,
	cgroups, Raghavendra K T

On Mon 29-04-19 13:55:26, Jiri Slaby wrote:
> On 29. 04. 19, 13:30, Michal Hocko wrote:
> > On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
> > [...]
> >>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >>  {
> >> -	/*
> >> -	 * This needs node 0 to be always present, even
> >> -	 * in the systems supporting sparse numa ids.
> >> -	 */
> >> -	return !!lru->node[0].memcg_lrus;
> >> +	return !!lru->node[first_online_node].memcg_lrus;
> >>  }
> >>  
> >>  static inline struct list_lru_one *
> > 
> > How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
> > path, which iterates over all existing nodes and thus includes node 0?
> 
> If the node is not disabled (i.e. is N_POSSIBLE), lru->node is allocated
> for that node too. It will also have memcg_lrus properly set.
> 
> If it is disabled, it will never be iterated.
> 
> Well, I could have used first_node. But I am not sure if the first
> POSSIBLE node is also ONLINE during boot?

I dunno. I would have to think about this much more. The whole
expectation that node 0 is always around is simply broken. But also
list_lru_memcg_aware looks very suspicious. We should have a flag or
something rather than what we have now.

I am still not sure I have completely understood the problem though.
I will try to get to this during the week, but Vladimir should be a much
better fit to judge here.
-- 
Michal Hocko
SUSE Labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 10:59     ` [PATCH] memcg: make it work on sparse non-0-node systems Jiri Slaby
  2019-04-29 11:30       ` Michal Hocko
@ 2019-05-09  7:21       ` Jiri Slaby
  2019-05-09 12:25       ` Vladimir Davydov
  2 siblings, 0 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-05-09  7:21 UTC
  To: linux-mm
  Cc: linux-kernel, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	cgroups, Raghavendra K T

Vladimir,

as you are perhaps the one most familiar with the code, could you take a
look at this?

On 29. 04. 19, 12:59, Jiri Slaby wrote:
> We have a single node system with node 0 disabled:
>   Scanning NUMA topology in Northbridge 24
>   Number of physical nodes 2
>   Skipping disabled node 0
>   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
>   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> 
> This causes crashes in memcg when system boots:
>   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>   #PF error: [normal kernel read fault]
> ...
>   RIP: 0010:list_lru_add+0x94/0x170
> ...
>   Call Trace:
>    d_lru_add+0x44/0x50
>    dput.part.34+0xfc/0x110
>    __fput+0x108/0x230
>    task_work_run+0x9f/0xc0
>    exit_to_usermode_loop+0xf5/0x100
> 
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> -- it was not investigated); it cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus and the reads go past the zero-sized array,
> causing dereferences of random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, but that is not true on some
> systems, as can be seen above.
> 
> So fix this by checking the first online node instead of node 0.
> 
> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: <cgroups@vger.kernel.org>
> Cc: <linux-mm@kvack.org>
> Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  mm/list_lru.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..7689910f1a91 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>  
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -	/*
> -	 * This needs node 0 to be always present, even
> -	 * in the systems supporting sparse numa ids.
> -	 */
> -	return !!lru->node[0].memcg_lrus;
> +	return !!lru->node[first_online_node].memcg_lrus;
>  }
>  
>  static inline struct list_lru_one *
> 


-- 
js
suse labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-04-29 10:59     ` [PATCH] memcg: make it work on sparse non-0-node systems Jiri Slaby
  2019-04-29 11:30       ` Michal Hocko
  2019-05-09  7:21       ` Jiri Slaby
@ 2019-05-09 12:25       ` Vladimir Davydov
  2019-05-09 16:05         ` Shakeel Butt
  2019-05-16 13:59         ` Michal Hocko
  2 siblings, 2 replies; 23+ messages in thread
From: Vladimir Davydov @ 2019-05-09 12:25 UTC
  To: Jiri Slaby
  Cc: linux-mm, linux-kernel, Johannes Weiner, Michal Hocko, cgroups,
	Raghavendra K T

On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> We have a single node system with node 0 disabled:
>   Scanning NUMA topology in Northbridge 24
>   Number of physical nodes 2
>   Skipping disabled node 0
>   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
>   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> 
> This causes crashes in memcg when system boots:
>   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>   #PF error: [normal kernel read fault]
> ...
>   RIP: 0010:list_lru_add+0x94/0x170
> ...
>   Call Trace:
>    d_lru_add+0x44/0x50
>    dput.part.34+0xfc/0x110
>    __fput+0x108/0x230
>    task_work_run+0x9f/0xc0
>    exit_to_usermode_loop+0xf5/0x100
> 
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> -- it was not investigated); it cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus and the reads go past the zero-sized array,
> causing dereferences of random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, but that is not true on some
> systems, as can be seen above.
> 
> So fix this by checking the first online node instead of node 0.
> 
> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: <cgroups@vger.kernel.org>
> Cc: <linux-mm@kvack.org>
> Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  mm/list_lru.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
> 
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..7689910f1a91 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>  
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -	/*
> -	 * This needs node 0 to be always present, even
> -	 * in the systems supporting sparse numa ids.
> -	 */
> -	return !!lru->node[0].memcg_lrus;
> +	return !!lru->node[first_online_node].memcg_lrus;
>  }
>  
>  static inline struct list_lru_one *

Yep, I didn't expect node 0 could ever be unavailable, my bad.
The patch looks fine to me:

Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>

However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
to check if a list_lru is memcg aware looks confusing. I guess we could
simply add a bool flag to list_lru instead. Something like this, maybe:

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head	list;
 	int			shrinker_id;
+	bool			memcg_aware;
 #endif
 };
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..8e605e40a4c6 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
 	if (!memcg_aware)
 		return 0;
 



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-05-09 12:25       ` Vladimir Davydov
@ 2019-05-09 16:05         ` Shakeel Butt
  2019-05-16 13:59         ` Michal Hocko
  1 sibling, 0 replies; 23+ messages in thread
From: Shakeel Butt @ 2019-05-09 16:05 UTC
  To: Vladimir Davydov
  Cc: Jiri Slaby, Linux MM, LKML, Johannes Weiner, Michal Hocko,
	Cgroups, Raghavendra K T

On Thu, May 9, 2019 at 5:25 AM Vladimir Davydov <vdavydov.dev@gmail.com> wrote:
>
> On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> > We have a single node system with node 0 disabled:
> >   Scanning NUMA topology in Northbridge 24
> >   Number of physical nodes 2
> >   Skipping disabled node 0
> >   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> >   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> >
> > This causes crashes in memcg when system boots:
> >   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> >   #PF error: [normal kernel read fault]
> > ...
> >   RIP: 0010:list_lru_add+0x94/0x170
> > ...
> >   Call Trace:
> >    d_lru_add+0x44/0x50
> >    dput.part.34+0xfc/0x110
> >    __fput+0x108/0x230
> >    task_work_run+0x9f/0xc0
> >    exit_to_usermode_loop+0xf5/0x100
> >
> > It is reproducible as far back as 4.12. I did not try older kernels. You
> > have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> > -- it was not investigated); it cannot be reproduced with systemd 234.
> >
> > The system crashes because the size of the lru array is never updated in
> > memcg_update_all_list_lrus and the reads go past the zero-sized array,
> > causing dereferences of random memory.
> >
> > The root cause is the list_lru_memcg_aware check in the list_lru code:
> > it assumes node 0 is always present, but that is not true on some
> > systems, as can be seen above.
> >
> > So fix this by checking the first online node instead of node 0.
> >
> > Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> > Cc: <cgroups@vger.kernel.org>
> > Cc: <linux-mm@kvack.org>
> > Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> > ---
> >  mm/list_lru.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index 0730bf8ff39f..7689910f1a91 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
> >
> >  static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >  {
> > -     /*
> > -      * This needs node 0 to be always present, even
> > -      * in the systems supporting sparse numa ids.
> > -      */
> > -     return !!lru->node[0].memcg_lrus;
> > +     return !!lru->node[first_online_node].memcg_lrus;
> >  }
> >
> >  static inline struct list_lru_one *
>
> Yep, I didn't expect node 0 could ever be unavailable, my bad.
> The patch looks fine to me:
>
> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
>
> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> to check if a list_lru is memcg aware looks confusing. I guess we could
> simply add a bool flag to list_lru instead. Something like this, maybe:
>

I think the bool flag approach is much better: it makes no assumptions
about node initialization.

If we go with the bool approach, then add

Reviewed-by: Shakeel Butt <shakeelb@google.com>

> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
>  #ifdef CONFIG_MEMCG_KMEM
>         struct list_head        list;
>         int                     shrinker_id;
> +       bool                    memcg_aware;
>  #endif
>  };
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..8e605e40a4c6 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -       /*
> -        * This needs node 0 to be always present, even
> -        * in the systems supporting sparse numa ids.
> -        */
> -       return !!lru->node[0].memcg_lrus;
> +       return lru->memcg_aware;
>  }
>
>  static inline struct list_lru_one *
> @@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
>         int i;
>
> +       lru->memcg_aware = memcg_aware;
>         if (!memcg_aware)
>                 return 0;
>



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-05-09 12:25       ` Vladimir Davydov
  2019-05-09 16:05         ` Shakeel Butt
@ 2019-05-16 13:59         ` Michal Hocko
  2019-05-17  4:48           ` Jiri Slaby
  1 sibling, 1 reply; 23+ messages in thread
From: Michal Hocko @ 2019-05-16 13:59 UTC
  To: Vladimir Davydov
  Cc: Jiri Slaby, linux-mm, linux-kernel, Johannes Weiner, cgroups,
	Raghavendra K T

On Thu 09-05-19 15:25:26, Vladimir Davydov wrote:
> On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> > We have a single node system with node 0 disabled:
> >   Scanning NUMA topology in Northbridge 24
> >   Number of physical nodes 2
> >   Skipping disabled node 0
> >   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> >   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> > 
> > This causes crashes in memcg when system boots:
> >   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> >   #PF error: [normal kernel read fault]
> > ...
> >   RIP: 0010:list_lru_add+0x94/0x170
> > ...
> >   Call Trace:
> >    d_lru_add+0x44/0x50
> >    dput.part.34+0xfc/0x110
> >    __fput+0x108/0x230
> >    task_work_run+0x9f/0xc0
> >    exit_to_usermode_loop+0xf5/0x100
> > 
> > It is reproducible as far back as 4.12. I did not try older kernels. You
> > have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> > -- it was not investigated); it cannot be reproduced with systemd 234.
> >
> > The system crashes because the size of the lru array is never updated in
> > memcg_update_all_list_lrus and the reads go past the zero-sized array,
> > causing dereferences of random memory.
> >
> > The root cause is the list_lru_memcg_aware check in the list_lru code:
> > it assumes node 0 is always present, but that is not true on some
> > systems, as can be seen above.
> > 
> > So fix this by checking the first online node instead of node 0.
> > 
> > Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> > Cc: <cgroups@vger.kernel.org>
> > Cc: <linux-mm@kvack.org>
> > Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> > ---
> >  mm/list_lru.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> > 
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index 0730bf8ff39f..7689910f1a91 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
> >  
> >  static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >  {
> > -	/*
> > -	 * This needs node 0 to be always present, even
> > -	 * in the systems supporting sparse numa ids.
> > -	 */
> > -	return !!lru->node[0].memcg_lrus;
> > +	return !!lru->node[first_online_node].memcg_lrus;
> >  }
> >  
> >  static inline struct list_lru_one *
> 
> Yep, I didn't expect node 0 could ever be unavailable, my bad.
> The patch looks fine to me:
> 
> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> 
> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> to check if a list_lru is memcg aware looks confusing. I guess we could
> simply add a bool flag to list_lru instead. Something like this, maybe:

Yes, this makes much more sense to me!

> 
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
>  #ifdef CONFIG_MEMCG_KMEM
>  	struct list_head	list;
>  	int			shrinker_id;
> +	bool			memcg_aware;
>  #endif
>  };
>  
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..8e605e40a4c6 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>  
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -	/*
> -	 * This needs node 0 to be always present, even
> -	 * in the systems supporting sparse numa ids.
> -	 */
> -	return !!lru->node[0].memcg_lrus;
> +	return lru->memcg_aware;
>  }
>  
>  static inline struct list_lru_one *
> @@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
>  	int i;
>  
> +	lru->memcg_aware = memcg_aware;
>  	if (!memcg_aware)
>  		return 0;
>  

-- 
Michal Hocko
SUSE Labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-05-16 13:59         ` Michal Hocko
@ 2019-05-17  4:48           ` Jiri Slaby
  2019-05-17  8:00             ` Vladimir Davydov
  0 siblings, 1 reply; 23+ messages in thread
From: Jiri Slaby @ 2019-05-17  4:48 UTC
  To: Michal Hocko, Vladimir Davydov
  Cc: linux-mm, linux-kernel, Johannes Weiner, cgroups, Raghavendra K T

On 16. 05. 19, 15:59, Michal Hocko wrote:
>> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
>> to check if a list_lru is memcg aware looks confusing. I guess we could
>> simply add a bool flag to list_lru instead. Something like this, maybe:
> 
> Yes, this makes much more sense to me!

I am not sure whether I should send a patch with this solution or whether
Vladimir will (given he is the author and already has a diff).

thanks,
-- 
js
suse labs



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-05-17  4:48           ` Jiri Slaby
@ 2019-05-17  8:00             ` Vladimir Davydov
  2019-05-17  8:16               ` Jiri Slaby
  2019-05-17 11:42               ` [PATCH v2] " Jiri Slaby
  0 siblings, 2 replies; 23+ messages in thread
From: Vladimir Davydov @ 2019-05-17  8:00 UTC
  To: Jiri Slaby
  Cc: Michal Hocko, linux-mm, linux-kernel, Johannes Weiner, cgroups,
	Raghavendra K T

On Fri, May 17, 2019 at 06:48:37AM +0200, Jiri Slaby wrote:
> On 16. 05. 19, 15:59, Michal Hocko wrote:
> >> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> >> to check if a list_lru is memcg aware looks confusing. I guess we could
> >> simply add a bool flag to list_lru instead. Something like this, maybe:
> > 
> > Yes, this makes much more sense to me!
> 
> I am not sure whether I should send a patch with this solution or whether
> Vladimir will (given he is the author and already has a diff).

I didn't even try to compile it, let alone test it. I'd appreciate it if
you could wrap it up and send it out under your authorship. Feel free to
add my acked-by.



* Re: [PATCH] memcg: make it work on sparse non-0-node systems
  2019-05-17  8:00             ` Vladimir Davydov
@ 2019-05-17  8:16               ` Jiri Slaby
  2019-05-17 11:42               ` [PATCH v2] " Jiri Slaby
  1 sibling, 0 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-05-17  8:16 UTC
  To: Vladimir Davydov
  Cc: Michal Hocko, linux-mm, linux-kernel, Johannes Weiner, cgroups,
	Raghavendra K T

On 17. 05. 19, 10:00, Vladimir Davydov wrote:
> On Fri, May 17, 2019 at 06:48:37AM +0200, Jiri Slaby wrote:
>> On 16. 05. 19, 15:59, Michal Hocko wrote:
>>>> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
>>>> to check if a list_lru is memcg aware looks confusing. I guess we could
>>>> simply add a bool flag to list_lru instead. Something like this, maybe:
>>>
>>> Yes, this makes much more sense to me!
>>
>> I am not sure whether I should send a patch with this solution or whether
>> Vladimir will (given he is the author and already has a diff).
> 
> I didn't even try to compile it, let alone test it. I'd appreciate it if
> you could wrap it up and send it out under your authorship. Feel free to
> add my acked-by.

OK, NP.

thanks,
-- 
js
suse labs



* [PATCH v2] memcg: make it work on sparse non-0-node systems
  2019-05-17  8:00             ` Vladimir Davydov
  2019-05-17  8:16               ` Jiri Slaby
@ 2019-05-17 11:42               ` Jiri Slaby
  2019-05-17 12:13                 ` Shakeel Butt
                                   ` (2 more replies)
  1 sibling, 3 replies; 23+ messages in thread
From: Jiri Slaby @ 2019-05-17 11:42 UTC
  To: linux-mm
  Cc: linux-kernel, Jiri Slaby, Johannes Weiner, Michal Hocko,
	Vladimir Davydov, cgroups, Raghavendra K T

We have a single node system with node 0 disabled:
  Scanning NUMA topology in Northbridge 24
  Number of physical nodes 2
  Skipping disabled node 0
  Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
  NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when system boots:
  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  #PF error: [normal kernel read fault]
...
  RIP: 0010:list_lru_add+0x94/0x170
...
  Call Trace:
   d_lru_add+0x44/0x50
   dput.part.34+0xfc/0x110
   __fput+0x108/0x230
   task_work_run+0x9f/0xc0
   exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the exact reason is unknown
-- it was not investigated); it cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus and the reads go past the zero-sized array,
causing dereferences of random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, but that is not true on some
systems, as can be seen above.

So fix this by avoiding checks on node 0. Remember the memcg-awareness
by a bool flag in struct list_lru.

[v2] use the idea proposed by Vladimir -- the bool flag.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <cgroups@vger.kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---
 include/linux/list_lru.h | 1 +
 mm/list_lru.c            | 8 +++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head	list;
 	int			shrinker_id;
+	bool			memcg_aware;
 #endif
 };
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..d3b538146efd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
+
 	if (!memcg_aware)
 		return 0;
 
-- 
2.21.0



* Re: [PATCH v2] memcg: make it work on sparse non-0-node systems
  2019-05-17 11:42               ` [PATCH v2] " Jiri Slaby
@ 2019-05-17 12:13                 ` Shakeel Butt
  2019-05-17 12:27                 ` Michal Hocko
  2019-05-22  9:19                 ` [PATCH -resend " Jiri Slaby
  2 siblings, 0 replies; 23+ messages in thread
From: Shakeel Butt @ 2019-05-17 12:13 UTC
  To: Jiri Slaby
  Cc: Linux MM, LKML, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Cgroups, Raghavendra K T

On Fri, May 17, 2019 at 4:42 AM Jiri Slaby <jslaby@suse.cz> wrote:
>
> We have a single node system with node 0 disabled:
>   Scanning NUMA topology in Northbridge 24
>   Number of physical nodes 2
>   Skipping disabled node 0
>   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
>   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
>
> This causes crashes in memcg when system boots:
>   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>   #PF error: [normal kernel read fault]
> ...
>   RIP: 0010:list_lru_add+0x94/0x170
> ...
>   Call Trace:
>    d_lru_add+0x44/0x50
>    dput.part.34+0xfc/0x110
>    __fput+0x108/0x230
>    task_work_run+0x9f/0xc0
>    exit_to_usermode_loop+0xf5/0x100
>
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> -- it was not investigated); it cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus and the reads go past the zero-sized array,
> causing dereferences of random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, but that is not true on some
> systems, as can be seen above.
>
> So fix this by avoiding checks on node 0. Remember the memcg-awareness
> by a bool flag in struct list_lru.
>
> [v2] use the idea proposed by Vladimir -- the bool flag.
>
> Signed-off-by: Jiri Slaby <jslaby@suse.cz>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: <cgroups@vger.kernel.org>
> Cc: <linux-mm@kvack.org>
> Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
> ---
>  include/linux/list_lru.h | 1 +
>  mm/list_lru.c            | 8 +++-----
>  2 files changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
>  #ifdef CONFIG_MEMCG_KMEM
>         struct list_head        list;
>         int                     shrinker_id;
> +       bool                    memcg_aware;
>  #endif
>  };
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..d3b538146efd 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -       /*
> -        * This needs node 0 to be always present, even
> -        * in the systems supporting sparse numa ids.
> -        */
> -       return !!lru->node[0].memcg_lrus;
> +       return lru->memcg_aware;
>  }
>
>  static inline struct list_lru_one *
> @@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
>         int i;
>
> +       lru->memcg_aware = memcg_aware;
> +
>         if (!memcg_aware)
>                 return 0;
>
> --
> 2.21.0
>



* Re: [PATCH v2] memcg: make it work on sparse non-0-node systems
  2019-05-17 11:42               ` [PATCH v2] " Jiri Slaby
  2019-05-17 12:13                 ` Shakeel Butt
@ 2019-05-17 12:27                 ` Michal Hocko
  2019-05-22  9:19                 ` [PATCH -resend " Jiri Slaby
  2 siblings, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2019-05-17 12:27 UTC
  To: Jiri Slaby
  Cc: linux-mm, linux-kernel, Johannes Weiner, Vladimir Davydov,
	cgroups, Raghavendra K T

On Fri 17-05-19 13:42:04, Jiri Slaby wrote:
> We have a single node system with node 0 disabled:
>   Scanning NUMA topology in Northbridge 24
>   Number of physical nodes 2
>   Skipping disabled node 0
>   Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
>   NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> 
> This causes crashes in memcg when system boots:
>   BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>   #PF error: [normal kernel read fault]
> ...
>   RIP: 0010:list_lru_add+0x94/0x170
> ...
>   Call Trace:
>    d_lru_add+0x44/0x50
>    dput.part.34+0xfc/0x110
>    __fput+0x108/0x230
>    task_work_run+0x9f/0xc0
>    exit_to_usermode_loop+0xf5/0x100
> 
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the exact reason is unknown
> -- it was not investigated); it cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus and the reads go past the zero-sized array,
> causing dereferences of random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, but that is not true on some
> systems, as can be seen above.
> 
> So fix this by avoiding checks on node 0. Remember the memcg-awareness
> by a bool flag in struct list_lru.
> 
> [v2] use the idea proposed by Vladimir -- the bool flag.
> 
> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
> Cc: <cgroups@vger.kernel.org>
> Cc: <linux-mm@kvack.org>
> Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>

Fixes: 60d3fd32a7a9 ("list_lru: introduce per-memcg lists")
unless I have missed something

Cc: stable sounds like a good idea to me as well; nobody has noticed
this until now, but machines without node 0 are quite rare.

I haven't checked all users of list_lru, but the structure size increase
shouldn't be a big problem. There tends to be only a limited number of
these structures, so the total growth shouldn't be huge.

So this looks good to me.
Acked-by: Michal Hocko <mhocko@suse.com>

Thanks a lot Jiri!

> ---
>  include/linux/list_lru.h | 1 +
>  mm/list_lru.c            | 8 +++-----
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
>  #ifdef CONFIG_MEMCG_KMEM
>  	struct list_head	list;
>  	int			shrinker_id;
> +	bool			memcg_aware;
>  #endif
>  };
>  
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..d3b538146efd 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>  
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
> -	/*
> -	 * This needs node 0 to be always present, even
> -	 * in the systems supporting sparse numa ids.
> -	 */
> -	return !!lru->node[0].memcg_lrus;
> +	return lru->memcg_aware;
>  }
>  
>  static inline struct list_lru_one *
> @@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
>  {
>  	int i;
>  
> +	lru->memcg_aware = memcg_aware;
> +
>  	if (!memcg_aware)
>  		return 0;
>  
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs



* [PATCH -resend v2] memcg: make it work on sparse non-0-node systems
  2019-05-17 11:42               ` [PATCH v2] " Jiri Slaby
  2019-05-17 12:13                 ` Shakeel Butt
  2019-05-17 12:27                 ` Michal Hocko
@ 2019-05-22  9:19                 ` Jiri Slaby
  2019-05-29 13:14                   ` Sasha Levin
  2 siblings, 1 reply; 23+ messages in thread
From: Jiri Slaby @ 2019-05-22  9:19 UTC
  To: akpm
  Cc: linux-kernel, Jiri Slaby, Johannes Weiner, Michal Hocko,
	Vladimir Davydov, Shakeel Butt, cgroups, stable, linux-mm,
	Raghavendra K T

We have a single node system with node 0 disabled:
  Scanning NUMA topology in Northbridge 24
  Number of physical nodes 2
  Skipping disabled node 0
  Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
  NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when system boots:
  BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
  #PF error: [normal kernel read fault]
...
  RIP: 0010:list_lru_add+0x94/0x170
...
  Call Trace:
   d_lru_add+0x44/0x50
   dput.part.34+0xfc/0x110
   __fput+0x108/0x230
   task_work_run+0x9f/0xc0
   exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the exact reason is unknown
-- it was not investigated); it cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus and the reads go past the zero-sized array,
causing dereferences of random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, but that is not true on some
systems, as can be seen above.

So fix this by avoiding checks on node 0. Remember the memcg-awareness
by a bool flag in struct list_lru.

[v2] use the idea proposed by Vladimir -- the bool flag.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Fixes: 60d3fd32a7a9 ("list_lru: introduce per-memcg lists")
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Suggested-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: <cgroups@vger.kernel.org>
Cc: <stable@vger.kernel.org>
Cc: <linux-mm@kvack.org>
Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
---

This is only a resend. I did not send it akpm's way previously.

 include/linux/list_lru.h | 1 +
 mm/list_lru.c            | 8 +++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head	list;
 	int			shrinker_id;
+	bool			memcg_aware;
 #endif
 };
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..d3b538146efd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
+
 	if (!memcg_aware)
 		return 0;
 
-- 
2.21.0



* Re: [PATCH -resend v2] memcg: make it work on sparse non-0-node systems
  2019-05-22  9:19                 ` [PATCH -resend " Jiri Slaby
@ 2019-05-29 13:14                   ` Sasha Levin
  0 siblings, 0 replies; 23+ messages in thread
From: Sasha Levin @ 2019-05-29 13:14 UTC
  To: Sasha Levin, Jiri Slaby, akpm
  Cc: linux-kernel, Johannes Weiner, cgroups, stable, linux-mm,
	Raghavendra K T

Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 60d3fd32a7a9d list_lru: introduce per-memcg lists.

The bot has tested the following trees: v5.1.4, v5.0.18, v4.19.45, v4.14.121, v4.9.178, v4.4.180.

v5.1.4: Build OK!
v5.0.18: Build OK!
v4.19.45: Build OK!
v4.14.121: Failed to apply! Possible dependencies:
    0200894d11551 ("new helper: destroy_unused_super()")
    2b3648a6ff83b ("fs/super.c: refactor alloc_super()")
    39887653aab4c ("mm/workingset.c: refactor workingset_init()")
    8e04944f0ea8b ("mm,vmscan: Allow preallocating memory for register_shrinker().")
    c92e8e10cafea ("fs: propagate shrinker::id to list_lru")

v4.9.178: Failed to apply! Possible dependencies:
    0200894d11551 ("new helper: destroy_unused_super()")
    14b468791fa95 ("mm: workingset: move shadow entry tracking to radix tree exceptional tracking")
    2b3648a6ff83b ("fs/super.c: refactor alloc_super()")
    39887653aab4c ("mm/workingset.c: refactor workingset_init()")
    4d693d08607ab ("lib: radix-tree: update callback for changing leaf nodes")
    6d75f366b9242 ("lib: radix-tree: check accounting of existing slot replacement users")
    8e04944f0ea8b ("mm,vmscan: Allow preallocating memory for register_shrinker().")
    c92e8e10cafea ("fs: propagate shrinker::id to list_lru")
    f4b109c6dad54 ("lib: radix-tree: add entry deletion support to __radix_tree_replace()")
    f7942430e40f1 ("lib: radix-tree: native accounting of exceptional entries")

v4.4.180: Failed to apply! Possible dependencies:
    0200894d11551 ("new helper: destroy_unused_super()")
    0cefabdaf757a ("mm: workingset: fix premature shadow node shrinking with cgroups")
    0e749e54244ee ("dax: increase granularity of dax_clear_blocks() operations")
    14b468791fa95 ("mm: workingset: move shadow entry tracking to radix tree exceptional tracking")
    162453bfbdf4c ("mm: workingset: separate shadow unpacking and refault calculation")
    2b3648a6ff83b ("fs/super.c: refactor alloc_super()")
    39887653aab4c ("mm/workingset.c: refactor workingset_init()")
    52db400fcd502 ("pmem, dax: clean up clear_pmem()")
    612e44939c3c7 ("mm: workingset: eviction buckets for bigmem/lowbit machines")
    689c94f03ae25 ("mm: workingset: #define radix entry eviction mask")
    6e4eab577a0ca ("fs: Add user namespace member to struct super_block")
    8e04944f0ea8b ("mm,vmscan: Allow preallocating memory for register_shrinker().")
    ac401cc782429 ("dax: New fault locking")
    b2e0d1625e193 ("dax: fix lifetime of in-kernel dax mappings with dax_map_atomic()")
    c92e8e10cafea ("fs: propagate shrinker::id to list_lru")
    d91ee87d8d85a ("vfs: Pass data, ns, and ns->userns to mount_ns")
    e4b2749158631 ("DAX: move RADIX_DAX_ definitions to dax.c")
    f7942430e40f1 ("lib: radix-tree: native accounting of exceptional entries")
    f9fe48bece3af ("dax: support dirty DAX entries in radix tree")


How should we proceed with this patch?

--
Thanks,
Sasha

