linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] mm/cgroup soft limit data allocation
@ 2017-02-22 15:58 Laurent Dufour
  2017-02-22 15:58 ` [PATCH 1/2] mm/cgroup: avoid panic when init with low memory Laurent Dufour
  2017-02-22 15:58 ` [PATCH 2/2] mm/cgroup: delay soft limit data allocation Laurent Dufour
  0 siblings, 2 replies; 10+ messages in thread
From: Laurent Dufour @ 2017-02-22 15:58 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: cgroups, linux-mm, linux-kernel

The first patch of this series fixes a panic that occurs when the
allocation of the soft limit data itself triggers reclaim, which in
turn tries to use the soft limit data before it is allocated.

The second patch, as suggested by Michal Hocko, goes a step further by
delaying the soft limit data allocation until a soft limit is actually
set.

Laurent Dufour (2):
  mm/cgroup: avoid panic when init with low memory
  mm/cgroup: delay soft limit data allocation

 mm/memcontrol.c | 54 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 40 insertions(+), 14 deletions(-)

-- 
2.7.4


* [PATCH 1/2] mm/cgroup: avoid panic when init with low memory
  2017-02-22 15:58 [PATCH 0/2] mm/cgroup soft limit data allocation Laurent Dufour
@ 2017-02-22 15:58 ` Laurent Dufour
  2017-02-23  1:12   ` Balbir Singh
  2017-02-22 15:58 ` [PATCH 2/2] mm/cgroup: delay soft limit data allocation Laurent Dufour
  1 sibling, 1 reply; 10+ messages in thread
From: Laurent Dufour @ 2017-02-22 15:58 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: cgroups, linux-mm, linux-kernel

The system may panic during initialisation when almost all the memory
has been assigned to huge pages via the kernel command line parameter
hugepages=xxxx. The panic looks like this:

[    0.082289] Unable to handle kernel paging request for data at address 0x00000000
[    0.082338] Faulting instruction address: 0xc000000000302b88
[    0.082377] Oops: Kernel access of bad area, sig: 11 [#1]
[    0.082408] SMP NR_CPUS=2048 [    0.082424] NUMA
[    0.082440] pSeries
[    0.082457] Modules linked in:
[    0.082490] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.0-15-generic #16-Ubuntu
[    0.082536] task: c00000021ed01600 task.stack: c00000010d108000
[    0.082575] NIP: c000000000302b88 LR: c000000000270e04 CTR: c00000000016cfd0
[    0.082621] REGS: c00000010d10b2c0 TRAP: 0300   Not tainted (4.9.0-15-generic)
[    0.082666] MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE>[ 0.082770]   CR: 28424422  XER: 00000000
[    0.082793] CFAR: c0000000003d28b8 DAR: 0000000000000000 DSISR: 40000000 SOFTE: 1
GPR00: c000000000270e04 c00000010d10b540 c00000000141a300 c00000010fff6300
GPR04: 0000000000000000 00000000026012c0 c00000010d10b630 0000000487ab0000
GPR08: 000000010ee90000 c000000001454fd8 0000000000000000 0000000000000000
GPR12: 0000000000004400 c00000000fb80000 00000000026012c0 00000000026012c0
GPR16: 00000000026012c0 0000000000000000 0000000000000000 0000000000000002
GPR20: 000000000000000c 0000000000000000 0000000000000000 00000000024200c0
GPR24: c0000000016eef48 0000000000000000 c00000010fff7d00 00000000026012c0
GPR28: 0000000000000000 c00000010fff7d00 c00000010fff6300 c00000010d10b6d0
NIP [c000000000302b88] mem_cgroup_soft_limit_reclaim+0xf8/0x4f0
[    0.083456] LR [c000000000270e04] do_try_to_free_pages+0x1b4/0x450
[    0.083494] Call Trace:
[    0.083511] [c00000010d10b540] [c00000010d10b640] 0xc00000010d10b640 (unreliable)
[    0.083567] [c00000010d10b610] [c000000000270e04] do_try_to_free_pages+0x1b4/0x450
[    0.083622] [c00000010d10b6b0] [c000000000271198] try_to_free_pages+0xf8/0x270
[    0.083676] [c00000010d10b740] [c000000000259dd8] __alloc_pages_nodemask+0x7a8/0xff0
[    0.083729] [c00000010d10b960] [c0000000002dd274] new_slab+0x104/0x8e0
[    0.083776] [c00000010d10ba40] [c0000000002e03d0] ___slab_alloc+0x620/0x700
[    0.083822] [c00000010d10bb70] [c0000000002e04e4] __slab_alloc+0x34/0x60
[    0.083868] [c00000010d10bba0] [c0000000002e101c] kmem_cache_alloc_node_trace+0xdc/0x310
[    0.083947] [c00000010d10bc00] [c000000000eb8120] mem_cgroup_init+0x158/0x1c8
[    0.083994] [c00000010d10bc40] [c00000000000dde8] do_one_initcall+0x68/0x1d0
[    0.084041] [c00000010d10bd00] [c000000000e84184] kernel_init_freeable+0x278/0x360
[    0.084094] [c00000010d10bdc0] [c00000000000e714] kernel_init+0x24/0x170
[    0.084143] [c00000010d10be30] [c00000000000c0e8] ret_from_kernel_thread+0x5c/0x74
[    0.084195] Instruction dump:
[    0.084220] eb81ffe0 eba1ffe8 ebc1fff0 ebe1fff8 4e800020 3d230001 e9499a42 3d220004
[    0.084300] 3929acd8 794a1f24 7d295214 eac90100 <e9360000> 2fa90000 419eff74 3b200000
[    0.084382] ---[ end trace 342f5208b00d01b6 ]---

This is a chicken-and-egg issue: the kernel tries to free memory while
allocating the per-node data in mem_cgroup_init(), but that reclaim
path calls mem_cgroup_soft_limit_reclaim(), which assumes the data is
already allocated.

As mem_cgroup_soft_limit_reclaim() is best effort, it should simply
return when the data is not yet allocated.
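
To make the failure mode concrete, here is a minimal userspace model of
the recursion and of the check added below (illustrative only: the
names and the toy allocator are invented for this sketch and are not
kernel code):

#include <stdio.h>
#include <stdlib.h>

#define NR_NODES 4

struct tree_per_node { int rb_root; };

/* Models soft_limit_tree.rb_tree_per_node[]: NULL until init completes. */
static struct tree_per_node *tree[NR_NODES];

/* Models mem_cgroup_soft_limit_reclaim(), reached from the allocator. */
static unsigned long soft_limit_reclaim(int nid)
{
	struct tree_per_node *mctz = tree[nid];

	/* Without the !mctz check this would dereference NULL during init. */
	if (!mctz || mctz->rb_root == 0)
		return 0;
	return 1;
}

/* Models an allocation that enters direct reclaim under memory pressure. */
static void *alloc_entering_reclaim(size_t size, int nid)
{
	soft_limit_reclaim(nid);
	return calloc(1, size);
}

int main(void)
{
	int nid;

	/* Models mem_cgroup_init(): the allocations themselves reclaim. */
	for (nid = 0; nid < NR_NODES; nid++)
		tree[nid] = alloc_entering_reclaim(sizeof(*tree[nid]), nid);
	puts("init completed without dereferencing NULL");
	return 0;
}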

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memcontrol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 45867e439d31..228ac44f77e1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2557,7 +2557,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 	 * is empty. Do it lockless to prevent lock bouncing. Races
 	 * are acceptable as soft limit is best effort anyway.
 	 */
-	if (RB_EMPTY_ROOT(&mctz->rb_root))
+	if (!mctz || RB_EMPTY_ROOT(&mctz->rb_root))
 		return 0;
 
 	/*
-- 
2.7.4


* [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 15:58 [PATCH 0/2] mm/cgroup soft limit data allocation Laurent Dufour
  2017-02-22 15:58 ` [PATCH 1/2] mm/cgroup: avoid panic when init with low memory Laurent Dufour
@ 2017-02-22 15:58 ` Laurent Dufour
  2017-02-22 17:11   ` Michal Hocko
  2017-02-23  1:16   ` Balbir Singh
  1 sibling, 2 replies; 10+ messages in thread
From: Laurent Dufour @ 2017-02-22 15:58 UTC (permalink / raw)
  To: Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: cgroups, linux-mm, linux-kernel

Until a soft limit is set on a cgroup, its soft limit data is useless,
so delay the allocation until a limit is actually set.

Suggested-by: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 mm/memcontrol.c | 52 +++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 13 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 228ac44f77e1..bc2e6ab69c0c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -142,6 +142,8 @@ struct mem_cgroup_tree {
 	struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
 };
 
+static DEFINE_MUTEX(soft_limit_mutex);
+static bool soft_limit_initialized;
 static struct mem_cgroup_tree soft_limit_tree __read_mostly;
 
 /* for OOM */
@@ -381,6 +383,36 @@ mem_cgroup_page_nodeinfo(struct mem_cgroup *memcg, struct page *page)
 	return memcg->nodeinfo[nid];
 }
 
+static void soft_limit_initialize(void)
+{
+	int node;
+
+	mutex_lock(&soft_limit_mutex);
+	if (soft_limit_initialized)
+		goto bail;
+
+	for_each_node(node) {
+		struct mem_cgroup_tree_per_node *rtpn;
+
+		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
+				    node_online(node) ? node : NUMA_NO_NODE);
+		/* Best effort; should we warn if the allocation fails? */
+		if (rtpn) {
+			rtpn->rb_root = RB_ROOT;
+			spin_lock_init(&rtpn->lock);
+			/*
+			 * We don't want the compiler to set rb_tree_per_node
+			 * before rb_root and lock are initialized.
+			 */
+			WRITE_ONCE(soft_limit_tree.rb_tree_per_node[node],
+				   rtpn);
+		}
+	}
+	soft_limit_initialized = true;
+bail:
+	mutex_unlock(&soft_limit_mutex);
+}
+
 static struct mem_cgroup_tree_per_node *
 soft_limit_tree_node(int nid)
 {
@@ -465,6 +497,8 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
 	struct mem_cgroup_tree_per_node *mctz;
 
 	mctz = soft_limit_tree_from_page(page);
+	if (!mctz)
+		return;
 	/*
 	 * Necessary to update all ancestors when hierarchy is used.
 	 * because their event counter is not touched.
@@ -502,7 +536,8 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
 	for_each_node(nid) {
 		mz = mem_cgroup_nodeinfo(memcg, nid);
 		mctz = soft_limit_tree_node(nid);
-		mem_cgroup_remove_exceeded(mz, mctz);
+		if (mctz)
+			mem_cgroup_remove_exceeded(mz, mctz);
 	}
 }
 
@@ -3000,6 +3035,8 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
 		}
 		break;
 	case RES_SOFT_LIMIT:
+		if (!soft_limit_initialized)
+			soft_limit_initialize();
 		memcg->soft_limit = nr_pages;
 		ret = 0;
 		break;
@@ -5774,7 +5811,7 @@ __setup("cgroup.memory=", cgroup_memory);
  */
 static int __init mem_cgroup_init(void)
 {
-	int cpu, node;
+	int cpu;
 
 #ifndef CONFIG_SLOB
 	/*
@@ -5794,17 +5831,6 @@ static int __init mem_cgroup_init(void)
 		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
 			  drain_local_stock);
 
-	for_each_node(node) {
-		struct mem_cgroup_tree_per_node *rtpn;
-
-		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
-				    node_online(node) ? node : NUMA_NO_NODE);
-
-		rtpn->rb_root = RB_ROOT;
-		spin_lock_init(&rtpn->lock);
-		soft_limit_tree.rb_tree_per_node[node] = rtpn;
-	}
-
 	return 0;
 }
 subsys_initcall(mem_cgroup_init);
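
The WRITE_ONCE() above publishes each per-node entry only once its
rb_root and lock are initialized, so the lockless readers never see a
half-constructed node. Below is a standalone sketch of that
publish/lookup pattern (illustrative only: C11 atomics and pthreads
stand in for the kernel's WRITE_ONCE/READ_ONCE and mutex primitives):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

#define NR_NODES 4

struct per_node { int root; };

static _Atomic(struct per_node *) tree_per_node[NR_NODES];
static pthread_mutex_t init_mutex = PTHREAD_MUTEX_INITIALIZER;
static bool initialized;

/* Models soft_limit_initialize(): the first writer allocates and publishes. */
static void soft_limit_initialize(void)
{
	int n;

	pthread_mutex_lock(&init_mutex);
	if (!initialized) {
		for (n = 0; n < NR_NODES; n++) {
			struct per_node *p = calloc(1, sizeof(*p));

			if (p) {
				p->root = 1;	/* fully initialize first... */
				/*
				 * ...then publish. The release store keeps
				 * the initialization above from being
				 * reordered past the pointer assignment.
				 */
				atomic_store_explicit(&tree_per_node[n], p,
						      memory_order_release);
			}
		}
		initialized = true;
	}
	pthread_mutex_unlock(&init_mutex);
}

/* Models soft_limit_tree_node(): a lockless reader that may see NULL. */
static struct per_node *tree_node(int nid)
{
	return atomic_load_explicit(&tree_per_node[nid],
				    memory_order_acquire);
}

int main(void)
{
	soft_limit_initialize();
	return tree_node(0) ? 0 : 1;
}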
-- 
2.7.4


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 15:58 ` [PATCH 2/2] mm/cgroup: delay soft limit data allocation Laurent Dufour
@ 2017-02-22 17:11   ` Michal Hocko
  2017-02-22 17:50     ` Laurent Dufour
  2017-02-23  1:16   ` Balbir Singh
  1 sibling, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2017-02-22 17:11 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, linux-mm, linux-kernel

On Wed 22-02-17 16:58:11, Laurent Dufour wrote:
[...]
>  static struct mem_cgroup_tree_per_node *
>  soft_limit_tree_node(int nid)
>  {
> @@ -465,6 +497,8 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
>  	struct mem_cgroup_tree_per_node *mctz;
>  
>  	mctz = soft_limit_tree_from_page(page);
> +	if (!mctz)
> +		return;
>  	/*
>  	 * Necessary to update all ancestors when hierarchy is used.
>  	 * because their event counter is not touched.
> @@ -502,7 +536,8 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
>  	for_each_node(nid) {
>  		mz = mem_cgroup_nodeinfo(memcg, nid);
>  		mctz = soft_limit_tree_node(nid);
> -		mem_cgroup_remove_exceeded(mz, mctz);
> +		if (mctz)
> +			mem_cgroup_remove_exceeded(mz, mctz);
>  	}
>  }
>  

this belongs to the previous patch, right?
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 17:11   ` Michal Hocko
@ 2017-02-22 17:50     ` Laurent Dufour
  2017-02-22 18:24       ` Michal Hocko
  0 siblings, 1 reply; 10+ messages in thread
From: Laurent Dufour @ 2017-02-22 17:50 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, linux-mm, linux-kernel

On 22/02/2017 18:11, Michal Hocko wrote:
> On Wed 22-02-17 16:58:11, Laurent Dufour wrote:
> [...]
>>  static struct mem_cgroup_tree_per_node *
>>  soft_limit_tree_node(int nid)
>>  {
>> @@ -465,6 +497,8 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
>>  	struct mem_cgroup_tree_per_node *mctz;
>>  
>>  	mctz = soft_limit_tree_from_page(page);
>> +	if (!mctz)
>> +		return;
>>  	/*
>>  	 * Necessary to update all ancestors when hierarchy is used.
>>  	 * because their event counter is not touched.
>> @@ -502,7 +536,8 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
>>  	for_each_node(nid) {
>>  		mz = mem_cgroup_nodeinfo(memcg, nid);
>>  		mctz = soft_limit_tree_node(nid);
>> -		mem_cgroup_remove_exceeded(mz, mctz);
>> +		if (mctz)
>> +			mem_cgroup_remove_exceeded(mz, mctz);
>>  	}
>>  }
>>  
> 
> this belongs to the previous patch, right?

It could. I limited the first patch to fixing the panic I actually saw,
but if you prefer these hunks to be part of it, fair enough.
Tell me which you prefer.


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 17:50     ` Laurent Dufour
@ 2017-02-22 18:24       ` Michal Hocko
  2017-02-23  8:59         ` Laurent Dufour
  0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2017-02-22 18:24 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, linux-mm, linux-kernel

On Wed 22-02-17 18:50:19, Laurent Dufour wrote:
> On 22/02/2017 18:11, Michal Hocko wrote:
> > On Wed 22-02-17 16:58:11, Laurent Dufour wrote:
> > [...]
> >>  static struct mem_cgroup_tree_per_node *
> >>  soft_limit_tree_node(int nid)
> >>  {
> >> @@ -465,6 +497,8 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
> >>  	struct mem_cgroup_tree_per_node *mctz;
> >>  
> >>  	mctz = soft_limit_tree_from_page(page);
> >> +	if (!mctz)
> >> +		return;
> >>  	/*
> >>  	 * Necessary to update all ancestors when hierarchy is used.
> >>  	 * because their event counter is not touched.
> >> @@ -502,7 +536,8 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
> >>  	for_each_node(nid) {
> >>  		mz = mem_cgroup_nodeinfo(memcg, nid);
> >>  		mctz = soft_limit_tree_node(nid);
> >> -		mem_cgroup_remove_exceeded(mz, mctz);
> >> +		if (mctz)
> >> +			mem_cgroup_remove_exceeded(mz, mctz);
> >>  	}
> >>  }
> >>  
> > 
> > this belongs to the previous patch, right?
> 
> It could. I limited the first patch to fixing the panic I actually saw,
> but if you prefer these hunks to be part of it, fair enough.

Without these you would just blow up later AFAICS, so the fix is not
complete. This patch is not complete either, because the initialization
code should clean up if the allocation fails halfway through. I have
tried to do that and it blows up the code size a bit. I am not convinced
the savings are worth it after all...

Here is what I ended up with:
--- 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 44fb1e80701a..54d73c20124e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -141,7 +141,7 @@ struct mem_cgroup_tree {
 	struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
 };
 
-static struct mem_cgroup_tree soft_limit_tree __read_mostly;
+static struct mem_cgroup_tree *soft_limit_tree __read_mostly;
 
 /* for OOM */
 struct mem_cgroup_eventfd_list {
@@ -381,7 +381,9 @@ mem_cgroup_page_nodeinfo(struct mem_cgroup *memcg, struct page *page)
 static struct mem_cgroup_tree_per_node *
 soft_limit_tree_node(int nid)
 {
-	return soft_limit_tree.rb_tree_per_node[nid];
+	if (!soft_limit_tree)
+		return NULL;
+	return soft_limit_tree->rb_tree_per_node[nid];
 }
 
 static struct mem_cgroup_tree_per_node *
@@ -389,7 +391,9 @@ soft_limit_tree_from_page(struct page *page)
 {
 	int nid = page_to_nid(page);
 
-	return soft_limit_tree.rb_tree_per_node[nid];
+	if (!soft_limit_tree)
+		return NULL;
+	return soft_limit_tree->rb_tree_per_node[nid];
 }
 
 static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz,
@@ -2969,6 +2973,46 @@ static int memcg_update_tcp_limit(struct mem_cgroup *memcg, unsigned long limit)
 	return ret;
 }
 
+static bool soft_limit_initialize(void)
+{
+	static DEFINE_MUTEX(soft_limit_mutex);
+	struct mem_cgroup_tree *tree;
+	bool ret = true;
+	int node;
+
+	mutex_lock(&soft_limit_mutex);
+	if (soft_limit_tree)
+		goto out_unlock;
+
+	tree = kzalloc(sizeof(*tree), GFP_KERNEL);
+	if (!tree) {
+		ret = false;
+		goto out_unlock;
+	}
+	for_each_node(node) {
+		struct mem_cgroup_tree_per_node *rtpn;
+
+		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
+				    node_online(node) ? node : NUMA_NO_NODE);
+		if (!rtpn)
+			goto out_free;
+
+		rtpn->rb_root = RB_ROOT;
+		spin_lock_init(&rtpn->lock);
+		tree->rb_tree_per_node[node] = rtpn;
+	}
+	WRITE_ONCE(soft_limit_tree, tree);
+out_unlock:
+	mutex_unlock(&soft_limit_mutex);
+	return ret;
+out_free:
+	for_each_node(node)
+		kfree(tree->rb_tree_per_node[node]);
+	kfree(tree);
+	ret = false;
+	goto out_unlock;
+}
+
 /*
  * The user of this function is...
  * RES_LIMIT.
@@ -3007,6 +3051,11 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
 		}
 		break;
 	case RES_SOFT_LIMIT:
+		if (!soft_limit_initialize()) {
+			ret = -ENOMEM;
+			break;
+		}
+
 		memcg->soft_limit = nr_pages;
 		ret = 0;
 		break;
@@ -5800,17 +5849,6 @@ static int __init mem_cgroup_init(void)
 		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
 			  drain_local_stock);
 
-	for_each_node(node) {
-		struct mem_cgroup_tree_per_node *rtpn;
-
-		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
-				    node_online(node) ? node : NUMA_NO_NODE);
-
-		rtpn->rb_root = RB_ROOT;
-		spin_lock_init(&rtpn->lock);
-		soft_limit_tree.rb_tree_per_node[node] = rtpn;
-	}
-
 	return 0;
 }
 subsys_initcall(mem_cgroup_init);
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH 1/2] mm/cgroup: avoid panic when init with low memory
  2017-02-22 15:58 ` [PATCH 1/2] mm/cgroup: avoid panic when init with low memory Laurent Dufour
@ 2017-02-23  1:12   ` Balbir Singh
  0 siblings, 0 replies; 10+ messages in thread
From: Balbir Singh @ 2017-02-23  1:12 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups,
	linux-mm, linux-kernel

On Wed, Feb 22, 2017 at 04:58:10PM +0100, Laurent Dufour wrote:
> The system may panic during initialisation when almost all the memory
> has been assigned to huge pages via the kernel command line parameter
> hugepages=xxxx. The panic looks like this:
> 
> [    0.082289] Unable to handle kernel paging request for data at address 0x00000000
> [    0.082338] Faulting instruction address: 0xc000000000302b88
> [    0.082377] Oops: Kernel access of bad area, sig: 11 [#1]
> [    0.082408] SMP NR_CPUS=2048 [    0.082424] NUMA
> [    0.082440] pSeries
> [    0.082457] Modules linked in:
> [    0.082490] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.9.0-15-generic #16-Ubuntu
> [    0.082536] task: c00000021ed01600 task.stack: c00000010d108000
> [    0.082575] NIP: c000000000302b88 LR: c000000000270e04 CTR: c00000000016cfd0
> [    0.082621] REGS: c00000010d10b2c0 TRAP: 0300   Not tainted (4.9.0-15-generic)
> [    0.082666] MSR: 8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE>[ 0.082770]   CR: 28424422  XER: 00000000
> [    0.082793] CFAR: c0000000003d28b8 DAR: 0000000000000000 DSISR: 40000000 SOFTE: 1
> GPR00: c000000000270e04 c00000010d10b540 c00000000141a300 c00000010fff6300
> GPR04: 0000000000000000 00000000026012c0 c00000010d10b630 0000000487ab0000
> GPR08: 000000010ee90000 c000000001454fd8 0000000000000000 0000000000000000
> GPR12: 0000000000004400 c00000000fb80000 00000000026012c0 00000000026012c0
> GPR16: 00000000026012c0 0000000000000000 0000000000000000 0000000000000002
> GPR20: 000000000000000c 0000000000000000 0000000000000000 00000000024200c0
> GPR24: c0000000016eef48 0000000000000000 c00000010fff7d00 00000000026012c0
> GPR28: 0000000000000000 c00000010fff7d00 c00000010fff6300 c00000010d10b6d0
> NIP [c000000000302b88] mem_cgroup_soft_limit_reclaim+0xf8/0x4f0
> [    0.083456] LR [c000000000270e04] do_try_to_free_pages+0x1b4/0x450
> [    0.083494] Call Trace:
> [    0.083511] [c00000010d10b540] [c00000010d10b640] 0xc00000010d10b640 (unreliable)
> [    0.083567] [c00000010d10b610] [c000000000270e04] do_try_to_free_pages+0x1b4/0x450
> [    0.083622] [c00000010d10b6b0] [c000000000271198] try_to_free_pages+0xf8/0x270
> [    0.083676] [c00000010d10b740] [c000000000259dd8] __alloc_pages_nodemask+0x7a8/0xff0
> [    0.083729] [c00000010d10b960] [c0000000002dd274] new_slab+0x104/0x8e0
> [    0.083776] [c00000010d10ba40] [c0000000002e03d0] ___slab_alloc+0x620/0x700
> [    0.083822] [c00000010d10bb70] [c0000000002e04e4] __slab_alloc+0x34/0x60
> [    0.083868] [c00000010d10bba0] [c0000000002e101c] kmem_cache_alloc_node_trace+0xdc/0x310
> [    0.083947] [c00000010d10bc00] [c000000000eb8120] mem_cgroup_init+0x158/0x1c8
> [    0.083994] [c00000010d10bc40] [c00000000000dde8] do_one_initcall+0x68/0x1d0
> [    0.084041] [c00000010d10bd00] [c000000000e84184] kernel_init_freeable+0x278/0x360
> [    0.084094] [c00000010d10bdc0] [c00000000000e714] kernel_init+0x24/0x170
> [    0.084143] [c00000010d10be30] [c00000000000c0e8] ret_from_kernel_thread+0x5c/0x74
> [    0.084195] Instruction dump:
> [    0.084220] eb81ffe0 eba1ffe8 ebc1fff0 ebe1fff8 4e800020 3d230001 e9499a42 3d220004
> [    0.084300] 3929acd8 794a1f24 7d295214 eac90100 <e9360000> 2fa90000 419eff74 3b200000
> [    0.084382] ---[ end trace 342f5208b00d01b6 ]---
> 
> This is a chicken-and-egg issue: the kernel tries to free memory while
> allocating the per-node data in mem_cgroup_init(), but that reclaim
> path calls mem_cgroup_soft_limit_reclaim(), which assumes the data is
> already allocated.
> 
> As mem_cgroup_soft_limit_reclaim() is best effort, it should simply
> return when the data is not yet allocated.
> 
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---

Looks good to me, but we might need to audit other parts. We could add
some checks to see whether the memory cgroup is ready for reclaim; one
possible shape is sketched below.
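
One hypothetical shape for such a check (illustrative only:
mem_cgroup_soft_limit_ready() does not exist in the tree, and
soft_limit_tree_node() is the accessor this series touches):

	/*
	 * Hypothetical helper: reclaim paths could bail out while the
	 * per-node soft limit data is still unallocated.
	 */
	static inline bool mem_cgroup_soft_limit_ready(int nid)
	{
		return soft_limit_tree_node(nid) != NULL;
	}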

Balbir Singh.


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 15:58 ` [PATCH 2/2] mm/cgroup: delay soft limit data allocation Laurent Dufour
  2017-02-22 17:11   ` Michal Hocko
@ 2017-02-23  1:16   ` Balbir Singh
  2017-02-23  9:15     ` Laurent Dufour
  1 sibling, 1 reply; 10+ messages in thread
From: Balbir Singh @ 2017-02-23  1:16 UTC (permalink / raw)
  To: Laurent Dufour
  Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups,
	linux-mm, linux-kernel

On Wed, Feb 22, 2017 at 04:58:11PM +0100, Laurent Dufour wrote:
> Until a soft limit is set on a cgroup, its soft limit data is useless,
> so delay the allocation until a limit is actually set.
> 
> Suggested-by: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
<snip>
> @@ -3000,6 +3035,8 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
>  		}
>  		break;
>  	case RES_SOFT_LIMIT:
> +		if (!soft_limit_initialized)
> +			soft_limit_initialize();

What happens if this fails? Do we disable this interface?
It's a good idea, but I wonder whether we can cope with some
memory cgroups not supporting soft limits because memory was
short at the time they were configured.

>  		memcg->soft_limit = nr_pages;
>  		ret = 0;
>  		break;

Balbir Singh.


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-22 18:24       ` Michal Hocko
@ 2017-02-23  8:59         ` Laurent Dufour
  0 siblings, 0 replies; 10+ messages in thread
From: Laurent Dufour @ 2017-02-23  8:59 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, Vladimir Davydov, cgroups, linux-mm, linux-kernel

On 22/02/2017 19:24, Michal Hocko wrote:
> On Wed 22-02-17 18:50:19, Laurent Dufour wrote:
>> On 22/02/2017 18:11, Michal Hocko wrote:
>>> On Wed 22-02-17 16:58:11, Laurent Dufour wrote:
>>> [...]
>>>>  static struct mem_cgroup_tree_per_node *
>>>>  soft_limit_tree_node(int nid)
>>>>  {
>>>> @@ -465,6 +497,8 @@ static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
>>>>  	struct mem_cgroup_tree_per_node *mctz;
>>>>  
>>>>  	mctz = soft_limit_tree_from_page(page);
>>>> +	if (!mctz)
>>>> +		return;
>>>>  	/*
>>>>  	 * Necessary to update all ancestors when hierarchy is used.
>>>>  	 * because their event counter is not touched.
>>>> @@ -502,7 +536,8 @@ static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
>>>>  	for_each_node(nid) {
>>>>  		mz = mem_cgroup_nodeinfo(memcg, nid);
>>>>  		mctz = soft_limit_tree_node(nid);
>>>> -		mem_cgroup_remove_exceeded(mz, mctz);
>>>> +		if (mctz)
>>>> +			mem_cgroup_remove_exceeded(mz, mctz);
>>>>  	}
>>>>  }
>>>>  
>>>
>>> this belongs to the previous patch, right?
>>
>> It could. I limited the first patch to fixing the panic I actually saw,
>> but if you prefer these hunks to be part of it, fair enough.
> 
> Without these you would just blow up later AFAICS, so the fix is not
> complete. This patch is not complete either, because the initialization
> code should clean up if the allocation fails halfway through. I have
> tried to do that and it blows up the code size a bit. I am not convinced
> the savings are worth it after all...

I do agree: we will end up with more code than the data we are trying
not to allocate.

But your proposal sounds like the cleanest way to handle this, despite
the larger code size.
I'll send a new series along those lines.

> 
> Here is what I ended up with:
> --- 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 44fb1e80701a..54d73c20124e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -141,7 +141,7 @@ struct mem_cgroup_tree {
>  	struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
>  };
> 
> -static struct mem_cgroup_tree soft_limit_tree __read_mostly;
> +static struct mem_cgroup_tree *soft_limit_tree __read_mostly;
> 
>  /* for OOM */
>  struct mem_cgroup_eventfd_list {
> @@ -381,7 +381,9 @@ mem_cgroup_page_nodeinfo(struct mem_cgroup *memcg, struct page *page)
>  static struct mem_cgroup_tree_per_node *
>  soft_limit_tree_node(int nid)
>  {
> -	return soft_limit_tree.rb_tree_per_node[nid];
> +	if (!soft_limit_tree)
> +		return NULL;
> +	return soft_limit_tree->rb_tree_per_node[nid];
>  }
> 
>  static struct mem_cgroup_tree_per_node *
> @@ -389,7 +391,9 @@ soft_limit_tree_from_page(struct page *page)
>  {
>  	int nid = page_to_nid(page);
> 
> -	return soft_limit_tree.rb_tree_per_node[nid];
> +	if (!soft_limit_tree)
> +		return NULL;
> +	return soft_limit_tree->rb_tree_per_node[nid];
>  }
> 
>  static void __mem_cgroup_insert_exceeded(struct mem_cgroup_per_node *mz,
> @@ -2969,6 +2973,46 @@ static int memcg_update_tcp_limit(struct mem_cgroup *memcg, unsigned long limit)
>  	return ret;
>  }
> 
> +static bool soft_limit_initialize(void)
> +{
> +	static DEFINE_MUTEX(soft_limit_mutex);
> +	struct mem_cgroup_tree *tree;
> +	bool ret = true;
> +	int node;
> +
> +	mutex_lock(&soft_limit_mutex);
> +	if (soft_limit_tree)
> +		goto out_unlock;
> +
> +	tree = kzalloc(sizeof(*tree), GFP_KERNEL);
> +	if (!tree) {
> +		ret = false;
> +		goto out_unlock;
> +	}
> +	for_each_node(node) {
> +		struct mem_cgroup_tree_per_node *rtpn;
> +
> +		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
> +				    node_online(node) ? node : NUMA_NO_NODE);
> +		if (!rtpn)
> +			goto out_free;
> +
> +		rtpn->rb_root = RB_ROOT;
> +		spin_lock_init(&rtpn->lock);
> +		tree->rb_tree_per_node[node] = rtpn;
> +	}
> +	WRITE_ONCE(soft_limit_tree, tree);
> +out_unlock:
> +	mutex_unlock(&soft_limit_mutex);
> +	return ret;
> +out_free:
> +	for_each_node(node)
> +		kfree(tree->rb_tree_per_node[node]);
> +	kfree(tree);
> +	ret = false;
> +	goto out_unlock;
> +}
> +
>  /*
>   * The user of this function is...
>   * RES_LIMIT.
> @@ -3007,6 +3051,11 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
>  		}
>  		break;
>  	case RES_SOFT_LIMIT:
> +		if (!soft_limit_initialize()) {
> +			ret = -ENOMEM;
> +			break;
> +		}
> +
>  		memcg->soft_limit = nr_pages;
>  		ret = 0;
>  		break;
> @@ -5800,17 +5849,6 @@ static int __init mem_cgroup_init(void)
>  		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
>  			  drain_local_stock);
> 
> -	for_each_node(node) {
> -		struct mem_cgroup_tree_per_node *rtpn;
> -
> -		rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL,
> -				    node_online(node) ? node : NUMA_NO_NODE);
> -
> -		rtpn->rb_root = RB_ROOT;
> -		spin_lock_init(&rtpn->lock);
> -		soft_limit_tree.rb_tree_per_node[node] = rtpn;
> -	}
> -
>  	return 0;
>  }
>  subsys_initcall(mem_cgroup_init);
> 


* Re: [PATCH 2/2] mm/cgroup: delay soft limit data allocation
  2017-02-23  1:16   ` Balbir Singh
@ 2017-02-23  9:15     ` Laurent Dufour
  0 siblings, 0 replies; 10+ messages in thread
From: Laurent Dufour @ 2017-02-23  9:15 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, cgroups,
	linux-mm, linux-kernel

On 23/02/2017 02:16, Balbir Singh wrote:
> On Wed, Feb 22, 2017 at 04:58:11PM +0100, Laurent Dufour wrote:
>> Until a soft limit is set on a cgroup, its soft limit data is useless,
>> so delay the allocation until a limit is actually set.
>>
>> Suggested-by: Michal Hocko <mhocko@kernel.org>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
> <snip>
>> @@ -3000,6 +3035,8 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
>>  		}
>>  		break;
>>  	case RES_SOFT_LIMIT:
>> +		if (!soft_limit_initialized)
>> +			soft_limit_initialize();
> 
> What happens if this fails? Do we disable this interface?
> It's a good idea, but I wonder whether we can cope with some
> memory cgroups not supporting soft limits because memory was
> short at the time they were configured.

Thanks Balbir for the review.

Regarding this point, Michal sent a new proposal which returns -ENOMEM
in case the initialization fails. I'll send a new series along those
lines.

> 
>>  		memcg->soft_limit = nr_pages;
>>  		ret = 0;
>>  		break;
> 
> Balbir Singh.
> 


Thread overview: 10 messages
2017-02-22 15:58 [PATCH 0/2] mm/cgroup soft limit data allocation Laurent Dufour
2017-02-22 15:58 ` [PATCH 1/2] mm/cgroup: avoid panic when init with low memory Laurent Dufour
2017-02-23  1:12   ` Balbir Singh
2017-02-22 15:58 ` [PATCH 2/2] mm/cgroup: delay soft limit data allocation Laurent Dufour
2017-02-22 17:11   ` Michal Hocko
2017-02-22 17:50     ` Laurent Dufour
2017-02-22 18:24       ` Michal Hocko
2017-02-23  8:59         ` Laurent Dufour
2017-02-23  1:16   ` Balbir Singh
2017-02-23  9:15     ` Laurent Dufour
