linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] Soft limit memory management bug fixes
@ 2021-02-09 20:29 Tim Chen
  2021-02-09 20:29 ` [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree Tim Chen
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Tim Chen @ 2021-02-09 20:29 UTC (permalink / raw)
  To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm, cgroups, linux-kernel

While testing tiered memory management based on the memory soft limit, I found
three issues in the mainline code's cgroup-based soft limit memory management.
This series fixes them with three patches.

Tim Chen (3):
  mm: Fix dropped memcg from mem cgroup soft limit tree
  mm: Force update of mem cgroup soft limit tree on usage excess
  mm: Fix missing mem cgroup soft limit tree updates

 mm/memcontrol.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree
  2021-02-09 20:29 [PATCH 0/3] Soft limit memory management bug fixes Tim Chen
@ 2021-02-09 20:29 ` Tim Chen
  2021-02-10  9:47   ` Michal Hocko
  2021-02-09 20:29 ` [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess Tim Chen
  2021-02-09 20:29 ` [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates Tim Chen
  2 siblings, 1 reply; 9+ messages in thread
From: Tim Chen @ 2021-02-09 20:29 UTC (permalink / raw)
  To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm, cgroups, linux-kernel

During soft limit memory reclaim, we temporarily remove the target
mem cgroup from the cgroup soft limit tree.  We then perform memory
reclaim, update the memory usage excess count and re-insert the mem
cgroup into the mem cgroup soft limit tree according to the new
memory usage excess count.

However, when memory reclaim fails for the maximum number of attempts
and we bail out of the reclaim loop, we forget to put the target mem
cgroup chosen for the next reclaim back on the soft limit tree.  This
prevents pages in that mem cgroup from being reclaimed in the future
even though the mem cgroup still exceeds its soft limit.  Fix the logic
and put the mem cgroup back on the tree when page reclaim fails for it.
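
For context, the reclaim loop in mem_cgroup_soft_limit_reclaim() looks
roughly like the sketch below (paraphrased and trimmed, not the verbatim
mainline code).  Note that selecting the next victim already removes it
from the tree, which is why bailing out with only a css_put() leaves it
off the tree:

	/* mm/memcontrol.c, mem_cgroup_soft_limit_reclaim() -- rough sketch */
	do {
		if (next_mz)
			mz = next_mz;
		else
			mz = mem_cgroup_largest_soft_limit_node(mctz);
		if (!mz)
			break;

		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, pgdat,
						    gfp_mask, &nr_scanned);
		nr_reclaimed += reclaimed;

		spin_lock_irq(&mctz->lock);
		__mem_cgroup_remove_exceeded(mz, mctz);
		next_mz = NULL;
		if (!reclaimed)
			/* picking the next victim also takes it off the tree */
			next_mz = __mem_cgroup_largest_soft_limit_node(mctz);
		excess = soft_limit_excess(mz->memcg);
		__mem_cgroup_insert_exceeded(mz, mctz, excess);	/* mz goes back on */
		spin_unlock_irq(&mctz->lock);
		css_put(&mz->memcg->css);
		/* ... loop-limit bookkeeping and bail-out checks elided ... */
	} while (!nr_reclaimed);
	if (next_mz)
		/* bail-out path: next_mz is never re-inserted -- fixed below */
		css_put(&next_mz->memcg->css);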

Reviewed-by: Ying Huang <ying.huang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 mm/memcontrol.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ed5cc78a8dbf..a51bf90732cb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3505,8 +3505,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 			loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
 			break;
 	} while (!nr_reclaimed);
-	if (next_mz)
+	if (next_mz) {
+		spin_lock_irq(&mctz->lock);
+		__mem_cgroup_insert_exceeded(next_mz, mctz, excess);
+		spin_unlock_irq(&mctz->lock);
 		css_put(&next_mz->memcg->css);
+	}
 	return nr_reclaimed;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess
  2021-02-09 20:29 [PATCH 0/3] Soft limit memory management bug fixes Tim Chen
  2021-02-09 20:29 ` [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree Tim Chen
@ 2021-02-09 20:29 ` Tim Chen
  2021-02-10  9:51   ` Michal Hocko
  2021-02-09 20:29 ` [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates Tim Chen
  2 siblings, 1 reply; 9+ messages in thread
From: Tim Chen @ 2021-02-09 20:29 UTC (permalink / raw)
  To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm, cgroups, linux-kernel

To rate limit updates to the mem cgroup soft limit tree, we only perform
updates every SOFTLIMIT_EVENTS_TARGET (defined as 1024) memory events.
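
For reference, the rate limiting works roughly as follows (a trimmed
paraphrase of mem_cgroup_event_ratelimit() in this version of
mm/memcontrol.c; details approximate): the soft limit target only fires
once the per-cpu page event counter has advanced past the previously
recorded target, so a memcg with little activity can go a long time
between tree updates.

	static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
					       enum mem_cgroup_events_target target)
	{
		unsigned long val, next;

		val = __this_cpu_read(memcg->vmstats_percpu->nr_page_events);
		next = __this_cpu_read(memcg->vmstats_percpu->targets[target]);
		/* only fire once the event count has moved past the target */
		if ((long)(next - val) < 0) {
			if (target == MEM_CGROUP_TARGET_SOFTLIMIT)
				next = val + SOFTLIMIT_EVENTS_TARGET;	/* 1024 */
			else
				next = val + THRESHOLDS_EVENTS_TARGET;
			__this_cpu_write(memcg->vmstats_percpu->targets[target], next);
			return true;
		}
		return false;
	}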

However, these sampling-based updates may miss a critical update: namely,
when a mem cgroup first exceeds its soft limit but is not yet on the soft
limit tree.  It should be on the tree at that point so it can be subjected
to soft limit page reclaim.  If the mem cgroup sees few memory events
compared with other mem cgroups, we may not update it and place it on the
mem cgroup soft limit tree for many memory events.  Its excess usage can
then creep up while the mem cgroup stays hidden from soft limit page
reclaim for a long time.

Fix this issue by forcing an update to the mem cgroup soft limit tree if a
mem cgroup has exceeded its memory soft limit but it is not on the mem
cgroup soft limit tree.

Reviewed-by: Ying Huang <ying.huang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 mm/memcontrol.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a51bf90732cb..d72449eeb85a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -985,15 +985,22 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
  */
 static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 {
+	struct mem_cgroup_per_node *mz;
+	bool force_update = false;
+
+	mz = mem_cgroup_nodeinfo(memcg, page_to_nid(page));
+	if (mz && !mz->on_tree && soft_limit_excess(mz->memcg) > 0)
+		force_update = true;
+
 	/* threshold event is triggered in finer grain than soft limit */
-	if (unlikely(mem_cgroup_event_ratelimit(memcg,
+	if (unlikely((force_update) || mem_cgroup_event_ratelimit(memcg,
 						MEM_CGROUP_TARGET_THRESH))) {
 		bool do_softlimit;
 
 		do_softlimit = mem_cgroup_event_ratelimit(memcg,
 						MEM_CGROUP_TARGET_SOFTLIMIT);
 		mem_cgroup_threshold(memcg);
-		if (unlikely(do_softlimit))
+		if (unlikely((force_update) || do_softlimit))
 			mem_cgroup_update_tree(memcg, page);
 	}
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
  2021-02-09 20:29 [PATCH 0/3] Soft limit memory management bug fixes Tim Chen
  2021-02-09 20:29 ` [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree Tim Chen
  2021-02-09 20:29 ` [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess Tim Chen
@ 2021-02-09 20:29 ` Tim Chen
  2021-02-09 22:22   ` Johannes Weiner
  2021-02-10 10:08   ` Michal Hocko
  2 siblings, 2 replies; 9+ messages in thread
From: Tim Chen @ 2021-02-09 20:29 UTC (permalink / raw)
  To: Andrew Morton, Johannes Weiner, Michal Hocko, Vladimir Davydov
  Cc: Tim Chen, Dave Hansen, Ying Huang, linux-mm, cgroups, linux-kernel

The mem cgroup soft limit tree on each node tracks how much a cgroup
has exceeded its memory soft limit and sorts the cgroups by their excess
usage.  On page release, the trees are not updated right away; instead,
updates are deferred until we have gathered a batch of pages belonging
to the same cgroup.  This reduces how often we update and lock the soft
limit tree and the associated cgroup.

However, the batch could contain pages from multiple nodes while only
one node's soft limit tree gets updated.  Change the logic so that each
batch of pages belongs to a single mem cgroup and a single memory node.
Whenever we encounter a page from a different node, flush the batch
collected so far and issue the soft limit tree update for that node.
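
The reason only one node's tree is updated today is that the batched
uncharge path passes a single representative page to memcg_check_events().
Roughly (paraphrased from uncharge_batch() in this version, trimmed):

	static void uncharge_batch(const struct uncharge_gather *ug)
	{
		unsigned long flags;

		/* ... page counter uncharging elided ... */

		local_irq_save(flags);
		__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
		__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
		/*
		 * Only dummy_page's node is considered here, so only that
		 * node's soft limit tree can be updated for the whole batch,
		 * even if the batch contained pages from several nodes.
		 */
		memcg_check_events(ug->memcg, ug->dummy_page);
		local_irq_restore(flags);
	}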

Reviewed-by: Ying Huang <ying.huang@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 mm/memcontrol.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d72449eeb85a..f5a4a0e4e2ec 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6804,6 +6804,7 @@ struct uncharge_gather {
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
 	struct page *dummy_page;
+	int nid;
 };
 
 static inline void uncharge_gather_clear(struct uncharge_gather *ug)
@@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 	 * exclusive access to the page.
 	 */
 
-	if (ug->memcg != page_memcg(page)) {
+	if (ug->memcg != page_memcg(page) ||
+	    /* uncharge batch update soft limit tree on a node basis */
+	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
 		if (ug->memcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
@@ -6869,6 +6872,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 		ug->pgpgout++;
 
 	ug->dummy_page = page;
+	ug->nid = page_to_nid(page);
 	page->memcg_data = 0;
 	css_put(&ug->memcg->css);
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
  2021-02-09 20:29 ` [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates Tim Chen
@ 2021-02-09 22:22   ` Johannes Weiner
  2021-02-09 22:34     ` Tim Chen
  2021-02-10 10:08   ` Michal Hocko
  1 sibling, 1 reply; 9+ messages in thread
From: Johannes Weiner @ 2021-02-09 22:22 UTC (permalink / raw)
  To: Tim Chen
  Cc: Andrew Morton, Michal Hocko, Vladimir Davydov, Dave Hansen,
	Ying Huang, linux-mm, cgroups, linux-kernel

Hello Tim,

On Tue, Feb 09, 2021 at 12:29:47PM -0800, Tim Chen wrote:
> @@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  	 * exclusive access to the page.
>  	 */
>  
> -	if (ug->memcg != page_memcg(page)) {
> +	if (ug->memcg != page_memcg(page) ||
> +	    /* uncharge batch update soft limit tree on a node basis */
> +	    (ug->dummy_page && ug->nid != page_to_nid(page))) {

The fix makes sense to me.

However, unconditionally breaking up the batch by node can
unnecessarily regress workloads in cgroups that do not have a soft
limit configured, and cgroup2 which doesn't have soft limits at
all. Consider an interleaving allocation policy for example.

Can you please further gate on memcg->soft_limit != PAGE_COUNTER_MAX,
or at least on !cgroup_subsys_on_dfl(memory_cgrp_subsys)?
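
Something along these lines, perhaps (untested sketch on top of this
patch; the exact condition and its placement are only a suggestion):

	if (ug->memcg != page_memcg(page) ||
	    /*
	     * Uncharge batch updates the soft limit tree on a node basis,
	     * but only split the batch by node when the memcg actually has
	     * a soft limit configured; cgroup2 has no soft limits at all.
	     */
	    (ug->dummy_page && ug->nid != page_to_nid(page) &&
	     !cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
	     ug->memcg->soft_limit != PAGE_COUNTER_MAX)) {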

Thanks

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
  2021-02-09 22:22   ` Johannes Weiner
@ 2021-02-09 22:34     ` Tim Chen
  0 siblings, 0 replies; 9+ messages in thread
From: Tim Chen @ 2021-02-09 22:34 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Vladimir Davydov, Dave Hansen,
	Ying Huang, linux-mm, cgroups, linux-kernel



On 2/9/21 2:22 PM, Johannes Weiner wrote:
> Hello Tim,
> 
> On Tue, Feb 09, 2021 at 12:29:47PM -0800, Tim Chen wrote:
>> @@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>>  	 * exclusive access to the page.
>>  	 */
>>  
>> -	if (ug->memcg != page_memcg(page)) {
>> +	if (ug->memcg != page_memcg(page) ||
>> +	    /* uncharge batch update soft limit tree on a node basis */
>> +	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
> 
> The fix makes sense to me.
> 
> However, unconditionally breaking up the batch by node can
> unnecessarily regress workloads in cgroups that do not have a soft
> limit configured, and cgroup2 which doesn't have soft limits at
> all. Consider an interleaving allocation policy for example.
> 
> Can you please further gate on memcg->soft_limit != PAGE_COUNTER_MAX,
> or at least on !cgroup_subsys_on_dfl(memory_cgrp_subsys)?
> 

Sure.  Will fix this.

Tim

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree
  2021-02-09 20:29 ` [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree Tim Chen
@ 2021-02-10  9:47   ` Michal Hocko
  0 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2021-02-10  9:47 UTC (permalink / raw)
  To: Tim Chen
  Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Dave Hansen,
	Ying Huang, linux-mm, cgroups, linux-kernel

On Tue 09-02-21 12:29:45, Tim Chen wrote:
> During soft limit memory reclaim, we temporarily remove the target
> mem cgroup from the cgroup soft limit tree.  We then perform memory
> reclaim, update the memory usage excess count and re-insert the mem
> cgroup into the mem cgroup soft limit tree according to the new
> memory usage excess count.
> 
> However, when memory reclaim fails for the maximum number of attempts
> and we bail out of the reclaim loop, we forget to put the target mem
> cgroup chosen for the next reclaim back on the soft limit tree.  This
> prevents pages in that mem cgroup from being reclaimed in the future
> even though the mem cgroup still exceeds its soft limit.  Fix the logic
> and put the mem cgroup back on the tree when page reclaim fails for it.
> 
> Reviewed-by: Ying Huang <ying.huang@intel.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>

It seems this goes all the way back to when the code was introduced by
4e41695356fb ("memory controller: soft limit reclaim on contention").
Please add a Fixes tag pointing to that commit, even though this looks
like a rare event to hit because there should usually be some
reclaimable memory.

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/memcontrol.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ed5cc78a8dbf..a51bf90732cb 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3505,8 +3505,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
>  			loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
>  			break;
>  	} while (!nr_reclaimed);
> -	if (next_mz)
> +	if (next_mz) {
> +		spin_lock_irq(&mctz->lock);
> +		__mem_cgroup_insert_exceeded(next_mz, mctz, excess);
> +		spin_unlock_irq(&mctz->lock);
>  		css_put(&next_mz->memcg->css);
> +	}
>  	return nr_reclaimed;
>  }
>  
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess
  2021-02-09 20:29 ` [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess Tim Chen
@ 2021-02-10  9:51   ` Michal Hocko
  0 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2021-02-10  9:51 UTC (permalink / raw)
  To: Tim Chen
  Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Dave Hansen,
	Ying Huang, linux-mm, cgroups, linux-kernel

On Tue 09-02-21 12:29:46, Tim Chen wrote:
> To rate limit updates to the mem cgroup soft limit tree, we only perform
> updates every SOFTLIMIT_EVENTS_TARGET (defined as 1024) memory events.
> 
> However, these sampling-based updates may miss a critical update: namely,
> when a mem cgroup first exceeds its soft limit but is not yet on the soft
> limit tree.  It should be on the tree at that point so it can be subjected
> to soft limit page reclaim.  If the mem cgroup sees few memory events
> compared with other mem cgroups, we may not update it and place it on the
> mem cgroup soft limit tree for many memory events.  Its excess usage can
> then creep up while the mem cgroup stays hidden from soft limit page
> reclaim for a long time.

Have you observed this happening in real life? I do agree that the
threshold-based updates of the tree are not ideal, but the whole soft
reclaim code is far from optimal. So why do we care only now? The
feature is essentially dead and fine-tuning it sounds like a step back
to me.

> Fix this issue by forcing an update to the mem cgroup soft limit tree if a
> mem cgroup has exceeded its memory soft limit but it is not on the mem
> cgroup soft limit tree.
> 
> Reviewed-by: Ying Huang <ying.huang@intel.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  mm/memcontrol.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a51bf90732cb..d72449eeb85a 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -985,15 +985,22 @@ static bool mem_cgroup_event_ratelimit(struct mem_cgroup *memcg,
>   */
>  static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
>  {
> +	struct mem_cgroup_per_node *mz;
> +	bool force_update = false;
> +
> +	mz = mem_cgroup_nodeinfo(memcg, page_to_nid(page));
> +	if (mz && !mz->on_tree && soft_limit_excess(mz->memcg) > 0)
> +		force_update = true;
> +
>  	/* threshold event is triggered in finer grain than soft limit */
> -	if (unlikely(mem_cgroup_event_ratelimit(memcg,
> +	if (unlikely((force_update) || mem_cgroup_event_ratelimit(memcg,
>  						MEM_CGROUP_TARGET_THRESH))) {
>  		bool do_softlimit;
>  
>  		do_softlimit = mem_cgroup_event_ratelimit(memcg,
>  						MEM_CGROUP_TARGET_SOFTLIMIT);
>  		mem_cgroup_threshold(memcg);
> -		if (unlikely(do_softlimit))
> +		if (unlikely((force_update) || do_softlimit))
>  			mem_cgroup_update_tree(memcg, page);
>  	}
>  }
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates
  2021-02-09 20:29 ` [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates Tim Chen
  2021-02-09 22:22   ` Johannes Weiner
@ 2021-02-10 10:08   ` Michal Hocko
  1 sibling, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2021-02-10 10:08 UTC (permalink / raw)
  To: Tim Chen
  Cc: Andrew Morton, Johannes Weiner, Vladimir Davydov, Dave Hansen,
	Ying Huang, linux-mm, cgroups, linux-kernel

On Tue 09-02-21 12:29:47, Tim Chen wrote:
> The mem cgroup soft limit tree on each node tracks how much a cgroup
> has exceeded its memory soft limit and sorts the cgroups by their excess
> usage.  On page release, the trees are not updated right away; instead,
> updates are deferred until we have gathered a batch of pages belonging
> to the same cgroup.  This reduces how often we update and lock the soft
> limit tree and the associated cgroup.
> 
> However, the batch could contain pages from multiple nodes while only
> one node's soft limit tree gets updated.  Change the logic so that each
> batch of pages belongs to a single mem cgroup and a single memory node.
> Whenever we encounter a page from a different node, flush the batch
> collected so far and issue the soft limit tree update for that node.

I do agree with Johannes here. This shouldn't be done unconditionally
for all memcgs. Wouldn't it be much better to do the fixup in the
mem_cgroup_soft_reclaim path instead, and simply check the excess
before doing any reclaim?

Btw. have you seen this trigger any noticeable misbehavior? I would
expect this to have a rather small effect considering how many sources
of memcg_check_events we have.

Unless I have missed something this has been introduced by 747db954cab6
("mm: memcontrol: use page lists for uncharge batching"). Please add
Fixes tag as well if this is really worth fixing.

> Reviewed-by: Ying Huang <ying.huang@intel.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> ---
>  mm/memcontrol.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d72449eeb85a..f5a4a0e4e2ec 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6804,6 +6804,7 @@ struct uncharge_gather {
>  	unsigned long pgpgout;
>  	unsigned long nr_kmem;
>  	struct page *dummy_page;
> +	int nid;
>  };
>  
>  static inline void uncharge_gather_clear(struct uncharge_gather *ug)
> @@ -6849,7 +6850,9 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  	 * exclusive access to the page.
>  	 */
>  
> -	if (ug->memcg != page_memcg(page)) {
> +	if (ug->memcg != page_memcg(page) ||
> +	    /* uncharge batch update soft limit tree on a node basis */
> +	    (ug->dummy_page && ug->nid != page_to_nid(page))) {
>  		if (ug->memcg) {
>  			uncharge_batch(ug);
>  			uncharge_gather_clear(ug);
> @@ -6869,6 +6872,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>  		ug->pgpgout++;
>  
>  	ug->dummy_page = page;
> +	ug->nid = page_to_nid(page);
>  	page->memcg_data = 0;
>  	css_put(&ug->memcg->css);
>  }
> -- 
> 2.20.1

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2021-02-10 10:23 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-09 20:29 [PATCH 0/3] Soft limit memory management bug fixes Tim Chen
2021-02-09 20:29 ` [PATCH 1/3] mm: Fix dropped memcg from mem cgroup soft limit tree Tim Chen
2021-02-10  9:47   ` Michal Hocko
2021-02-09 20:29 ` [PATCH 2/3] mm: Force update of mem cgroup soft limit tree on usage excess Tim Chen
2021-02-10  9:51   ` Michal Hocko
2021-02-09 20:29 ` [PATCH 3/3] mm: Fix missing mem cgroup soft limit tree updates Tim Chen
2021-02-09 22:22   ` Johannes Weiner
2021-02-09 22:34     ` Tim Chen
2021-02-10 10:08   ` Michal Hocko
