* [PATCH RT] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"
@ 2017-11-22 12:31 Steven Rostedt
  2017-11-23 16:22 ` Sebastian Andrzej Siewior
From: Steven Rostedt @ 2017-11-22 12:31 UTC
  To: LKML, linux-rt-users
  Cc: Sebastian Andrzej Siewior, Thomas Gleixner, Mike Galbraith

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:

       refill_stock()
          get_cpu_var()
          drain_stock()
             res_counter_uncharge()
                res_counter_uncharge_until()
                   spin_lock() <== boom

But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters") replaced
the calls to res_counter_uncharge() in drain_stock() with the lockless
function page_counter_uncharge(). There is no longer a spin lock there and
no more reason to keep that local lock.
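
For reference, a rough sketch of why the lockless path is safe here
(illustrative only: the struct, field and function names below are
approximations, not verbatim kernel source). Each level of the page
counter hierarchy is just an atomic_long_t, so uncharging is a plain
atomic subtraction per level and can run with preemption or interrupts
disabled:

	#include <linux/atomic.h>

	struct sketch_page_counter {
		atomic_long_t count;
		struct sketch_page_counter *parent;
	};

	/* Walk up the hierarchy; no lock is taken at any level. */
	static void sketch_page_counter_uncharge(struct sketch_page_counter *c,
						 unsigned long nr_pages)
	{
		for (; c; c = c->parent)
			atomic_long_sub(nr_pages, &c->count);
	}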

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 mm/memcontrol.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 25e0fd082f13..27549bf47139 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1723,7 +1723,6 @@ struct memcg_stock_pcp {
 #define FLUSHING_CACHED_CHARGE	0
 };
 static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
-static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);
 static DEFINE_MUTEX(percpu_charge_mutex);
 
 /**
@@ -1746,7 +1745,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (nr_pages > CHARGE_BATCH)
 		return ret;
 
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
@@ -1754,7 +1753,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 		ret = true;
 	}
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 
 	return ret;
 }
@@ -1785,13 +1784,13 @@ static void drain_local_stock(struct work_struct *dummy)
 	 * The only protection from memory hotplug vs. drain_stock races is
 	 * that we always operate on local CPU stock here with IRQ disabled
 	 */
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 }
 
 /*
@@ -1803,7 +1802,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
-	local_lock_irqsave(memcg_stock_ll, flags);
+	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
@@ -1815,7 +1814,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (stock->nr_pages > CHARGE_BATCH)
 		drain_stock(stock);
 
-	local_unlock_irqrestore(memcg_stock_ll, flags);
+	local_irq_restore(flags);
 }
 
 /*
-- 
2.13.6

* Re: [PATCH RT] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"
  2017-11-22 12:31 [PATCH RT] Revert "memcontrol: Prevent scheduling while atomic in cgroup code" Steven Rostedt
@ 2017-11-23 16:22 ` Sebastian Andrzej Siewior
From: Sebastian Andrzej Siewior @ 2017-11-23 16:22 UTC
  To: Steven Rostedt; +Cc: LKML, linux-rt-users, Thomas Gleixner, Mike Galbraith

On 2017-11-22 07:31:19 [-0500], Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
> 
> The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
> fixed this issue:
> 
>        refill_stock()
>           get_cpu_var()
>           drain_stock()
>              res_counter_uncharge()
>                 res_counter_uncharge_until()
>                    spin_lock() <== boom
> 
> But commit 3e32cb2e0a12b ("mm: memcontrol: lockless page counters") replaced
> the calls to res_counter_uncharge() in drain_stock() with the lockless
> function page_counter_uncharge(). There is no longer a spin lock there and
> no more reason to keep that local lock.

As of v4.9+ the reported issue should not have occurred due to the
locallocks. I am not sure why those were not used in the first place.

The loop in page_counter_uncharge() does not look like it would take
many iterations, and in the worst case we could have two invocations
with interrupts off. So okay, let's revert that.
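
To make that worst case concrete, the post-revert refill_stock() flow,
condensed from the hunk above (a paraphrase, not the exact function
body), looks roughly like this:

	local_irq_save(flags);
	stock = this_cpu_ptr(&memcg_stock);
	if (stock->cached != memcg)
		drain_stock(stock);	/* first page_counter_uncharge() walk */
	stock->nr_pages += nr_pages;
	if (stock->nr_pages > CHARGE_BATCH)
		drain_stock(stock);	/* possible second walk */
	local_irq_restore(flags);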

That upstream commit appeared in v3.19 and the patch in question in
v3.18.7-rt2, and v3.18 still seems to be maintained. So I guess that
v3.18 would need the locallocks that we are about to remove here. I am
not sure if any earlier versions have the patch backported.

> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

Sebastian
