From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 301B16453
	for ; Tue, 22 Mar 2022 21:40:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E99DFC340F2;
	Tue, 22 Mar 2022 21:40:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1647985251;
	bh=NU1VSnI1iWmG1jYFEAa+FvjnPN15ib16sIz1RW1jo5M=;
	h=Date:To:From:In-Reply-To:Subject:From;
	b=dDSntCKGoEcn46fN1WVD19tij1XD8O0XMxW9ZIXgiwt8wdmfNBB7hafBKf/WrCzAp
	 0vouDlpAHLvTEk3bdlSbaITcyH5WY6BkvUrX8h7z08pX/IwfjCTPP9A5DDMabCkLvy
	 6Q9RZxHeVyvi43r7fGc4/QXwt6yBoCurfjQ/PE3s=
Date: Tue, 22 Mar 2022 14:40:50 -0700
To: vdavydov.dev@gmail.com, tglx@linutronix.de, shakeelb@google.com,
	roman.gushchin@linux.dev, peterz@infradead.org, oliver.sang@intel.com,
	mkoutny@suse.com, mhocko@suse.com, mhocko@kernel.org,
	longman@redhat.com, hannes@cmpxchg.org, bigeasy@linutronix.de,
	akpm@linux-foundation.org, patches@lists.linux.dev, linux-mm@kvack.org,
	mm-commits@vger.kernel.org, torvalds@linux-foundation.org,
	akpm@linux-foundation.org
From: Andrew Morton
In-Reply-To: <20220322143803.04a5e59a07e48284f196a2f9@linux-foundation.org>
Subject: [patch 046/227] mm/memcg: disable migration instead of preemption in drain_all_stock().
Message-Id: <20220322214050.E99DFC340F2@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:

From: Sebastian Andrzej Siewior
Subject: mm/memcg: disable migration instead of preemption in drain_all_stock().

Before the for-each-CPU loop, preemption is disabled so that
drain_local_stock() can be invoked directly instead of scheduling a
worker.  Ensuring that drain_local_stock() completes on the local CPU is
not a correctness problem.  It _could_ be that the charging path will be
forced to reclaim memory because cached charges are still waiting for
their draining.

Disabling preemption before invoking drain_local_stock() is problematic
on PREEMPT_RT due to the sleeping locks involved.  To ensure that no CPU
migration happens across for_each_online_cpu() it is enough to use
migrate_disable(), which disables migration but keeps the context
preemptible so that a sleeping lock can be acquired.

A race with CPU hotplug is not a problem because pcp data is not going
away.  In the worst case we just schedule draining of an empty stock.

Use migrate_disable() instead of get_cpu() around the
for_each_online_cpu() loop.
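For illustration, the loop after this change follows roughly the sketch
below.  This is a simplified sketch only, not the exact mm/memcontrol.c
code: the function name drain_all_stock_sketch is made up, and the
FLUSHING_CACHED_CHARGE and per-memcg descendant checks guarding the
drain are elided.  It assumes the surrounding mm/memcontrol.c
definitions (the memcg_stock per-CPU variable, struct memcg_stock_pcp,
and drain_local_stock()).

/*
 * Simplified sketch of the pattern: migrate_disable() pins the task
 * to its current CPU, so smp_processor_id() stays stable across the
 * loop, while the context remains preemptible and may still acquire
 * sleeping locks (as used on PREEMPT_RT).
 */
static void drain_all_stock_sketch(void)
{
	int cpu, curcpu;

	migrate_disable();			/* no migration, still preemptible */
	curcpu = smp_processor_id();		/* stable: we cannot migrate away */
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		if (cpu == curcpu)
			drain_local_stock(&stock->work);	/* run directly here */
		else
			schedule_work_on(cpu, &stock->work);	/* defer to that CPU */
	}
	migrate_enable();
}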
Link: https://lkml.kernel.org/r/20220226204144.1008339-7-bigeasy@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior
Acked-by: Michal Hocko
Cc: Johannes Weiner
Cc: kernel test robot
Cc: Michal Hocko
Cc: Michal Koutný
Cc: Peter Zijlstra
Cc: Roman Gushchin
Cc: Shakeel Butt
Cc: Thomas Gleixner
Cc: Vladimir Davydov
Cc: Waiman Long
Signed-off-by: Andrew Morton
---

 mm/memcontrol.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/mm/memcontrol.c~mm-memcg-disable-migration-instead-of-preemption-in-drain_all_stock
+++ a/mm/memcontrol.c
@@ -2300,7 +2300,8 @@ static void drain_all_stock(struct mem_c
 	 * as well as workers from this path always operate on the local
 	 * per-cpu data. CPU up doesn't touch memcg_stock at all.
 	 */
-	curcpu = get_cpu();
+	migrate_disable();
+	curcpu = smp_processor_id();
 	for_each_online_cpu(cpu) {
 		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
 		struct mem_cgroup *memcg;
@@ -2323,7 +2324,7 @@ static void drain_all_stock(struct mem_c
 			schedule_work_on(cpu, &stock->work);
 		}
 	}
-	put_cpu();
+	migrate_enable();
 	mutex_unlock(&percpu_charge_mutex);
 }
_