From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: cgroups@vger.kernel.org, linux-mm@kvack.org
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Michal Koutný" <mkoutny@suse.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Vladimir Davydov" <vdavydov.dev@gmail.com>,
	"Waiman Long" <longman@redhat.com>,
	"Sebastian Andrzej Siewior" <bigeasy@linutronix.de>
Subject: [PATCH 4/4] mm/memcg: Allow the task_obj optimization only on non-PREEMPTIBLE kernels.
Date: Tue, 25 Jan 2022 17:43:37 +0100
Message-ID: <20220125164337.2071854-5-bigeasy@linutronix.de>
In-Reply-To: <20220125164337.2071854-1-bigeasy@linutronix.de>

Based on my understanding, the task_obj optimisation for in_task()
callers makes sense only on non-PREEMPTIBLE kernels, where
preempt_disable()/enable() is optimized away. The optimisation could
therefore be restricted to !CONFIG_PREEMPTION kernels instead of being
disabled only on PREEMPT_RT.
With CONFIG_PREEMPT_DYNAMIC a non-PREEMPTIBLE kernel can also be
configured, but such kernels always have preempt_disable()/enable()
compiled in, so the optimisation probably makes no sense there either.
I ran a micro benchmark with interrupts disabled and a loop of
100,000,000 invocations of kfree(kmalloc()). Based on the results it
makes no sense to add an exception based on dynamic preemption.

Restrict the optimisation to !CONFIG_PREEMPTION kernels.
Link: https://lore.kernel.org/all/YdX+INO9gQje6d0S@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 mm/memcontrol.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d8be88c00888..20ea8f28ad99b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2030,7 +2030,7 @@ struct memcg_stock_pcp {
 	local_lock_t stock_lock;
 	struct mem_cgroup *cached; /* this never be root cgroup */
 	unsigned int nr_pages;
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	/* Protects only task_obj */
 	local_lock_t task_obj_lock;
 	struct obj_stock task_obj;
@@ -2043,7 +2043,7 @@ struct memcg_stock_pcp {
 };
 static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock) = {
 	.stock_lock = INIT_LOCAL_LOCK(stock_lock),
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	.task_obj_lock = INIT_LOCAL_LOCK(task_obj_lock),
 #endif
 };
@@ -2132,7 +2132,7 @@ static void drain_local_stock(struct work_struct *dummy)
 	 * drain_stock races is that we always operate on local CPU stock
	 * here with IRQ disabled
	 */
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
	local_lock(&memcg_stock.task_obj_lock);
	old = drain_obj_stock(&this_cpu_ptr(&memcg_stock)->task_obj, NULL);
	local_unlock(&memcg_stock.task_obj_lock);
@@ -2741,7 +2741,7 @@ static inline struct obj_stock *get_obj_stock(unsigned long *pflags,
 {
	struct memcg_stock_pcp *stock;

-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
	if (likely(in_task())) {
		*pflags = 0UL;
		*stock_lock_acquried = false;
@@ -2759,7 +2759,7 @@ static inline struct obj_stock *get_obj_stock(unsigned long *pflags,

 static inline void put_obj_stock(unsigned long flags, bool stock_lock_acquried)
 {
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
	if (likely(!stock_lock_acquried)) {
		local_unlock(&memcg_stock.task_obj_lock);
		return;
@@ -3177,7 +3177,7 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 {
	struct mem_cgroup *memcg;

-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
	if (in_task() && stock->task_obj.cached_objcg) {
		memcg = obj_cgroup_memcg(stock->task_obj.cached_objcg);
		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
--
2.34.1