+ mm-list_lru-optimize-memcg_reparent_list_lru_node.patch added to -mm tree
From: Andrew Morton @ 2022-03-09 21:26 UTC
To: mm-commits, songmuchun, shakeelb, roman.gushchin, mhocko, hannes,
longman, akpm
The patch titled
Subject: mm/list_lru: optimize memcg_reparent_list_lru_node()
has been added to the -mm tree. Its filename is
mm-list_lru-optimize-memcg_reparent_list_lru_node.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-list_lru-optimize-memcg_reparent_list_lru_node.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-list_lru-optimize-memcg_reparent_list_lru_node.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included in linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Waiman Long <longman@redhat.com>
Subject: mm/list_lru: optimize memcg_reparent_list_lru_node()
Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node() to be
race free"), we are tracking the total number of lru entries in a
list_lru_node in its nr_items field. In the case of
memcg_reparent_list_lru_node(), there is nothing to be done if nr_items is
0. We don't even need to take the nlru->lock as no new lru entry could be
added by a racing list_lru_add() to the draining src_idx memcg at this
point.
On systems that serve a lot of containers, there can be thousands of
list_lrus present because each container may mount its own
container-specific filesystems. Since a typical container uses only a few
cpus, it is likely that only the list_lru_node containing those cpus will
be utilized while the rest stay empty. In other words, there can be many
list_lru_nodes with a nr_items of 0. Skipping the lock/unlock operation
and the load of a cacheline from memcg_lrus saves a sizeable number of cpu
cycles per node; across thousands of empty list_lru_nodes, the savings can
be substantial.
Link: https://lkml.kernel.org/r/20220309144000.1470138-1-longman@redhat.com
Signed-off-by: Waiman Long <longman@redhat.com>
Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/list_lru.c | 6 ++++++
1 file changed, 6 insertions(+)
--- a/mm/list_lru.c~mm-list_lru-optimize-memcg_reparent_list_lru_node
+++ a/mm/list_lru.c
@@ -519,6 +519,12 @@ static void memcg_drain_list_lru_node(st
 	struct list_lru_one *src, *dst;
 
 	/*
+	 * If there is no lru entry in this nlru, we can skip it immediately.
+	 */
+	if (!READ_ONCE(nlru->nr_items))
+		return;
+
+	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
 	 * we have to use IRQ-safe primitives here to avoid deadlock.
 	 */
_
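For readers who want to exercise the pattern from the hunk above outside
the kernel, below is a minimal userspace analogue. All names are
hypothetical: a pthread mutex stands in for nlru->lock and a volatile read
approximates READ_ONCE(). This is a sketch of the technique, not the
kernel implementation.

#include <pthread.h>
#include <stdio.h>

/* Rough userspace stand-in for the kernel's READ_ONCE(). */
#define read_once(x)	(*(volatile long *)&(x))

/* Hypothetical analogue of a struct list_lru_node. */
struct node {
	pthread_mutex_t	lock;		/* stands in for nlru->lock */
	long		nr_items;	/* total entries on this node */
};

/* Writers always update nr_items under the lock. */
static void node_add(struct node *n)
{
	pthread_mutex_lock(&n->lock);
	n->nr_items++;
	pthread_mutex_unlock(&n->lock);
}

/*
 * Drain pass: the lockless read lets an empty node be skipped without
 * touching its lock, mirroring the early return added by the patch.
 * This is only safe when, as in the patch, no racing add can repopulate
 * a node that is being drained.
 */
static void node_drain(struct node *n)
{
	if (!read_once(n->nr_items))
		return;			/* empty: skip lock/unlock entirely */

	pthread_mutex_lock(&n->lock);
	/* ... reparent the entries, then ... */
	n->nr_items = 0;
	pthread_mutex_unlock(&n->lock);
}

int main(void)
{
	struct node n = { PTHREAD_MUTEX_INITIALIZER, 0 };

	node_drain(&n);			/* fast path: no lock taken */
	node_add(&n);
	node_drain(&n);			/* slow path: drains one item */
	printf("nr_items after drain: %ld\n", n.nr_items);
	return 0;
}

Build with "cc -pthread" and run; the first drain never touches the lock,
which is the entire point of the patch.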
Patches currently in -mm which might be from longman@redhat.com are
mm-list_lru-optimize-memcg_reparent_list_lru_node.patch
lib-vsprintf-avoid-redundant-work-with-0-size.patch
mm-page_owner-use-scnprintf-to-avoid-excessive-buffer-overrun-check.patch
mm-page_owner-print-memcg-information.patch
mm-page_owner-record-task-command-name.patch
ipc-mqueue-use-get_tree_nodev-in-mqueue_get_tree.patch