From: Michal Hocko <mhocko@suse.cz>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>, Johannes Weiner <hannes@cmpxchg.org>, Ying Han <yinghan@google.com>, Tejun Heo <htejun@gmail.com>, Glauber Costa <glommer@parallels.com>, Li Zefan <lizefan@huawei.com>
Subject: [PATCH v3 1/7] memcg: synchronize per-zone iterator access by a spinlock
Date: Thu, 3 Jan 2013 18:54:15 +0100
Message-ID: <1357235661-29564-2-git-send-email-mhocko@suse.cz>
In-Reply-To: <1357235661-29564-1-git-send-email-mhocko@suse.cz>

The per-zone per-priority iterator is aimed at coordinating concurrent
reclaimers on the same hierarchy (or the global reclaim, when all groups
are reclaimed) so that all groups get reclaimed as evenly as possible.
iter->position holds the last css->id visited and iter->generation
signals a completed tree walk (when it is incremented). Concurrent
reclaimers are supposed to provide a reclaim cookie which holds the
reclaim priority and the last generation they saw. If the cookie's
generation doesn't match the iterator's view, then another concurrent
reclaimer has already done the job and the tree walk is done for that
priority.

This scheme works nicely in most cases but it is not race-free. Two
racing reclaimers can see the same iter->position and so bang on the
same group. The iter->generation increment is not serialized either, so
a reclaimer can see an updated iter->position with an old generation and
the iteration might be restarted from the root of the hierarchy.

The simplest way to fix this issue is to synchronize access to the
iterator by a lock. This implementation uses a per-zone per-priority
spinlock which serializes only directly racing reclaimers which use
reclaim cookies, so the effect of the new locking should be really
minimal.

I have to note that I haven't seen this as a real issue so far. The
primary motivation for the change is different.
The following patch will change the way the iterator is implemented:
css->id iteration will be replaced by the generic cgroup iteration,
which requires storing a mem_cgroup pointer in the iterator, and that in
turn requires reference counting, so concurrent access would become a
problem.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1ea8951..e71cfde 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -148,6 +148,8 @@ struct mem_cgroup_reclaim_iter {
 	int position;
 	/* scan generation, increased every round-trip */
 	unsigned int generation;
+	/* lock to protect the position and generation */
+	spinlock_t iter_lock;
 };

 /*
@@ -1161,8 +1163,11 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
 		mz = mem_cgroup_zoneinfo(root, nid, zid);
 		iter = &mz->reclaim_iter[reclaim->priority];
-		if (prev && reclaim->generation != iter->generation)
+		spin_lock(&iter->iter_lock);
+		if (prev && reclaim->generation != iter->generation) {
+			spin_unlock(&iter->iter_lock);
 			return NULL;
+		}
 		id = iter->position;
 	}

@@ -1181,6 +1186,7 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
 			iter->generation++;
 		else if (!prev && memcg)
 			reclaim->generation = iter->generation;
+		spin_unlock(&iter->iter_lock);
 	}

 	if (prev && !css)
@@ -6051,8 +6057,12 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 		return 1;

 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+		int prio;
+
 		mz = &pn->zoneinfo[zone];
 		lruvec_init(&mz->lruvec);
+		for (prio = 0; prio < DEF_PRIORITY + 1; prio++)
+			spin_lock_init(&mz->reclaim_iter[prio].iter_lock);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;
--
1.7.10.4