* [PATCH] memcg: replace ss->id_lock with a rwlock
@ 2011-08-10 18:20 Andrew Bresticker
  2011-08-11  0:02 ` KAMEZAWA Hiroyuki
  2011-08-19 13:55 ` Johannes Weiner
  0 siblings, 2 replies; 5+ messages in thread
From: Andrew Bresticker @ 2011-08-10 18:20 UTC (permalink / raw)
  To: Paul Menage, Li Zefan, KAMEZAWA Hiroyuki, Ying Han
  Cc: linux-mm, Andrew Bresticker

While back-porting Johannes Weiner's patch "mm: memcg-aware global reclaim"
for an internal effort, we noticed a significant performance regression
during page-reclaim heavy workloads due to high contention on the ss->id_lock.
This lock protects the idr map, and serializes calls to idr_get_next() in
css_get_next() (which is used during the memcg hierarchy walk).  Since
idr_get_next() is just doing a lookup, we need only serialize it with
respect to idr_remove()/idr_get_new().  By making the ss->id_lock a
rwlock, contention is greatly reduced and performance improves.

Tested: cat a 256m file from a ramdisk in a 128m container 50 times
on each core (one file + container per core) in parallel on a NUMA
machine.  Result is the time for the test to complete in 1 of the
containers.  Both kernels included Johannes' memcg-aware global
reclaim patches.
Before rwlock patch: 1710.778s
After rwlock patch: 152.227s

Signed-off-by: Andrew Bresticker <abrestic@google.com>
---
 include/linux/cgroup.h |    2 +-
 kernel/cgroup.c        |   18 +++++++++---------
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index da7e4bc..1b7f9d5 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -516,7 +516,7 @@ struct cgroup_subsys {
 	struct list_head sibling;
 	/* used when use_id == true */
 	struct idr idr;
-	spinlock_t id_lock;
+	rwlock_t id_lock;
 
 	/* should be defined only by modular subsystems */
 	struct module *module;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 1d2b6ce..bc3caf0 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4880,9 +4880,9 @@ void free_css_id(struct cgroup_subsys *ss, struct cgroup_subsys_state *css)
 
 	rcu_assign_pointer(id->css, NULL);
 	rcu_assign_pointer(css->id, NULL);
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	idr_remove(&ss->idr, id->id);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 	kfree_rcu(id, rcu_head);
 }
 EXPORT_SYMBOL_GPL(free_css_id);
@@ -4908,10 +4908,10 @@ static struct css_id *get_new_cssid(struct cgroup_subsys *ss, int depth)
 		error = -ENOMEM;
 		goto err_out;
 	}
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	/* Don't use 0. allocates an ID of 1-65535 */
 	error = idr_get_new_above(&ss->idr, newid, 1, &myid);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 
 	/* Returns error when there are no free spaces for new ID.*/
 	if (error) {
@@ -4926,9 +4926,9 @@ static struct css_id *get_new_cssid(struct cgroup_subsys *ss, int depth)
 	return newid;
 remove_idr:
 	error = -ENOSPC;
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	idr_remove(&ss->idr, myid);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 err_out:
 	kfree(newid);
 	return ERR_PTR(error);
@@ -4940,7 +4940,7 @@ static int __init_or_module cgroup_init_idr(struct cgroup_subsys *ss,
 {
 	struct css_id *newid;
 
-	spin_lock_init(&ss->id_lock);
+	rwlock_init(&ss->id_lock);
 	idr_init(&ss->idr);
 
 	newid = get_new_cssid(ss, 0);
@@ -5035,9 +5035,9 @@ css_get_next(struct cgroup_subsys *ss, int id,
 		 * scan next entry from bitmap(tree), tmpid is updated after
 		 * idr_get_next().
 		 */
-		spin_lock(&ss->id_lock);
+		read_lock(&ss->id_lock);
 		tmp = idr_get_next(&ss->idr, &tmpid);
-		spin_unlock(&ss->id_lock);
+		read_unlock(&ss->id_lock);
 
 		if (!tmp)
 			break;
-- 
1.7.3.1
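
For illustration, a minimal sketch of the read side this patch speeds up,
loosely modeled on the memcg hierarchy-walk code of that era (the id,
root_css and memcg locals and the surrounding calling convention are
assumptions for the sketch, not part of the patch itself):

	struct cgroup_subsys_state *css;
	struct mem_cgroup *memcg = NULL;
	int found;

	rcu_read_lock();
	/*
	 * css_get_next() takes ss->id_lock only around idr_get_next(); with
	 * the rwlock conversion, concurrent walkers no longer serialize here.
	 */
	css = css_get_next(&mem_cgroup_subsys, id + 1, root_css, &found);
	if (css && css_tryget(css))
		memcg = container_of(css, struct mem_cgroup, css);
	rcu_read_unlock();

During reclaim-heavy workloads many CPUs sit in this lookup at once, which
is why a shared read lock helps so much here.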



* Re: [PATCH] memcg: replace ss->id_lock with a rwlock
  2011-08-10 18:20 [PATCH] memcg: replace ss->id_lock with a rwlock Andrew Bresticker
@ 2011-08-11  0:02 ` KAMEZAWA Hiroyuki
  2011-08-19 13:55 ` Johannes Weiner
  1 sibling, 0 replies; 5+ messages in thread
From: KAMEZAWA Hiroyuki @ 2011-08-11  0:02 UTC (permalink / raw)
  To: Andrew Bresticker; +Cc: Paul Menage, Li Zefan, Ying Han, linux-mm

On Wed, 10 Aug 2011 11:20:33 -0700
Andrew Bresticker <abrestic@google.com> wrote:

> While back-porting Johannes Weiner's patch "mm: memcg-aware global reclaim"
> for an internal effort, we noticed a significant performance regression
> during page-reclaim heavy workloads due to high contention of the ss->id_lock.
> This lock protects idr map, and serializes calls to idr_get_next() in
> css_get_next() (which is used during the memcg hierarchy walk).  Since
> idr_get_next() is just doing a look up, we need only serialize it with
> respect to idr_remove()/idr_get_new().  By making the ss->id_lock a
> rwlock, contention is greatly reduced and performance improves.
> 
> Tested: cat a 256m file from a ramdisk in a 128m container 50 times
> on each core (one file + container per core) in parallel on a NUMA
> machine.  Result is the time for the test to complete in 1 of the
> containers.  Both kernels included Johannes' memcg-aware global
> reclaim patches.
> Before rwlock patch: 1710.778s
> After rwlock patch: 152.227s
> 
> Signed-off-by: Andrew Bresticker <abrestic@google.com>

Ideally, the changelog should be based on the latest Linus git tree
or mmotm.  Even now, if a system has multiple memcg hierarchies, I think
the contention will still happen.

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



* Re: [PATCH] memcg: replace ss->id_lock with a rwlock
  2011-08-10 18:20 [PATCH] memcg: replace ss->id_lock with a rwlock Andrew Bresticker
  2011-08-11  0:02 ` KAMEZAWA Hiroyuki
@ 2011-08-19 13:55 ` Johannes Weiner
  2011-08-24  4:10   ` Ying Han
  1 sibling, 1 reply; 5+ messages in thread
From: Johannes Weiner @ 2011-08-19 13:55 UTC (permalink / raw)
  To: Andrew Bresticker
  Cc: Paul Menage, Li Zefan, KAMEZAWA Hiroyuki, Ying Han, linux-mm

Hello Andrew,

On Wed, Aug 10, 2011 at 11:20:33AM -0700, Andrew Bresticker wrote:
> While back-porting Johannes Weiner's patch "mm: memcg-aware global reclaim"
> for an internal effort, we noticed a significant performance regression
> during page-reclaim heavy workloads due to high contention of the ss->id_lock.
> This lock protects idr map, and serializes calls to idr_get_next() in
> css_get_next() (which is used during the memcg hierarchy walk).  Since
> idr_get_next() is just doing a look up, we need only serialize it with
> respect to idr_remove()/idr_get_new().  By making the ss->id_lock a
> rwlock, contention is greatly reduced and performance improves.
> 
> Tested: cat a 256m file from a ramdisk in a 128m container 50 times
> on each core (one file + container per core) in parallel on a NUMA
> machine.  Result is the time for the test to complete in 1 of the
> containers.  Both kernels included Johannes' memcg-aware global
> reclaim patches.
> Before rwlock patch: 1710.778s
> After rwlock patch: 152.227s

The reason there is so much more hierarchy walking going on is that
there was actually a design bug in the hierarchy reclaim.

The old code would pick one memcg and scan it at decreasing priority
levels until SWAP_CLUSTER_MAX pages were reclaimed.  For each memcg
scanned, starting at priority level 12, there were SWAP_CLUSTER_MAX
pages reclaimed.

My last revision would bail out of the whole hierarchy walk once it had
reclaimed SWAP_CLUSTER_MAX pages.  Also, at the time, small memcgs were
not force-scanned yet.  So 128m containers would force the priority
level down to 10 before scanning anything at all (the scan target being
roughly 128M / page size >> priority), and then bail out after one or
two scanned memcgs.  This means that for each SWAP_CLUSTER_MAX reclaimed
pages there was an overhead of nr_of_containers * 2 hierarchy-walk steps
to no avail.
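
As a back-of-the-envelope check of that arithmetic (assuming 4K pages and
SWAP_CLUSTER_MAX == 32; both are assumptions about the setup rather than
something stated above), a small userspace sketch:

	#include <stdio.h>

	int main(void)
	{
		unsigned long lru_pages = (128UL << 20) / 4096;	/* 128M container, 4K pages: 32768 */
		unsigned long swap_cluster_max = 32;		/* SWAP_CLUSTER_MAX */
		int priority;

		for (priority = 12; priority >= 0; priority--) {
			unsigned long scan = lru_pages >> priority;	/* rough per-priority scan target */

			printf("priority %2d: scan target %lu\n", priority, scan);
			if (scan >= swap_cluster_max)
				break;	/* first reached at priority 10: 32768 >> 10 == 32 */
		}
		return 0;
	}

So every walk at priorities 12 and 11 visits each container without
scanning a single page, which is roughly where the nr_of_containers * 2
figure above comes from.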

I changed this and removed the bail condition based on the number of
reclaimed pages.  Instead, the cycle ends when all reclaimers together
have made a full round-trip through the hierarchy.  The more cgroups
there are, the more likely it is that several tasks go into reclaim
concurrently, so it should be a reasonable share of work for each one.
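
In other words, the termination condition is now positional rather than
page-based; a rough sketch with made-up helper names (hierarchy_next(),
shrink_one_memcg()), not the actual vmscan code:

	struct mem_cgroup;
	struct mem_cgroup *hierarchy_next(struct mem_cgroup *root, struct mem_cgroup *prev);
	void shrink_one_memcg(struct mem_cgroup *memcg, int priority);

	static void shrink_hierarchy(struct mem_cgroup *root, int priority)
	{
		struct mem_cgroup *start, *iter;

		/*
		 * The iterator position is shared between reclaimers, so
		 * concurrent tasks each cover a different slice of the tree.
		 */
		start = iter = hierarchy_next(root, NULL);
		do {
			shrink_one_memcg(iter, priority);
			iter = hierarchy_next(root, iter);
		} while (iter != start);	/* stop after one full round-trip */
	}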

The number of reclaim invocations, thus the number of hierarchy walks,
is back to sane levels again and the id_lock contention should be less
of an issue.

Your patch still makes sense, but it's probably less urgent.



* Re: [PATCH] memcg: replace ss->id_lock with a rwlock
  2011-08-19 13:55 ` Johannes Weiner
@ 2011-08-24  4:10   ` Ying Han
  2011-08-24  4:12     ` Ying Han
  0 siblings, 1 reply; 5+ messages in thread
From: Ying Han @ 2011-08-24  4:10 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Bresticker, Paul Menage, Li Zefan, KAMEZAWA Hiroyuki, linux-mm

On Fri, Aug 19, 2011 at 6:55 AM, Johannes Weiner <jweiner@redhat.com> wrote:

> Hello Andrew,
>
> On Wed, Aug 10, 2011 at 11:20:33AM -0700, Andrew Bresticker wrote:
> > While back-porting Johannes Weiner's patch "mm: memcg-aware global
> reclaim"
> > for an internal effort, we noticed a significant performance regression
> > during page-reclaim heavy workloads due to high contention of the
> ss->id_lock.
> > This lock protects idr map, and serializes calls to idr_get_next() in
> > css_get_next() (which is used during the memcg hierarchy walk).  Since
> > idr_get_next() is just doing a look up, we need only serialize it with
> > respect to idr_remove()/idr_get_new().  By making the ss->id_lock a
> > rwlock, contention is greatly reduced and performance improves.
> >
> > Tested: cat a 256m file from a ramdisk in a 128m container 50 times
> > on each core (one file + container per core) in parallel on a NUMA
> > machine.  Result is the time for the test to complete in 1 of the
> > containers.  Both kernels included Johannes' memcg-aware global
> > reclaim patches.
> > Before rwlock patch: 1710.778s
> > After rwlock patch: 152.227s
>
> The reason why there is much more hierarchy walking going on is
> because there was actually a design bug in the hierarchy reclaim.
>
> The old code would pick one memcg and scan it at decreasing priority
> levels until SWAP_CLUSTER_MAX pages were reclaimed.  For each memcg
> scanned with priority level 12, there were SWAP_CLUSTER_MAX pages
> reclaimed.
>
> My last revision would bail the whole hierarchy walk once it reclaimed
> SWAP_CLUSTER_MAX.  Also, at the time, small memcgs were not
> force-scanned yet.  So 128m containers would force the priority level
> to 10 before scanning anything at all (128M / pagesize >> priority),
> and then bail after one or two scanned memcgs.  This means that for
> each SWAP_CLUSTER_MAX reclaimed pages there was a nr_of_containers * 2
> overhead of just walking the hierarchy to no avail.
>

Good point.

To make it a bit clearer: the revision which bails out of the hierarchy
walk based on nr_reclaimed is the one we are looking at right now.

>
> I changed this and removed the bail condition based on the number of
> reclaimed pages.  Instead, the cycle ends when all reclaimers together
> made a full round-trip through the hierarchy.  The more cgroups, the
> more likely that there are several tasks going into reclaim
> concurrently, it should be a reasonable share of work for each one.
>

> The number of reclaim invocations, thus the number of hierarchy walks,
> is back to sane levels again and the id_lock contention should be less
> of an issue.
>

Looking forward to seeing the change.

>
> Your patch still makes sense, but it's probably less urgent.
>

I think the patch itself makes sense regardless of the global reclaim
change.  It seems to be an optimization in general.

--Ying
