* [PATCH 1/4] cgroup: remove redundant get/put of old css_set from migrate
@ 2011-12-22 4:18 ` Mandeep Singh Baines
0 siblings, 0 replies; 30+ messages in thread
From: Mandeep Singh Baines @ 2011-12-22 4:18 UTC (permalink / raw)
To: Li Zefan, linux-kernel
Cc: Frederic Weisbecker, Mandeep Singh Baines, Oleg Nesterov,
Paul Menage, Tejun Heo, cgroups, Andrew Morton, containers
We can now assume that the css_set reference held by the task
will not go away even for an exiting task: PF_EXITING state can
be trusted throughout migration because we check it after locking
the threadgroup.
Changes in V4:
* https://lkml.org/lkml/2011/12/20/368 (Tejun Heo)
* Fix typo in commit message
* Undid the rename of css_set_check_fetched
* https://lkml.org/lkml/2011/12/20/427 (Li Zefan)
* Fix comment in cgroup_task_migrate()
Changes in V3:
* https://lkml.org/lkml/2011/12/20/255 (Frederic Weisbecker)
* Fixed to put error in retval
Changes in V2:
* https://lkml.org/lkml/2011/12/19/289 (Tejun Heo)
* Updated commit message
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: containers@lists.linux-foundation.org
Cc: cgroups@vger.kernel.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Menage <paul@paulmenage.org>
---
kernel/cgroup.c | 28 ++++++++--------------------
1 files changed, 8 insertions(+), 20 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 4936d88..8228808 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1850,14 +1850,12 @@ static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
struct css_set *newcg;
/*
- * get old css_set. We are synchronized through threadgroup_lock()
- * against PF_EXITING setting such that we can't race against
- * cgroup_exit() changing the css_set to init_css_set and dropping the
- * old one.
+ * We are synchronized through threadgroup_lock() against PF_EXITING
+ * setting such that we can't race against cgroup_exit() changing the
+ * css_set to init_css_set and dropping the old one.
*/
WARN_ON_ONCE(tsk->flags & PF_EXITING);
oldcg = tsk->cgroups;
- get_css_set(oldcg);
/* locate or allocate a new css_set for this task. */
if (guarantee) {
@@ -1872,12 +1870,9 @@ static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
might_sleep();
/* find_css_set will give us newcg already referenced. */
newcg = find_css_set(oldcg, cgrp);
- if (!newcg) {
- put_css_set(oldcg);
+ if (!newcg)
return -ENOMEM;
- }
}
- put_css_set(oldcg);
task_lock(tsk);
rcu_assign_pointer(tsk->cgroups, newcg);
@@ -2186,18 +2181,11 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
* init_css_set.
*/
oldcg = tc->task->cgroups;
- get_css_set(oldcg);
- /* see if the new one for us is already in the list? */
- if (css_set_check_fetched(cgrp, tc->task, oldcg, &newcg_list)) {
- /* was already there, nothing to do. */
- put_css_set(oldcg);
- } else {
- /* we don't already have it. get new one. */
- retval = css_set_prefetch(cgrp, oldcg, &newcg_list);
- put_css_set(oldcg);
- if (retval)
+
+ /* if we don't already have it in the list, get a new one */
+ if (!css_set_check_fetched(cgrp, tc->task, oldcg, &newcg_list))
+ if ((retval = css_set_prefetch(cgrp, oldcg, &newcg_list)))
goto out_list_teardown;
- }
}
/*
--
1.7.3.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
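The reasoning in the patch above — threadgroup_lock() excludes the cgroup_exit() path, so the task's own css_set reference pins oldcg and the extra get_css_set()/put_css_set() pair adds nothing — can be sketched with a small userspace model. All names here are illustrative stand-ins, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a refcounted css_set and a task owning one reference to it. */
struct css_set_model { int refcount; };
struct task_model {
    struct css_set_model *cgroups;
    bool threadgroup_locked;      /* stands in for threadgroup_lock() */
};

/* Models cgroup_exit(): swap the task to init_css_set and drop the old
 * reference -- but this path cannot run while the threadgroup is locked. */
bool model_cgroup_exit(struct task_model *t, struct css_set_model *init_set)
{
    if (t->threadgroup_locked)
        return false;             /* excluded by the lock */
    t->cgroups->refcount--;
    init_set->refcount++;
    t->cgroups = init_set;
    return true;
}

/* Migration under the lock: the task's own reference keeps oldcg alive,
 * so reading it needs no extra get/put pair. */
struct css_set_model *model_migrate_read(struct task_model *t)
{
    assert(t->threadgroup_locked);   /* caller holds threadgroup_lock() */
    return t->cgroups;               /* stable while locked */
}
```

With the lock held, `model_cgroup_exit()` cannot drop the task's reference, which is exactly why the removed get/put pair was redundant.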
* [PATCH 2/4] cgroup: remove redundant get/put of task struct
From: Mandeep Singh Baines @ 2011-12-22 4:18 UTC (permalink / raw)
To: Li Zefan, linux-kernel
Cc: Frederic Weisbecker, Mandeep Singh Baines, Oleg Nesterov,
Paul Menage, Tejun Heo, cgroups, Andrew Morton, containers
threadgroup_lock() guarantees that the target threadgroup will
remain stable - no new task will be added, no new PF_EXITING
will be set and exec won't happen.
Changes in V2:
* https://lkml.org/lkml/2011/12/20/369 (Tejun Heo)
* Undo incorrect removal of get/put from attach_task_by_pid()
* Author
* Remove a comment which is made stale by this change
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: containers@lists.linux-foundation.org
Cc: cgroups@vger.kernel.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Menage <paul@paulmenage.org>
---
kernel/cgroup.c | 11 ++---------
1 files changed, 2 insertions(+), 9 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 8228808..a85a700 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2116,7 +2116,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
retval = -EAGAIN;
goto out_free_group_list;
}
- /* take a reference on each task in the group to go in the array. */
+
tsk = leader;
i = nr_migrating_tasks = 0;
do {
@@ -2128,7 +2128,6 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
/* as per above, nr_threads may decrease, but not increase. */
BUG_ON(i >= group_size);
- get_task_struct(tsk);
/*
* saying GFP_ATOMIC has no effect here because we did prealloc
* earlier, but it's good form to communicate our expectations.
@@ -2150,7 +2149,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
/* methods shouldn't be called if no task is actually migrating */
retval = 0;
if (!nr_migrating_tasks)
- goto out_put_tasks;
+ goto out_free_group_list;
/*
* step 1: check that we can legitimately attach to the cgroup.
@@ -2234,12 +2233,6 @@ out_cancel_attach:
ss->cancel_attach(ss, cgrp, &tset);
}
}
-out_put_tasks:
- /* clean up the array of referenced threads in the group. */
- for (i = 0; i < group_size; i++) {
- tc = flex_array_get(group, i);
- put_task_struct(tc->task);
- }
out_free_group_list:
flex_array_free(group);
return retval;
--
1.7.3.1
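With no per-task get_task_struct(), the only cleanup left on every exit path is freeing the array itself, which is why `out_put_tasks` collapses into `out_free_group_list` above. A userspace sketch of the resulting control flow (names are illustrative):

```c
#include <stdlib.h>

struct tac_model { int task; int cgrp; };

/* Mirrors the post-patch flow of cgroup_attach_proc(): build the array,
 * then bail out through the single remaining cleanup label when nothing
 * migrates.  Returns the number of migrating tasks, or -1 on allocation
 * failure. */
int attach_proc_model(const int *task_cgrp, int n, int target_cgrp)
{
    struct tac_model *group;
    int i, retval, nr_migrating_tasks = 0;

    group = calloc(n, sizeof(*group));   /* models flex_array_alloc() */
    if (!group)
        return -1;

    for (i = 0; i < n; i++) {
        group[i].task = i;
        group[i].cgrp = task_cgrp[i];
        if (task_cgrp[i] != target_cgrp)
            nr_migrating_tasks++;
    }

    retval = nr_migrating_tasks;
    if (!nr_migrating_tasks)
        goto out_free_group_list;        /* was: goto out_put_tasks */

    /* ... attach/migrate steps would run here ... */

out_free_group_list:
    free(group);                         /* the single remaining cleanup */
    return retval;
}
```

Because no references are taken per entry, there is no put loop to run before freeing, so one label serves every exit.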
* [PATCH 3/4] cgroup: only need to check oldcgrp==newgrp once
From: Mandeep Singh Baines @ 2011-12-22 4:18 UTC (permalink / raw)
To: Li Zefan, linux-kernel
Cc: Frederic Weisbecker, Mandeep Singh Baines, Oleg Nesterov,
Paul Menage, Tejun Heo, cgroups, Andrew Morton, containers
In cgroup_attach_proc it is now sufficient to only check that
oldcgrp==newcgrp once. Now that we are using threadgroup_lock()
during the migrations, oldcgrp will not change.
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: containers@lists.linux-foundation.org
Cc: cgroups@vger.kernel.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul Menage <paul@paulmenage.org>
---
kernel/cgroup.c | 22 ++++++----------------
1 files changed, 6 insertions(+), 16 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index a85a700..1042b3c 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2067,7 +2067,7 @@ static int css_set_prefetch(struct cgroup *cgrp, struct css_set *cg,
*/
int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
{
- int retval, i, group_size, nr_migrating_tasks;
+ int retval, i, group_size;
struct cgroup_subsys *ss, *failed_ss = NULL;
/* guaranteed to be initialized later, but the compiler needs this */
struct css_set *oldcg;
@@ -2118,7 +2118,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
}
tsk = leader;
- i = nr_migrating_tasks = 0;
+ i = 0;
do {
struct task_and_cgroup ent;
@@ -2134,11 +2134,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
*/
ent.task = tsk;
ent.cgrp = task_cgroup_from_root(tsk, root);
+ /* nothing to do if this task is already in the cgroup */
+ if (ent.cgrp == cgrp)
+ continue;
retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
BUG_ON(retval != 0);
i++;
- if (ent.cgrp != cgrp)
- nr_migrating_tasks++;
} while_each_thread(leader, tsk);
/* remember the number of threads in the array for later. */
group_size = i;
@@ -2148,7 +2149,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
/* methods shouldn't be called if no task is actually migrating */
retval = 0;
- if (!nr_migrating_tasks)
+ if (!group_size)
goto out_free_group_list;
/*
@@ -2171,14 +2172,6 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
INIT_LIST_HEAD(&newcg_list);
for (i = 0; i < group_size; i++) {
tc = flex_array_get(group, i);
- /* nothing to do if this task is already in the cgroup */
- if (tc->cgrp == cgrp)
- continue;
- /*
- * get old css_set pointer. threadgroup is locked so this is
- * safe against concurrent cgroup_exit() changing this to
- * init_css_set.
- */
oldcg = tc->task->cgroups;
/* if we don't already have it in the list get a new one */
@@ -2194,9 +2187,6 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
*/
for (i = 0; i < group_size; i++) {
tc = flex_array_get(group, i);
- /* leave current thread as it is if it's already there */
- if (tc->cgrp == cgrp)
- continue;
retval = cgroup_task_migrate(cgrp, tc->cgrp, tc->task, true);
BUG_ON(retval);
}
--
1.7.3.1
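The patch above moves the oldcgrp == newcgrp test to the single point where the array is built, so every later pass can iterate unconditionally. A minimal sketch of that filter (illustrative names):

```c
/* Build the migration array, skipping tasks already in the target cgroup.
 * Returns group_size; later loops (prefetch, migrate) then walk the whole
 * array without re-checking, as in the patched cgroup_attach_proc(). */
int build_group_model(const int *task_cgrp, int n, int target_cgrp,
                      int *group)
{
    int i, group_size = 0;

    for (i = 0; i < n; i++) {
        if (task_cgrp[i] == target_cgrp)
            continue;                 /* filtered once, here */
        group[group_size++] = i;      /* record the task's index */
    }
    return group_size;
}
```

Filtering at build time is safe precisely because threadgroup_lock() keeps each task's cgroup stable, so a check done once cannot be invalidated later.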
* [PATCH 4/4] cgroup: remove extra calls to find_existing_css_set
From: Mandeep Singh Baines @ 2011-12-22 4:18 UTC (permalink / raw)
To: Li Zefan, linux-kernel-u79uwXL29TY76Z2rM5mHXA
Cc: Frederic Weisbecker, Mandeep Singh Baines, Oleg Nesterov,
Paul Menage, Tejun Heo, cgroups-u79uwXL29TY76Z2rM5mHXA,
Andrew Morton,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
In cgroup_attach_proc, we indirectly call find_existing_css_set three
times. It is an expensive call, so we want to call it as few times as
possible. This patch calls it only once and stores the result so that
it can be used later when we call cgroup_task_migrate.
This required modifying cgroup_task_migrate to take the new css_set
(which we obtained from find_css_set) as a parameter. The nice side
effect of this is that cgroup_task_migrate is now identical for
cgroup_attach_task and cgroup_attach_proc. It also now returns void,
since it can never fail.
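The approach can be sketched in plain C: acquire one referenced css_set per task while building the array, and have the commit phase consume those stored references so it cannot fail. All names below are illustrative userspace stand-ins for the kernel objects, not the real API:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel objects; illustrative only. */
struct css_set { int refcount; };
struct entry { struct css_set *cg; };

static int fail_at = -1;	/* knob to simulate allocation failure */

/* find_css_set() analogue: returns an already-referenced set, or NULL. */
static struct css_set *get_new_css_set(int i)
{
	struct css_set *cg;

	if (i == fail_at)
		return NULL;
	cg = malloc(sizeof(*cg));
	if (cg)
		cg->refcount = 1;
	return cg;
}

/*
 * Acquire every css_set up front, once per task.  On failure, put back
 * the references taken so far.  Because the commit phase only consumes
 * pointers stored here, it can never fail -- which is why
 * cgroup_task_migrate() can return void after this change.
 */
static int prefetch_css_sets(struct entry *group, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		group[i].cg = get_new_css_set(i);
		if (!group[i].cg) {
			while (i--)
				free(group[i].cg);	/* put_css_set() */
			return -1;			/* -ENOMEM */
		}
	}
	return 0;
}
```

The error path mirrors the patch's out_list_teardown label: on failure, only the entries already filled are unwound.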
Changes in V2:
* https://lkml.org/lkml/2011/12/20/372 (Tejun Heo)
* Move find_css_set call into loop which creates the flex array
* Author
* Kill css_set_refs and use group_size instead
* Fix an off-by-one error in counting css_set refs
* Add a retval check in out_list_teardown
Signed-off-by: Mandeep Singh Baines <msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
Cc: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
Cc: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
Cc: Frederic Weisbecker <fweisbec-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Oleg Nesterov <oleg-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
Cc: Paul Menage <paul-inf54ven1CmVyaH7bEyXVA@public.gmane.org>
---
kernel/cgroup.c | 152 ++++++++++++-------------------------------------------
1 files changed, 32 insertions(+), 120 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 1042b3c..242d89f 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1763,6 +1763,7 @@ EXPORT_SYMBOL_GPL(cgroup_path);
struct task_and_cgroup {
struct task_struct *task;
struct cgroup *cgrp;
+ struct css_set *cg;
};
struct cgroup_taskset {
@@ -1843,11 +1844,10 @@ EXPORT_SYMBOL_GPL(cgroup_taskset_size);
* will already exist. If not set, this function might sleep, and can fail with
* -ENOMEM. Must be called with cgroup_mutex and threadgroup locked.
*/
-static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
- struct task_struct *tsk, bool guarantee)
+static void cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
+ struct task_struct *tsk, struct css_set *newcg)
{
struct css_set *oldcg;
- struct css_set *newcg;
/*
* We are synchronized through threadgroup_lock() against PF_EXITING
@@ -1857,23 +1857,6 @@ static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
WARN_ON_ONCE(tsk->flags & PF_EXITING);
oldcg = tsk->cgroups;
- /* locate or allocate a new css_set for this task. */
- if (guarantee) {
- /* we know the css_set we want already exists. */
- struct cgroup_subsys_state *template[CGROUP_SUBSYS_COUNT];
- read_lock(&css_set_lock);
- newcg = find_existing_css_set(oldcg, cgrp, template);
- BUG_ON(!newcg);
- get_css_set(newcg);
- read_unlock(&css_set_lock);
- } else {
- might_sleep();
- /* find_css_set will give us newcg already referenced. */
- newcg = find_css_set(oldcg, cgrp);
- if (!newcg)
- return -ENOMEM;
- }
-
task_lock(tsk);
rcu_assign_pointer(tsk->cgroups, newcg);
task_unlock(tsk);
@@ -1892,7 +1875,6 @@ static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
put_css_set(oldcg);
set_bit(CGRP_RELEASABLE, &oldcgrp->flags);
- return 0;
}
/**
@@ -1910,6 +1892,7 @@ int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk)
struct cgroup *oldcgrp;
struct cgroupfs_root *root = cgrp->root;
struct cgroup_taskset tset = { };
+ struct css_set *newcg;
/* @tsk either already exited or can't exit until the end */
if (tsk->flags & PF_EXITING)
@@ -1939,9 +1922,13 @@ int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk)
}
}
- retval = cgroup_task_migrate(cgrp, oldcgrp, tsk, false);
- if (retval)
+ newcg = find_css_set(tsk->cgroups, cgrp);
+ if (!newcg) {
+ retval = -ENOMEM;
goto out;
+ }
+
+ cgroup_task_migrate(cgrp, oldcgrp, tsk, newcg);
for_each_subsys(root, ss) {
if (ss->attach)
@@ -1997,66 +1984,6 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
}
EXPORT_SYMBOL_GPL(cgroup_attach_task_all);
-/*
- * cgroup_attach_proc works in two stages, the first of which prefetches all
- * new css_sets needed (to make sure we have enough memory before committing
- * to the move) and stores them in a list of entries of the following type.
- * TODO: possible optimization: use css_set->rcu_head for chaining instead
- */
-struct cg_list_entry {
- struct css_set *cg;
- struct list_head links;
-};
-
-static bool css_set_check_fetched(struct cgroup *cgrp,
- struct task_struct *tsk, struct css_set *cg,
- struct list_head *newcg_list)
-{
- struct css_set *newcg;
- struct cg_list_entry *cg_entry;
- struct cgroup_subsys_state *template[CGROUP_SUBSYS_COUNT];
-
- read_lock(&css_set_lock);
- newcg = find_existing_css_set(cg, cgrp, template);
- read_unlock(&css_set_lock);
-
- /* doesn't exist at all? */
- if (!newcg)
- return false;
- /* see if it's already in the list */
- list_for_each_entry(cg_entry, newcg_list, links)
- if (cg_entry->cg == newcg)
- return true;
-
- /* not found */
- return false;
-}
-
-/*
- * Find the new css_set and store it in the list in preparation for moving the
- * given task to the given cgroup. Returns 0 or -ENOMEM.
- */
-static int css_set_prefetch(struct cgroup *cgrp, struct css_set *cg,
- struct list_head *newcg_list)
-{
- struct css_set *newcg;
- struct cg_list_entry *cg_entry;
-
- /* ensure a new css_set will exist for this thread */
- newcg = find_css_set(cg, cgrp);
- if (!newcg)
- return -ENOMEM;
- /* add it to the list */
- cg_entry = kmalloc(sizeof(struct cg_list_entry), GFP_KERNEL);
- if (!cg_entry) {
- put_css_set(newcg);
- return -ENOMEM;
- }
- cg_entry->cg = newcg;
- list_add(&cg_entry->links, newcg_list);
- return 0;
-}
-
/**
* cgroup_attach_proc - attach all threads in a threadgroup to a cgroup
* @cgrp: the cgroup to attach to
@@ -2070,20 +1997,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
int retval, i, group_size;
struct cgroup_subsys *ss, *failed_ss = NULL;
/* guaranteed to be initialized later, but the compiler needs this */
- struct css_set *oldcg;
struct cgroupfs_root *root = cgrp->root;
/* threadgroup list cursor and array */
struct task_struct *tsk;
struct task_and_cgroup *tc;
struct flex_array *group;
struct cgroup_taskset tset = { };
- /*
- * we need to make sure we have css_sets for all the tasks we're
- * going to move -before- we actually start moving them, so that in
- * case we get an ENOMEM we can bail out before making any changes.
- */
- struct list_head newcg_list;
- struct cg_list_entry *cg_entry, *temp_nobe;
/*
* step 0: in order to do expensive, possibly blocking operations for
@@ -2091,6 +2010,10 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
* rcu or tasklist locked. instead, build an array of all threads in the
* group - group_rwsem prevents new threads from appearing, and if
* threads exit, this will just be an over-estimate.
+ *
+ * While creating the list, also make sure css_sets exist for all
+ * threads to be migrated. we use find_css_set, which allocates a new
+ * one if necessary.
*/
group_size = get_nr_threads(leader);
/* flex_array supports very large thread-groups better than kmalloc. */
@@ -2137,6 +2060,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
/* nothing to do if this task is already in the cgroup */
if (ent.cgrp == cgrp)
continue;
+ ent.cg = find_css_set(tsk->cgroups, cgrp);
+ if (!ent.cg) {
+ retval = -ENOMEM;
+ group_size = i;
+ goto out_list_teardown;
+ }
retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
BUG_ON(retval != 0);
i++;
@@ -2150,7 +2079,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
/* methods shouldn't be called if no task is actually migrating */
retval = 0;
if (!group_size)
- goto out_free_group_list;
+ goto out_list_teardown;
/*
* step 1: check that we can legitimately attach to the cgroup.
@@ -2166,34 +2095,18 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
}
/*
- * step 2: make sure css_sets exist for all threads to be migrated.
- * we use find_css_set, which allocates a new one if necessary.
- */
- INIT_LIST_HEAD(&newcg_list);
- for (i = 0; i < group_size; i++) {
- tc = flex_array_get(group, i);
- oldcg = tc->task->cgroups;
-
- /* if we don't already have it in the list get a new one */
- if (!css_set_check_fetched(cgrp, tc->task, oldcg, &newcg_list))
- if (retval = css_set_prefetch(cgrp, oldcg, &newcg_list))
- goto out_list_teardown;
- }
-
- /*
- * step 3: now that we're guaranteed success wrt the css_sets,
+ * step 2: now that we're guaranteed success wrt the css_sets,
* proceed to move all tasks to the new cgroup. There are no
* failure cases after here, so this is the commit point.
*/
for (i = 0; i < group_size; i++) {
tc = flex_array_get(group, i);
- retval = cgroup_task_migrate(cgrp, tc->cgrp, tc->task, true);
- BUG_ON(retval);
+ cgroup_task_migrate(cgrp, tc->cgrp, tc->task, tc->cg);
}
/* nothing is sensitive to fork() after this point. */
/*
- * step 4: do subsystem attach callbacks.
+ * step 3: do subsystem attach callbacks.
*/
for_each_subsys(root, ss) {
if (ss->attach)
@@ -2201,20 +2114,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
}
/*
- * step 5: success! and cleanup
+ * step 4: success! and cleanup
*/
synchronize_rcu();
cgroup_wakeup_rmdir_waiter(cgrp);
retval = 0;
-out_list_teardown:
- /* clean up the list of prefetched css_sets. */
- list_for_each_entry_safe(cg_entry, temp_nobe, &newcg_list, links) {
- list_del(&cg_entry->links);
- put_css_set(cg_entry->cg);
- kfree(cg_entry);
- }
out_cancel_attach:
- /* same deal as in cgroup_attach_task */
if (retval) {
for_each_subsys(root, ss) {
if (ss == failed_ss)
@@ -2223,6 +2128,13 @@ out_cancel_attach:
ss->cancel_attach(ss, cgrp, &tset);
}
}
+out_list_teardown:
+ if (retval) {
+ for (i = 0; i < group_size; i++) {
+ tc = flex_array_get(group, i);
+ put_css_set(tc->cg);
+ }
+ }
out_free_group_list:
flex_array_free(group);
return retval;
--
1.7.3.1
* Re: [PATCH 1/4] cgroup: remove redundant get/put of old css_set from migrate
[not found] ` <1324527518-24461-1-git-send-email-msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
` (2 preceding siblings ...)
2011-12-22 4:18 ` [PATCH 4/4] cgroup: remove extra calls to find_existing_css_set Mandeep Singh Baines
@ 2011-12-22 5:11 ` Li Zefan
3 siblings, 0 replies; 30+ messages in thread
From: Li Zefan @ 2011-12-22 5:11 UTC (permalink / raw)
To: Mandeep Singh Baines
Cc: Frederic Weisbecker,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov, Tejun Heo,
cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton, Paul Menage
Mandeep Singh Baines wrote:
> We can now assume that the css_set reference held by the task
> will not go away for an exiting task. PF_EXITING state can be
> trusted throughout migration by checking it after locking
> threadgroup.
>
> While at it, renamed css_set_check_fetched to css_set_fetched.
> !css_set_fetched() seems to read better than
> !css_set_check_fetched().
>
> Changes in V4:
> * https://lkml.org/lkml/2011/12/20/368 (Tejun Heo)
> * Fix typo in commit message
> * Undid the rename of css_set_check_fetched
> * https://lkml.org/lkml/2011/12/20/427 (Li Zefan)
> * Fix comment in cgroup_task_migrate()
> Changes in V3:
> * https://lkml.org/lkml/2011/12/20/255 (Frederic Weisbecker)
> * Fixed to put error in retval
> Changes in V2:
> * https://lkml.org/lkml/2011/12/19/289 (Tejun Heo)
> * Updated commit message
>
> Signed-off-by: Mandeep Singh Baines <msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
Reviewed-by: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Cc: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Cc: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
> Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Cc: Frederic Weisbecker <fweisbec-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: Oleg Nesterov <oleg-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
> Cc: Paul Menage <paul-inf54ven1CmVyaH7bEyXVA@public.gmane.org>
> ---
> kernel/cgroup.c | 28 ++++++++--------------------
> 1 files changed, 8 insertions(+), 20 deletions(-)
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 2/4] cgroup: remove redundant get/put of task struct
[not found] ` <1324527518-24461-2-git-send-email-msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
@ 2011-12-22 5:36 ` Li Zefan
0 siblings, 0 replies; 30+ messages in thread
From: Li Zefan @ 2011-12-22 5:36 UTC (permalink / raw)
To: Mandeep Singh Baines
Cc: Frederic Weisbecker,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov, Tejun Heo,
cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton, Paul Menage
Mandeep Singh Baines wrote:
> threadgroup_lock() guarantees that the target threadgroup will
> remain stable - no new task will be added, no new PF_EXITING
> will be set and exec won't happen.
>
> Changes in V2:
> * https://lkml.org/lkml/2011/12/20/369 (Tejun Heo)
> * Undo incorrect removal of get/put from attach_task_by_pid()
> * Author
> * Remove a comment which is made stale by this change
>
> Signed-off-by: Mandeep Singh Baines <msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
Acked-by: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Cc: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Cc: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
> Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Cc: Frederic Weisbecker <fweisbec-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: Oleg Nesterov <oleg-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
> Cc: Paul Menage <paul-inf54ven1CmVyaH7bEyXVA@public.gmane.org>
> ---
> kernel/cgroup.c | 11 ++---------
> 1 files changed, 2 insertions(+), 9 deletions(-)
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 4/4] cgroup: remove extra calls to find_existing_css_set
[not found] ` <1324527518-24461-4-git-send-email-msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
@ 2011-12-22 5:50 ` Li Zefan
0 siblings, 0 replies; 30+ messages in thread
From: Li Zefan @ 2011-12-22 5:50 UTC (permalink / raw)
To: Mandeep Singh Baines
Cc: Frederic Weisbecker,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov, Tejun Heo,
cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton, Paul Menage
> @@ -2091,6 +2010,10 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> * rcu or tasklist locked. instead, build an array of all threads in the
> * group - group_rwsem prevents new threads from appearing, and if
> * threads exit, this will just be an over-estimate.
> + *
> + * While creating the list, also make sure css_sets exist for all
> + * threads to be migrated. we use find_css_set, which allocates a new
> + * one if necessary.
> */
> group_size = get_nr_threads(leader);
> /* flex_array supports very large thread-groups better than kmalloc. */
> @@ -2137,6 +2060,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> /* nothing to do if this task is already in the cgroup */
> if (ent.cgrp == cgrp)
> continue;
> + ent.cg = find_css_set(tsk->cgroups, cgrp);
Unfortunately this won't work, because we are holding tasklist_lock.
> + if (!ent.cg) {
> + retval = -ENOMEM;
> + group_size = i;
> + goto out_list_teardown;
> + }
> retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
> BUG_ON(retval != 0);
> i++;
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 4/4] cgroup: remove extra calls to find_existing_css_set
[not found] ` <4EF2C536.7070408-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
@ 2011-12-22 9:44 ` Frederic Weisbecker
0 siblings, 0 replies; 30+ messages in thread
From: Frederic Weisbecker @ 2011-12-22 9:44 UTC (permalink / raw)
To: Li Zefan
Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov, Paul Menage,
Tejun Heo, cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton,
Mandeep Singh Baines
On Thu, Dec 22, 2011 at 01:50:46PM +0800, Li Zefan wrote:
> > @@ -2091,6 +2010,10 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> > * rcu or tasklist locked. instead, build an array of all threads in the
> > * group - group_rwsem prevents new threads from appearing, and if
> > * threads exit, this will just be an over-estimate.
> > + *
> > + * While creating the list, also make sure css_sets exist for all
> > + * threads to be migrated. we use find_css_set, which allocates a new
> > + * one if necessary.
> > */
> > group_size = get_nr_threads(leader);
> > /* flex_array supports very large thread-groups better than kmalloc. */
> > @@ -2137,6 +2060,12 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> > /* nothing to do if this task is already in the cgroup */
> > if (ent.cgrp == cgrp)
> > continue;
> > + ent.cg = find_css_set(tsk->cgroups, cgrp);
>
> Unfortunately this won't work, because we are holding tasklist_lock.
I believe we can remove tasklist_lock now (in a separate patch).
It was there in order to protect while_each_thread() against exec but
now we have threadgroup_lock().
I think we only need to use rcu_read_lock() to protect against concurrent
removal in exit.
> > + if (!ent.cg) {
> > + retval = -ENOMEM;
> > + group_size = i;
> > + goto out_list_teardown;
> > + }
> > retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
> > BUG_ON(retval != 0);
> > i++;
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 3/4] cgroup: only need to check oldcgrp==newgrp once
[not found] ` <1324527518-24461-3-git-send-email-msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
@ 2011-12-22 15:38 ` Tejun Heo
0 siblings, 0 replies; 30+ messages in thread
From: Tejun Heo @ 2011-12-22 15:38 UTC (permalink / raw)
To: Mandeep Singh Baines
Cc: Frederic Weisbecker,
containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov,
cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton, Paul Menage
On Wed, Dec 21, 2011 at 08:18:37PM -0800, Mandeep Singh Baines wrote:
> In cgroup_attach_proc it is now sufficient to only check that
> oldcgrp==newcgrp once. Now that we are using threadgroup_lock()
> during the migrations, oldcgrp will not change.
>
> Signed-off-by: Mandeep Singh Baines <msb-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
> Reviewed-by: Li Zefan <lizf-BthXqXjhjHXQFUHtdCDX3A@public.gmane.org>
> Cc: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
> Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
> Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
> Cc: Frederic Weisbecker <fweisbec-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> Cc: Oleg Nesterov <oleg-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
> Cc: Paul Menage <paul-inf54ven1CmVyaH7bEyXVA@public.gmane.org>
Applied 1-3 to cgroup/for-3.3. I edited out mention of check_fetched
rename in the commit description of the first patch and applied
Acked-by from Li to all three. Li, let's stick to ack/nack.
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH 4/4] cgroup: remove extra calls to find_existing_css_set
2011-12-22 9:44 ` Frederic Weisbecker
(?)
(?)
@ 2011-12-22 15:40 ` Tejun Heo
-1 siblings, 0 replies; 30+ messages in thread
From: Tejun Heo @ 2011-12-22 15:40 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Oleg Nesterov, Paul Menage,
cgroups-u79uwXL29TY76Z2rM5mHXA, Andrew Morton,
Mandeep Singh Baines
On Thu, Dec 22, 2011 at 10:44:39AM +0100, Frederic Weisbecker wrote:
> > > if (ent.cgrp == cgrp)
> > > continue;
> > > + ent.cg = find_css_set(tsk->cgroups, cgrp);
> >
> > unfortunately This won't work, because we are holding tasklist_lock.
>
> I believe we can remove tasklist_lock now (in a seperate patch).
>
> It was there in order to protect while_each_thread() against exec but
> now we have threadgroup_lock().
>
> I think we only need to use rcu_read_lock() to protect against concurrent
> removal in exit.
Yeah, that should work and I really like this patch.
kernel/cgroup.c | 152 ++++++++++++-------------------------------------------
1 files changed, 32 insertions(+), 120 deletions(-)
Let's get it working. :)
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 30+ messages in thread