From: KAMEZAWA Hiroyuki <kamezawa.hiroyu-+CUm20s59erQFUHtdCDX3A@public.gmane.org>
To: Ben Blum <bblum-OM76b2Iv3yLQjUSlxSEPGw@public.gmane.org>
Cc: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	oleg-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org,
	menage-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org,
	ebiederm-aS9lmoZGLiVWk0Htik3J/w@public.gmane.org
Subject: Re: [PATCH v4 2/2] cgroups: make procs file writable
Date: Wed, 4 Aug 2010 10:08:11 +0900
Message-ID: <20100804100811.199d73ba.kamezawa.hiroyu__21083.9595798918$1280884693$gmane$org@jp.fujitsu.com>
In-Reply-To: <20100730235902.GC22644-dJQ2lsn+DImqwBT9kiuFm8WGCVk0P7UB@public.gmane.org>

On Fri, 30 Jul 2010 19:59:02 -0400
Ben Blum <bblum-OM76b2Iv3yLQjUSlxSEPGw@public.gmane.org> wrote:

> Makes procs file writable to move all threads by tgid at once
> 
> From: Ben Blum <bblum-OM76b2Iv3yLQjUSlxSEPGw@public.gmane.org>
> 
> This patch adds functionality that enables users to move all threads in a
> threadgroup at once to a cgroup by writing the tgid to the 'cgroup.procs'
> file. The current implementation uses a per-threadgroup rwsem that's
> taken for reading in the fork() path to prevent newly forking threads within
> the threadgroup from "escaping" while the move is in progress.
> 
> Signed-off-by: Ben Blum <bblum-OM76b2Iv3yLQjUSlxSEPGw@public.gmane.org>
> ---
>  Documentation/cgroups/cgroups.txt |   13 +
>  kernel/cgroup.c                   |  426 +++++++++++++++++++++++++++++++++----
>  kernel/cgroup_freezer.c           |    4 
>  kernel/cpuset.c                   |    4 
>  kernel/ns_cgroup.c                |    4 
>  kernel/sched.c                    |    4 
>  6 files changed, 405 insertions(+), 50 deletions(-)
> 
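(For reference, the fork-path locking the description above refers to is added
by patch 1/2; a minimal sketch of what the read side presumably looks like
there, reusing the threadgroup_fork_lock name that appears later in this
patch -- details are assumptions from the patch 1/2 title, not quoted code:

	if (clone_flags & CLONE_THREAD)
		down_read(&current->signal->threadgroup_fork_lock);
	/* ... copy_process() links the new thread into the group ... */
	if (clone_flags & CLONE_THREAD)
		up_read(&current->signal->threadgroup_fork_lock);

so the down_write() in cgroup_attach_proc below excludes every concurrent
clone in that threadgroup.)
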
> diff --git a/Documentation/cgroups/cgroups.txt b/Documentation/cgroups/cgroups.txt
> index b34823f..5f3c707 100644
> --- a/Documentation/cgroups/cgroups.txt
> +++ b/Documentation/cgroups/cgroups.txt
> @@ -235,7 +235,8 @@ containing the following files describing that cgroup:
>   - cgroup.procs: list of tgids in the cgroup.  This list is not
>     guaranteed to be sorted or free of duplicate tgids, and userspace
>     should sort/uniquify the list if this property is required.
> -   This is a read-only file, for now.
> +   Writing a thread group id into this file moves all threads in that
> +   group into this cgroup.
>   - notify_on_release flag: run the release agent on exit?
>   - release_agent: the path to use for release notifications (this file
>     exists in the top cgroup only)
> @@ -416,6 +417,12 @@ You can attach the current shell task by echoing 0:
>  
>  # echo 0 > tasks
>  
> +You can use the cgroup.procs file instead of the tasks file to move all
> +threads in a threadgroup at once. Echoing the pid of any task in a
> +threadgroup to cgroup.procs causes all tasks in that threadgroup to be
> +attached to the cgroup. Writing 0 to cgroup.procs moves all tasks
> +in the writing task's threadgroup.
> +
>  2.3 Mounting hierarchies by name
>  --------------------------------
>  
> @@ -564,7 +571,9 @@ called on a fork. If this method returns 0 (success) then this should
>  remain valid while the caller holds cgroup_mutex and it is ensured that either
>  attach() or cancel_attach() will be called in future. If threadgroup is
>  true, then a successful result indicates that all threads in the given
> -thread's threadgroup can be moved together.
> +thread's threadgroup can be moved together. If the subsystem wants to
> +iterate over task->thread_group, it must take rcu_read_lock then check
> +if thread_group_leader(task), returning -EAGAIN if that fails.
>  
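(Concretely, the pattern this paragraph prescribes is the one the freezer,
cpuset, ns and sched hunks at the end of this patch all follow; a minimal
sketch, with the per-subsystem checks elided:

	if (threadgroup) {
		struct task_struct *c;
		rcu_read_lock();
		if (!thread_group_leader(task)) {
			rcu_read_unlock();
			return -EAGAIN;
		}
		list_for_each_entry_rcu(c, &task->thread_group, thread_group) {
			/* per-thread can_attach checks go here */
		}
		rcu_read_unlock();
	}
)
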
>  void cancel_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
>  	       struct task_struct *task, bool threadgroup)
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index f91d7dd..fab8c87 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -1688,6 +1688,76 @@ int cgroup_path(const struct cgroup *cgrp, char *buf, int buflen)
>  }
>  EXPORT_SYMBOL_GPL(cgroup_path);
>  
> +/*
> + * cgroup_task_migrate - move a task from one cgroup to another.
> + *
> + * 'guarantee' is set if the caller promises that a new css_set for the task
> + * will already exit. If not set, this function might sleep, and can fail with
           
           Typo: "will already exit" should be "will already exist"?

> + * -ENOMEM. Otherwise, it can only fail with -ESRCH.
> + */
> +static int cgroup_task_migrate(struct cgroup *cgrp, struct cgroup *oldcgrp,
> +			       struct task_struct *tsk, bool guarantee)
> +{
> +	struct css_set *oldcg;
> +	struct css_set *newcg;
> +
> +	/*
> +	 * get old css_set. we need to take task_lock and refcount it, because
> +	 * an exiting task can change its css_set to init_css_set and drop its
> +	 * old one without taking cgroup_mutex.
> +	 */
> +	task_lock(tsk);
> +	oldcg = tsk->cgroups;
> +	get_css_set(oldcg);
> +	task_unlock(tsk);
> +
> +	/* locate or allocate a new css_set for this task. */
> +	if (guarantee) {
> +		/* we know the css_set we want already exists. */
> +		struct cgroup_subsys_state *template[CGROUP_SUBSYS_COUNT];
> +		read_lock(&css_set_lock);
> +		newcg = find_existing_css_set(oldcg, cgrp, template);
> +		BUG_ON(!newcg);
> +		get_css_set(newcg);
> +		read_unlock(&css_set_lock);
> +	} else {
> +		might_sleep();
> +		/* find_css_set will give us newcg already referenced. */
> +		newcg = find_css_set(oldcg, cgrp);
> +		if (!newcg) {
> +			put_css_set(oldcg);
> +			return -ENOMEM;
> +		}
> +	}
> +	put_css_set(oldcg);
> +
> +	/* if PF_EXITING is set, the tsk->cgroups pointer is no longer safe. */
> +	task_lock(tsk);
> +	if (tsk->flags & PF_EXITING) {
> +		task_unlock(tsk);
> +		put_css_set(newcg);
> +		return -ESRCH;
> +	}
> +	rcu_assign_pointer(tsk->cgroups, newcg);
> +	task_unlock(tsk);
> +
> +	/* Update the css_set linked lists if we're using them */
> +	write_lock(&css_set_lock);
> +	if (!list_empty(&tsk->cg_list))
> +		list_move(&tsk->cg_list, &newcg->tasks);
> +	write_unlock(&css_set_lock);
> +
> +	/*
> +	 * We just gained a reference on oldcg by taking it from the task. As
> +	 * trading it for newcg is protected by cgroup_mutex, we're safe to drop
> +	 * it here; it will be freed under RCU.
> +	 */
> +	put_css_set(oldcg);
> +
> +	set_bit(CGRP_RELEASABLE, &oldcgrp->flags);
> +	return 0;
> +}
> +
>  /**
>   * cgroup_attach_task - attach task 'tsk' to cgroup 'cgrp'
>   * @cgrp: the cgroup the task is attaching to
> @@ -1698,11 +1768,9 @@ EXPORT_SYMBOL_GPL(cgroup_path);
>   */
>  int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk)
>  {
> -	int retval = 0;
> +	int retval;
>  	struct cgroup_subsys *ss, *failed_ss = NULL;
>  	struct cgroup *oldcgrp;
> -	struct css_set *cg;
> -	struct css_set *newcg;
>  	struct cgroupfs_root *root = cgrp->root;
>  
>  	/* Nothing to do if the task is already in that cgroup */
> @@ -1726,46 +1794,16 @@ int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk)
>  		}
>  	}
>  
> -	task_lock(tsk);
> -	cg = tsk->cgroups;
> -	get_css_set(cg);
> -	task_unlock(tsk);
> -	/*
> -	 * Locate or allocate a new css_set for this task,
> -	 * based on its final set of cgroups
> -	 */
> -	newcg = find_css_set(cg, cgrp);
> -	put_css_set(cg);
> -	if (!newcg) {
> -		retval = -ENOMEM;
> -		goto out;
> -	}
> -
> -	task_lock(tsk);
> -	if (tsk->flags & PF_EXITING) {
> -		task_unlock(tsk);
> -		put_css_set(newcg);
> -		retval = -ESRCH;
> +	retval = cgroup_task_migrate(cgrp, oldcgrp, tsk, false);
> +	if (retval)
>  		goto out;
> -	}
> -	rcu_assign_pointer(tsk->cgroups, newcg);
> -	task_unlock(tsk);
> -
> -	/* Update the css_set linked lists if we're using them */
> -	write_lock(&css_set_lock);
> -	if (!list_empty(&tsk->cg_list)) {
> -		list_del(&tsk->cg_list);
> -		list_add(&tsk->cg_list, &newcg->tasks);
> -	}
> -	write_unlock(&css_set_lock);
>  
>  	for_each_subsys(root, ss) {
>  		if (ss->attach)
>  			ss->attach(ss, cgrp, oldcgrp, tsk, false);
>  	}
> -	set_bit(CGRP_RELEASABLE, &oldcgrp->flags);
> +

Hmm. With this change, we call ss->attach(ss, cgrp, oldcgrp, tsk, false) after
marking CGRP_RELEASABLE (+ synchronize_rcu()) on the old cgroup... is that safe?
And why move it before attach()?



>  	synchronize_rcu();
> -	put_css_set(cg);
>  
>  	/*
>  	 * wake up rmdir() waiter. the rmdir should fail since the cgroup
> @@ -1791,49 +1829,341 @@ out:
>  }
>  
>  /*
> - * Attach task with pid 'pid' to cgroup 'cgrp'. Call with cgroup_mutex
> - * held. May take task_lock of task
> + * cgroup_attach_proc works in two stages, the first of which prefetches all
> + * new css_sets needed (to make sure we have enough memory before committing
> + * to the move) and stores them in a list of entries of the following type.
> + * TODO: possible optimization: use css_set->rcu_head for chaining instead
> + */
> +struct cg_list_entry {
> +	struct css_set *cg;
> +	struct list_head links;
> +};
> +
> +static bool css_set_check_fetched(struct cgroup *cgrp,
> +				  struct task_struct *tsk, struct css_set *cg,
> +				  struct list_head *newcg_list)
> +{
> +	struct css_set *newcg;
> +	struct cg_list_entry *cg_entry;
> +	struct cgroup_subsys_state *template[CGROUP_SUBSYS_COUNT];
> +
> +	read_lock(&css_set_lock);
> +	newcg = find_existing_css_set(cg, cgrp, template);
> +	if (newcg)
> +		get_css_set(newcg);
> +	read_unlock(&css_set_lock);
> +
> +	/* doesn't exist at all? */
> +	if (!newcg)
> +		return false;
> +	/* see if it's already in the list */
> +	list_for_each_entry(cg_entry, newcg_list, links) {
> +		if (cg_entry->cg == newcg) {
> +			put_css_set(newcg);
> +			return true;
> +		}
> +	}
> +
> +	/* not found */
> +	put_css_set(newcg);
> +	return false;
> +}
> +
> +/*
> + * Find the new css_set and store it in the list in preparation for moving the
> + * given task to the given cgroup. Returns 0 or -ENOMEM.
>   */
> -static int attach_task_by_pid(struct cgroup *cgrp, u64 pid)
> +static int css_set_prefetch(struct cgroup *cgrp, struct css_set *cg,
> +			    struct list_head *newcg_list)
> +{
> +	struct css_set *newcg;
> +	struct cg_list_entry *cg_entry;
> +
> +	/* ensure a new css_set will exist for this thread */
> +	newcg = find_css_set(cg, cgrp);
> +	if (!newcg)
> +		return -ENOMEM;
> +	/* add it to the list */
> +	cg_entry = kmalloc(sizeof(struct cg_list_entry), GFP_KERNEL);
> +	if (!cg_entry) {
> +		put_css_set(newcg);
> +		return -ENOMEM;
> +	}
> +	cg_entry->cg = newcg;
> +	list_add(&cg_entry->links, newcg_list);
> +	return 0;
> +}
> +
> +/**
> + * cgroup_attach_proc - attach all threads in a threadgroup to a cgroup
> + * @cgrp: the cgroup to attach to
> + * @leader: the threadgroup leader task_struct of the group to be attached
> + *
> + * Call holding cgroup_mutex. Will take task_lock of each thread in leader's
> + * threadgroup individually in turn.
> + */
> +int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
> +{
> +	int retval;
> +	struct cgroup_subsys *ss, *failed_ss = NULL;
> +	struct cgroup *oldcgrp;
> +	struct css_set *oldcg;
> +	struct cgroupfs_root *root = cgrp->root;
> +	/* threadgroup list cursor */
> +	struct task_struct *tsk;
> +	/*
> +	 * we need to make sure we have css_sets for all the tasks we're
> +	 * going to move -before- we actually start moving them, so that in
> +	 * case we get an ENOMEM we can bail out before making any changes.
> +	 */
> +	struct list_head newcg_list;
> +	struct cg_list_entry *cg_entry, *temp_nobe;
> +
> +	/* check that we can legitimately attach to the cgroup. */
> +	for_each_subsys(root, ss) {
> +		if (ss->can_attach) {
> +			retval = ss->can_attach(ss, cgrp, leader, true);
> +			if (retval) {
> +				failed_ss = ss;
> +				goto out;
> +			}
> +		}
> +	}

Then we cannot do attach-limitation control per thread? (This only checks the leader.)
Is that OK for all subsystems?


> +
> +	/*
> +	 * step 1: make sure css_sets exist for all threads to be migrated.
> +	 * we use find_css_set, which allocates a new one if necessary.
> +	 */
> +	INIT_LIST_HEAD(&newcg_list);
> +	oldcgrp = task_cgroup_from_root(leader, root);
> +	if (cgrp != oldcgrp) {
> +		/* get old css_set */
> +		task_lock(leader);
> +		if (leader->flags & PF_EXITING) {
> +			task_unlock(leader);
> +			goto prefetch_loop;
> +		}
Why do we continue here instead of returning -ESRCH?


> +		oldcg = leader->cgroups;
> +		get_css_set(oldcg);
> +		task_unlock(leader);
> +		/* acquire new one */
> +		retval = css_set_prefetch(cgrp, oldcg, &newcg_list);
> +		put_css_set(oldcg);
> +		if (retval)
> +			goto list_teardown;
> +	}
> +prefetch_loop:
> +	rcu_read_lock();
> +	/* sanity check - if we raced with de_thread, we must abort */
> +	if (!thread_group_leader(leader)) {
> +		retval = -EAGAIN;
> +		goto list_teardown;
> +	}

Should this be -EAGAIN, -ESRCH, or -EBUSY?

> +	/*
> +	 * if we need to fetch a new css_set for this task, we must exit the
> +	 * rcu_read section because allocating it can sleep. afterwards, we'll
> +	 * need to restart iteration on the threadgroup list - the whole thing
> +	 * will be O(nm) in the number of threads and css_sets; as the typical
> +	 * case has only one css_set for all of them, usually O(n). which ones
> +	 * we need allocated won't change as long as we hold cgroup_mutex.
> +	 */
> +	list_for_each_entry_rcu(tsk, &leader->thread_group, thread_group) {
> +		/* nothing to do if this task is already in the cgroup */
> +		oldcgrp = task_cgroup_from_root(tsk, root);
> +		if (cgrp == oldcgrp)
> +			continue;
> +		/* get old css_set pointer */
> +		task_lock(tsk);
> +		if (tsk->flags & PF_EXITING) {
> +			/* ignore this task if it's going away */
> +			task_unlock(tsk);

The task is going away but may still exist for a while... is "continue" safe
for keeping consistency?


> +			continue;
> +		}
> +		oldcg = tsk->cgroups;
> +		get_css_set(oldcg);
> +		task_unlock(tsk);
> +		/* see if the new one for us is already in the list? */
> +		if (css_set_check_fetched(cgrp, tsk, oldcg, &newcg_list)) {
> +			/* was already there, nothing to do. */
> +			put_css_set(oldcg);
> +		} else {
> +			/* we don't already have it. get new one. */
> +			rcu_read_unlock();
> +			retval = css_set_prefetch(cgrp, oldcg, &newcg_list);
> +			put_css_set(oldcg);
> +			if (retval)
> +				goto list_teardown;
> +			/* begin iteration again. */
> +			goto prefetch_loop;

Hmm? Why do we need to restart from the first entry?
(Because of the rcu_read_unlock(), maybe?)
Does this function work well if the process has 10000+ threads?

How about logic like this?
==

	/* At first, find out the necessary things */
	rcu_read_lock();
	list_for_each_entry_rcu(tsk, &leader->thread_group, thread_group) {
		oldcgrp = task_cgroup_from_root(tsk, root);
		if (oldcgrp == cgrp)
			continue;
		task_lock(tsk);
		if (tsk->flags & PF_EXITING) {
			task_unlock(tsk);
			continue;
		}
		oldcg = tsk->cgroups;
		get_css_set(oldcg);
		task_unlock(tsk);
		read_lock(&css_set_lock);
		newcg = find_existing_css_set(oldcg, cgrp, template);
		if (newcg) {
			remember_this_newcg(newcg, &found_cg_array);
			put_css_set(oldcg);
		} else
			remember_need_to_allocate(oldcg, &need_to_allocate_array);
		read_unlock(&css_set_lock);
	}
	rcu_read_unlock();
	/* Sort the css_sets found, dropping duplicates and their extra refcounts */
	sort_and_unique(found_cg_array);
	/* Likewise for the ones that still need to be allocated */
	sort_and_unique(need_to_allocate_array);
	/* Allocate the new ones */
	newly_allocated_array = allocate_new_cg_lists(need_to_allocate_array);
	drop_refcnt_of_old_cgs(need_to_allocate_array);

	/* Now we have all the necessary css_sets */
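
(With this shape each thread is visited only once and all allocation happens
outside the rcu_read section, so it should behave even for 10000+ threads.
The remember_*/sort_and_unique/allocate_new_cg_lists helpers above are
placeholders, of course, not existing functions.)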

> +		}
> +	}
> +	rcu_read_unlock();
> +
> +	/*
> +	 * step 2: now that we're guaranteed success wrt the css_sets, proceed
> +	 * to move all tasks to the new cgroup. we need to lock against possible
> +	 * races with fork(). note: we can safely access leader->signal because
> +	 * attach_task_by_pid takes a reference on leader, which guarantees that
> +	 * the signal_struct will stick around. threadgroup_fork_lock must be
> +	 * taken outside of tasklist_lock to match the order in the fork path.
> +	 */
> +	BUG_ON(!leader->signal);
> +	down_write(&leader->signal->threadgroup_fork_lock);
> +	read_lock(&tasklist_lock);
> +	/* sanity check - if we raced with de_thread, we must abort */
> +	if (!thread_group_leader(leader)) {
> +		retval = -EAGAIN;
> +		read_unlock(&tasklist_lock);
> +		up_write(&leader->signal->threadgroup_fork_lock);
> +		goto list_teardown;
> +	}
> +	/*
> +	 * No failure cases left, so this is the commit point.
> +	 *
> +	 * If the leader is already there, skip moving him. Note: even if the
> +	 * leader is PF_EXITING, we still move all other threads; if everybody
> +	 * is PF_EXITING, we end up doing nothing, which is ok.
> +	 */
> +	oldcgrp = task_cgroup_from_root(leader, root);
> +	if (cgrp != oldcgrp) {
> +		retval = cgroup_task_migrate(cgrp, oldcgrp, leader, true);
> +		BUG_ON(retval != 0 && retval != -ESRCH);
> +	}
> +	/* Now iterate over each thread in the group. */
> +	list_for_each_entry_rcu(tsk, &leader->thread_group, thread_group) {
> +		BUG_ON(tsk->signal != leader->signal);
> +		/* leave current thread as it is if it's already there */
> +		oldcgrp = task_cgroup_from_root(tsk, root);
> +		if (cgrp == oldcgrp)
> +			continue;
> +		/* we don't care whether these threads are exiting */
> +		retval = cgroup_task_migrate(cgrp, oldcgrp, tsk, true);
> +		BUG_ON(retval != 0 && retval != -ESRCH);
> +	}
> +
> +	/*
> +	 * step 3: attach whole threadgroup to each subsystem
> +	 * TODO: if ever a subsystem needs to know the oldcgrp for each task
> +	 * being moved, this call will need to be reworked to communicate that.
> +	 */
> +	for_each_subsys(root, ss) {
> +		if (ss->attach)
> +			ss->attach(ss, cgrp, oldcgrp, leader, true);
> +	}
> +	/* holding these until here keeps us safe from exec() and fork(). */
> +	read_unlock(&tasklist_lock);
> +	up_write(&leader->signal->threadgroup_fork_lock);
> +
> +	/*
> +	 * step 4: success! and cleanup
> +	 */
> +	synchronize_rcu();
> +	cgroup_wakeup_rmdir_waiter(cgrp);
> +	retval = 0;
> +list_teardown:
> +	/* clean up the list of prefetched css_sets. */
> +	list_for_each_entry_safe(cg_entry, temp_nobe, &newcg_list, links) {
> +		list_del(&cg_entry->links);
> +		put_css_set(cg_entry->cg);
> +		kfree(cg_entry);
> +	}
> +out:
> +	if (retval) {
> +		/* same deal as in cgroup_attach_task, with threadgroup=true */
> +		for_each_subsys(root, ss) {
> +			if (ss == failed_ss)
> +				break;
> +			if (ss->cancel_attach)
> +				ss->cancel_attach(ss, cgrp, leader, true);
> +		}
> +	}
> +	return retval;
> +}
> +
> +/*
> + * Find the task_struct of the task to attach by vpid and pass it along to the
> + * function to attach either it or all tasks in its threadgroup. Will take
> + * cgroup_mutex; may take task_lock of task.
> + */
> +static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup)
>  {
>  	struct task_struct *tsk;
>  	const struct cred *cred = current_cred(), *tcred;
>  	int ret;
>  
> +	if (!cgroup_lock_live_group(cgrp))
> +		return -ENODEV;
> +
>  	if (pid) {
>  		rcu_read_lock();
>  		tsk = find_task_by_vpid(pid);
> -		if (!tsk || tsk->flags & PF_EXITING) {
> +		if (!tsk) {
> +			rcu_read_unlock();
> +			cgroup_unlock();
> +			return -ESRCH;
> +		}
> +		if (threadgroup) {
> +			/*
> +			 * it is safe to find group_leader because tsk was found
> +			 * in the tid map, meaning it can't have been unhashed
> +			 * by someone in de_thread changing the leadership.
> +			 */
> +			tsk = tsk->group_leader;
> +			BUG_ON(!thread_group_leader(tsk));
> +		} else if (tsk->flags & PF_EXITING) {
> +			/* optimization for the single-task-only case */
>  			rcu_read_unlock();
> +			cgroup_unlock();
>  			return -ESRCH;
>  		}
>  
> +		/*
> +		 * even if we're attaching all tasks in the thread group, we
> +		 * only need to check permissions on one of them.
> +		 */
>  		tcred = __task_cred(tsk);
>  		if (cred->euid &&
>  		    cred->euid != tcred->uid &&
>  		    cred->euid != tcred->suid) {
>  			rcu_read_unlock();
> +			cgroup_unlock();
>  			return -EACCES;
>  		}
>  		get_task_struct(tsk);
>  		rcu_read_unlock();
>  	} else {
> -		tsk = current;
> +		if (threadgroup)
> +			tsk = current->group_leader;
> +		else

I'm not sure, but is "group_leader" safe to access here?

> +			tsk = current;
>  		get_task_struct(tsk);
>  	}
>  
> -	ret = cgroup_attach_task(cgrp, tsk);
> +	if (threadgroup)
> +		ret = cgroup_attach_proc(cgrp, tsk);
> +	else
> +		ret = cgroup_attach_task(cgrp, tsk);
>  	put_task_struct(tsk);
> +	cgroup_unlock();
>  	return ret;
>  }
>  
>  static int cgroup_tasks_write(struct cgroup *cgrp, struct cftype *cft, u64 pid)
>  {
> +	return attach_task_by_pid(cgrp, pid, false);
> +}
> +
> +static int cgroup_procs_write(struct cgroup *cgrp, struct cftype *cft, u64 tgid)
> +{
>  	int ret;
> -	if (!cgroup_lock_live_group(cgrp))
> -		return -ENODEV;
> -	ret = attach_task_by_pid(cgrp, pid);
> -	cgroup_unlock();
> +	do {
> +		/*
> +		 * attach_proc fails with -EAGAIN if threadgroup leadership
> +		 * changes in the middle of the operation, in which case we need
> +		 * to find the task_struct for the new leader and start over.
> +		 */
> +		ret = attach_task_by_pid(cgrp, tgid, true);
> +	} while (ret == -EAGAIN);
>  	return ret;
>  }
>  
> @@ -3168,9 +3498,9 @@ static struct cftype files[] = {
>  	{
>  		.name = CGROUP_FILE_GENERIC_PREFIX "procs",
>  		.open = cgroup_procs_open,
> -		/* .write_u64 = cgroup_procs_write, TODO */
> +		.write_u64 = cgroup_procs_write,
>  		.release = cgroup_pidlist_release,
> -		.mode = S_IRUGO,
> +		.mode = S_IRUGO | S_IWUSR,
>  	},
>  	{
>  		.name = "notify_on_release",
> diff --git a/kernel/cgroup_freezer.c b/kernel/cgroup_freezer.c
> index ce71ed5..daf0249 100644
> --- a/kernel/cgroup_freezer.c
> +++ b/kernel/cgroup_freezer.c
> @@ -190,6 +190,10 @@ static int freezer_can_attach(struct cgroup_subsys *ss,
>  		struct task_struct *c;
>  
>  		rcu_read_lock();
> +		if (!thread_group_leader(task)) {
> +			rcu_read_unlock();
> +			return -EAGAIN;
> +		}
>  		list_for_each_entry_rcu(c, &task->thread_group, thread_group) {
>  			if (is_task_frozen_enough(c)) {
>  				rcu_read_unlock();
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index b23c097..3d7c978 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1404,6 +1404,10 @@ static int cpuset_can_attach(struct cgroup_subsys *ss, struct cgroup *cont,
>  		struct task_struct *c;
>  
>  		rcu_read_lock();
> +		if (!thread_group_leader(tsk)) {
> +			rcu_read_unlock();
> +			return -EAGAIN;
> +		}
>  		list_for_each_entry_rcu(c, &tsk->thread_group, thread_group) {
>  			ret = security_task_setscheduler(c, 0, NULL);
>  			if (ret) {
> diff --git a/kernel/ns_cgroup.c b/kernel/ns_cgroup.c
> index 2a5dfec..ecd15d2 100644
> --- a/kernel/ns_cgroup.c
> +++ b/kernel/ns_cgroup.c
> @@ -59,6 +59,10 @@ static int ns_can_attach(struct cgroup_subsys *ss, struct cgroup *new_cgroup,
>  	if (threadgroup) {
>  		struct task_struct *c;
>  		rcu_read_lock();
> +		if (!thread_group_leader(task)) {
> +			rcu_read_unlock();
> +			return -EAGAIN;
> +		}
>  		list_for_each_entry_rcu(c, &task->thread_group, thread_group) {
>  			if (!cgroup_is_descendant(new_cgroup, c)) {
>  				rcu_read_unlock();
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 70fa78d..df53f53 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -8721,6 +8721,10 @@ cpu_cgroup_can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
>  	if (threadgroup) {
>  		struct task_struct *c;
>  		rcu_read_lock();
> +		if (!thread_group_leader(tsk)) {
> +			rcu_read_unlock();
> +			return -EAGAIN;
> +		}
>  		list_for_each_entry_rcu(c, &tsk->thread_group, thread_group) {
>  			retval = cpu_cgroup_can_attach_task(cgrp, c);
>  			if (retval) {


Thanks,
-Kame
