linux-kernel.vger.kernel.org archive mirror
* [RFC 0/3] get rid of mm_struct::owner
@ 2015-05-26 11:50 Michal Hocko
  2015-05-26 11:50 ` [RFC 1/3] memcg: restructure mem_cgroup_can_attach() Michal Hocko
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 11:50 UTC (permalink / raw)
  To: linux-mm
  Cc: Johannes Weiner, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

Hi,
this small series drops the (IMO awkward) mm_struct::owner field, which
is used to track the task which owns the mm_struct and which in turn
provides the mm->mem_cgroup mapping. The motivation for the change and
its drawback (namely a user visible change of behavior) are described
in patch 3.

The first patch is a trivial cleanup by Tejun
(http://marc.info/?l=linux-mm&m=143197860820270) and I have added it
here just to prevent conflicts with his changes.

Patch 2 is preparatory and shouldn't cause any functional changes.
It replaces mc.to with mc.moving_task as the indicator that charges are
being migrated during a task move, because the follow-up patch needs
mc.to to be available even when the charges are not migrated.

I am sending this as an RFC because of the user visible aspect of the
change. I am not convinced that there is a strong use case to justify
keeping mm->owner but I would like to hear back first.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [RFC 1/3] memcg: restructure mem_cgroup_can_attach()
  2015-05-26 11:50 [RFC 0/3] get rid of mm_struct::owner Michal Hocko
@ 2015-05-26 11:50 ` Michal Hocko
  2015-05-26 11:50 ` [RFC 2/3] memcg: Use mc.moving_task as the indication for charge moving Michal Hocko
  2015-05-26 11:50 ` [RFC 3/3] memcg: get rid of mm_struct::owner Michal Hocko
  2 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 11:50 UTC (permalink / raw)
  To: linux-mm
  Cc: Johannes Weiner, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

From: Tejun Heo <tj@kernel.org>

Restructure it to lower nesting level and help the planned threadgroup
leader iteration changes.

This is pure reorganization.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
---
 mm/memcontrol.c | 61 ++++++++++++++++++++++++++++++---------------------------
 1 file changed, 32 insertions(+), 29 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5fd273d22714..f3d92cf0caf4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5001,10 +5001,12 @@ static void mem_cgroup_clear_mc(void)
 static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 				 struct cgroup_taskset *tset)
 {
-	struct task_struct *p = cgroup_taskset_first(tset);
-	int ret = 0;
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct mem_cgroup *from;
+	struct task_struct *p;
+	struct mm_struct *mm;
 	unsigned long move_flags;
+	int ret = 0;
 
 	/*
 	 * We are now commited to this value whatever it is. Changes in this
@@ -5012,36 +5014,37 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 	 * So we need to save it, and keep it going.
 	 */
 	move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
-	if (move_flags) {
-		struct mm_struct *mm;
-		struct mem_cgroup *from = mem_cgroup_from_task(p);
+	if (!move_flags)
+		return 0;
 
-		VM_BUG_ON(from == memcg);
+	p = cgroup_taskset_first(tset);
+	from = mem_cgroup_from_task(p);
 
-		mm = get_task_mm(p);
-		if (!mm)
-			return 0;
-		/* We move charges only when we move a owner of the mm */
-		if (mm->owner == p) {
-			VM_BUG_ON(mc.from);
-			VM_BUG_ON(mc.to);
-			VM_BUG_ON(mc.precharge);
-			VM_BUG_ON(mc.moved_charge);
-			VM_BUG_ON(mc.moved_swap);
-
-			spin_lock(&mc.lock);
-			mc.from = from;
-			mc.to = memcg;
-			mc.flags = move_flags;
-			spin_unlock(&mc.lock);
-			/* We set mc.moving_task later */
-
-			ret = mem_cgroup_precharge_mc(mm);
-			if (ret)
-				mem_cgroup_clear_mc();
-		}
-		mmput(mm);
+	VM_BUG_ON(from == memcg);
+
+	mm = get_task_mm(p);
+	if (!mm)
+		return 0;
+	/* We move charges only when we move a owner of the mm */
+	if (mm->owner == p) {
+		VM_BUG_ON(mc.from);
+		VM_BUG_ON(mc.to);
+		VM_BUG_ON(mc.precharge);
+		VM_BUG_ON(mc.moved_charge);
+		VM_BUG_ON(mc.moved_swap);
+
+		spin_lock(&mc.lock);
+		mc.from = from;
+		mc.to = memcg;
+		mc.flags = move_flags;
+		spin_unlock(&mc.lock);
+		/* We set mc.moving_task later */
+
+		ret = mem_cgroup_precharge_mc(mm);
+		if (ret)
+			mem_cgroup_clear_mc();
 	}
+	mmput(mm);
 	return ret;
 }
 
-- 
2.1.4



* [RFC 2/3] memcg: Use mc.moving_task as the indication for charge moving
  2015-05-26 11:50 [RFC 0/3] get rid of mm_struct::owner Michal Hocko
  2015-05-26 11:50 ` [RFC 1/3] memcg: restructure mem_cgroup_can_attach() Michal Hocko
@ 2015-05-26 11:50 ` Michal Hocko
  2015-05-26 11:50 ` [RFC 3/3] memcg: get rid of mm_struct::owner Michal Hocko
  2 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 11:50 UTC (permalink / raw)
  To: linux-mm
  Cc: Johannes Weiner, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

A non-NULL move_charge_struct::to has been used to indicate whether the
currently ongoing move operation should migrate the charges. The
follow-up patch will require mc.to to be initialized even when we do
not migrate charges, so replace the check with one on mc.moving_task,
which is set only when the migration is requested. Also replace the
open coded check with a helper function (mc_move_charge).

mem_cgroup_clear_mc has to be called unconditionally now because it
has to clean up the from and to pointers. __mem_cgroup_clear_mc does
the migration specific cleanup so it still checks mc_move_charge.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
---
 mm/memcontrol.c | 63 +++++++++++++++++++++++++++++++--------------------------
 1 file changed, 34 insertions(+), 29 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f3d92cf0caf4..4d905209f00f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4984,14 +4984,22 @@ static void __mem_cgroup_clear_mc(void)
 	wake_up_all(&mc.waitq);
 }
 
+static bool mc_move_charge(void)
+{
+	/* moving_task is configured only if the charge is really moved */
+	return mc.moving_task != NULL;
+}
+
 static void mem_cgroup_clear_mc(void)
 {
+	bool move_charge = mc_move_charge();
 	/*
 	 * we must clear moving_task before waking up waiters at the end of
 	 * task migration.
 	 */
 	mc.moving_task = NULL;
-	__mem_cgroup_clear_mc();
+	if (move_charge)
+		__mem_cgroup_clear_mc();
 	spin_lock(&mc.lock);
 	mc.from = NULL;
 	mc.to = NULL;
@@ -5008,15 +5016,6 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 	unsigned long move_flags;
 	int ret = 0;
 
-	/*
-	 * We are now commited to this value whatever it is. Changes in this
-	 * tunable will only affect upcoming migrations, not the current one.
-	 * So we need to save it, and keep it going.
-	 */
-	move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
-	if (!move_flags)
-		return 0;
-
 	p = cgroup_taskset_first(tset);
 	from = mem_cgroup_from_task(p);
 
@@ -5025,21 +5024,29 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 	mm = get_task_mm(p);
 	if (!mm)
 		return 0;
-	/* We move charges only when we move a owner of the mm */
-	if (mm->owner == p) {
-		VM_BUG_ON(mc.from);
-		VM_BUG_ON(mc.to);
-		VM_BUG_ON(mc.precharge);
-		VM_BUG_ON(mc.moved_charge);
-		VM_BUG_ON(mc.moved_swap);
-
-		spin_lock(&mc.lock);
-		mc.from = from;
-		mc.to = memcg;
-		mc.flags = move_flags;
-		spin_unlock(&mc.lock);
-		/* We set mc.moving_task later */
 
+	VM_BUG_ON(mc.from);
+	VM_BUG_ON(mc.to);
+	VM_BUG_ON(mc.precharge);
+	VM_BUG_ON(mc.moved_charge);
+	VM_BUG_ON(mc.moved_swap);
+
+	spin_lock(&mc.lock);
+	mc.from = from;
+	mc.to = memcg;
+	mc.flags = move_flags;
+	spin_unlock(&mc.lock);
+	/* We set mc.moving_task later */
+
+	/*
+	 * We are now commited to this value whatever it is. Changes in this
+	 * tunable will only affect upcoming migrations, not the current one.
+	 * So we need to save it, and keep it going.
+	 */
+	move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
+
+	/* We move charges only when we move a owner of the mm */
+	if (move_flags && mm->owner == p) {
 		ret = mem_cgroup_precharge_mc(mm);
 		if (ret)
 			mem_cgroup_clear_mc();
@@ -5051,8 +5058,7 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 static void mem_cgroup_cancel_attach(struct cgroup_subsys_state *css,
 				     struct cgroup_taskset *tset)
 {
-	if (mc.to)
-		mem_cgroup_clear_mc();
+	mem_cgroup_clear_mc();
 }
 
 static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
@@ -5198,12 +5204,11 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
 	struct mm_struct *mm = get_task_mm(p);
 
 	if (mm) {
-		if (mc.to)
+		if (mc_move_charge())
 			mem_cgroup_move_charge(mm);
 		mmput(mm);
 	}
-	if (mc.to)
-		mem_cgroup_clear_mc();
+	mem_cgroup_clear_mc();
 }
 #else	/* !CONFIG_MMU */
 static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
-- 
2.1.4



* [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 11:50 [RFC 0/3] get rid of mm_struct::owner Michal Hocko
  2015-05-26 11:50 ` [RFC 1/3] memcg: restructure mem_cgroup_can_attach() Michal Hocko
  2015-05-26 11:50 ` [RFC 2/3] memcg: Use mc.moving_task as the indication for charge moving Michal Hocko
@ 2015-05-26 11:50 ` Michal Hocko
  2015-05-26 14:10   ` Johannes Weiner
  2015-05-26 16:36   ` Oleg Nesterov
  2 siblings, 2 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 11:50 UTC (permalink / raw)
  To: linux-mm
  Cc: Johannes Weiner, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

mm_struct::owner keeps track of the task which is in charge of the
specific mm. This is usually the thread group leader of the task but
there are more exotic cases where this doesn't hold.

The most prominent one is when separate tasks (not in the same thread
group) share the address space (by using clone with CLONE_VM without
CLONE_THREAD). The first task will be the owner until it exits.
mm_update_next_owner will then try to find a new owner - a task which
points to the same mm_struct. There is no guarantee that the new owner
will be the thread group leader, though, because the leader might have
exited. Even though such a thread will still be around waiting for the
remaining threads from its group, its mm will be NULL so it cannot be
chosen.

The cgroup migration code, however, assumes only group leaders when
migrating via cgroup.procs (which will be the only mode in the unified
hierarchy API) while mem_cgroup_can_attach accepts only those tasks
which are owners of their mm. So we might end up with tasks which
cannot be migrated. mm_update_next_owner could be tweaked to try harder
and use a group leader whenever possible but this will never be 100%
reliable because all the leaders might be dead. Getting rid of
mm->owner therefore sounds like a better option.

The whole concept of the mm owner is a bit artificial and too tricky to
get right. All the memcg code needs is to find the struct mem_cgroup
for a given mm_struct, and there are only two events when the
association is either built or changed:
	- a new mm is created - dup_mm - when the memcg is inherited
	  from the oldmm
	- the task associated with the mm is moved to another memcg
So it is much easier to bind the mm_struct to the mem_cgroup directly
rather than indirectly via a task. This is exactly what this patch does.

mm_set_memcg and mm_drop_memcg are exported to the core kernel to bind
the old memcg during dup_mm and to release that memcg in mmput after
the last reference is dropped and no task sees the mm anymore. We have
to be careful and take a reference to the memcg->css so that it doesn't
vanish from under our feet.
mm_move_memcg is then used during the task migration to change the
association. This is done in mem_cgroup_move_task before charges get
moved because mem_cgroup_can_attach is too early - other controllers
might still fail and we would have to handle the rollback. The race
between can_attach and attach is harmless and it existed even before.

mm->memcg conforms to standard mem_cgroup locking rules. It has to be
used inside an rcu_read_lock() section and a reference has to be taken
before the unlock if the memcg is to be used outside of it.
mm_move_memcg will make sure that all the preexisting users have
finished before it drops the reference to the old memcg.

Finally mem_cgroup_can_attach will allow task migration only for the
thread group leaders to conform with cgroup core requirements.

Please note that this patch introduces a USER VISIBLE CHANGE OF BEHAVIOR.
Without mm->owner _all_ tasks associated with the mm_struct would
initiate memcg migration while previously only the owner of the
mm_struct could do that. The original behavior was awkward though
because the user task didn't have any means to find out the current
owner (esp. after mm_update_next_owner) so the migration behavior was
not well defined in general.
The new cgroup API (unified hierarchy) will discontinue the tasks file
which means that migrating individual threads will no longer be
possible. In such a setup CLONE_VM without CLONE_THREAD could emulate
the thread behavior, but this patch prevents isolating the memcg
controller from the others for such tasks. Nevertheless I am not
convinced such a use case would really deserve complications on the
memcg code side.

Signed-off-by: Michal Hocko <mhocko@suse.cz>
---
 fs/exec.c                  |  1 -
 include/linux/memcontrol.h | 14 ++++++-
 include/linux/mm_types.h   | 12 +-----
 kernel/exit.c              | 89 -----------------------------------------
 kernel/fork.c              | 10 +----
 mm/debug.c                 |  4 +-
 mm/memcontrol.c            | 99 +++++++++++++++++++++++++++++++++++-----------
 7 files changed, 93 insertions(+), 136 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 02bfd980a40c..2cd4def4b1d6 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -867,7 +867,6 @@ static int exec_mmap(struct mm_struct *mm)
 		up_read(&old_mm->mmap_sem);
 		BUG_ON(active_mm != old_mm);
 		setmax_mm_hiwater_rss(&tsk->signal->maxrss, old_mm);
-		mm_update_next_owner(old_mm);
 		mmput(old_mm);
 		return 0;
 	}
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6c8918114804..315ec1e58acb 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -67,6 +67,8 @@ enum mem_cgroup_events_index {
 };
 
 #ifdef CONFIG_MEMCG
+void mm_drop_memcg(struct mm_struct *mm);
+void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg);
 void mem_cgroup_events(struct mem_cgroup *memcg,
 		       enum mem_cgroup_events_index idx,
 		       unsigned int nr);
@@ -92,7 +94,6 @@ bool mem_cgroup_is_descendant(struct mem_cgroup *memcg,
 bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
 
 extern struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
-extern struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 extern struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg);
 extern struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css);
@@ -104,7 +105,12 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	bool match = false;
 
 	rcu_read_lock();
-	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
+	/*
+	 * rcu_dereference would be better but mem_cgroup is not a complete
+	 * type here
+	 */
+	task_memcg = READ_ONCE(mm->memcg);
+	smp_read_barrier_depends();
 	if (task_memcg)
 		match = mem_cgroup_is_descendant(task_memcg, memcg);
 	rcu_read_unlock();
@@ -195,6 +201,10 @@ void mem_cgroup_split_huge_fixup(struct page *head);
 #else /* CONFIG_MEMCG */
 struct mem_cgroup;
 
+void mm_drop_memcg(struct mm_struct *mm)
+{}
+void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
+{}
 static inline void mem_cgroup_events(struct mem_cgroup *memcg,
 				     enum mem_cgroup_events_index idx,
 				     unsigned int nr)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f6266742ce1f..93dc8cb9c636 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -426,17 +426,7 @@ struct mm_struct {
 	struct kioctx_table __rcu	*ioctx_table;
 #endif
 #ifdef CONFIG_MEMCG
-	/*
-	 * "owner" points to a task that is regarded as the canonical
-	 * user/owner of this mm. All of the following must be true in
-	 * order for it to be changed:
-	 *
-	 * current == mm->owner
-	 * current->mm != mm
-	 * new_owner->mm == mm
-	 * new_owner->alloc_lock is held
-	 */
-	struct task_struct __rcu *owner;
+	struct mem_cgroup __rcu *memcg;
 #endif
 
 	/* store ref to file /proc/<pid>/exe symlink points to */
diff --git a/kernel/exit.c b/kernel/exit.c
index 4089c2fd373e..8f3e5b4c58ce 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -292,94 +292,6 @@ kill_orphaned_pgrp(struct task_struct *tsk, struct task_struct *parent)
 	}
 }
 
-#ifdef CONFIG_MEMCG
-/*
- * A task is exiting.   If it owned this mm, find a new owner for the mm.
- */
-void mm_update_next_owner(struct mm_struct *mm)
-{
-	struct task_struct *c, *g, *p = current;
-
-retry:
-	/*
-	 * If the exiting or execing task is not the owner, it's
-	 * someone else's problem.
-	 */
-	if (mm->owner != p)
-		return;
-	/*
-	 * The current owner is exiting/execing and there are no other
-	 * candidates.  Do not leave the mm pointing to a possibly
-	 * freed task structure.
-	 */
-	if (atomic_read(&mm->mm_users) <= 1) {
-		mm->owner = NULL;
-		return;
-	}
-
-	read_lock(&tasklist_lock);
-	/*
-	 * Search in the children
-	 */
-	list_for_each_entry(c, &p->children, sibling) {
-		if (c->mm == mm)
-			goto assign_new_owner;
-	}
-
-	/*
-	 * Search in the siblings
-	 */
-	list_for_each_entry(c, &p->real_parent->children, sibling) {
-		if (c->mm == mm)
-			goto assign_new_owner;
-	}
-
-	/*
-	 * Search through everything else, we should not get here often.
-	 */
-	for_each_process(g) {
-		if (g->flags & PF_KTHREAD)
-			continue;
-		for_each_thread(g, c) {
-			if (c->mm == mm)
-				goto assign_new_owner;
-			if (c->mm)
-				break;
-		}
-	}
-	read_unlock(&tasklist_lock);
-	/*
-	 * We found no owner yet mm_users > 1: this implies that we are
-	 * most likely racing with swapoff (try_to_unuse()) or /proc or
-	 * ptrace or page migration (get_task_mm()).  Mark owner as NULL.
-	 */
-	mm->owner = NULL;
-	return;
-
-assign_new_owner:
-	BUG_ON(c == p);
-	get_task_struct(c);
-	/*
-	 * The task_lock protects c->mm from changing.
-	 * We always want mm->owner->mm == mm
-	 */
-	task_lock(c);
-	/*
-	 * Delay read_unlock() till we have the task_lock()
-	 * to ensure that c does not slip away underneath us
-	 */
-	read_unlock(&tasklist_lock);
-	if (c->mm != mm) {
-		task_unlock(c);
-		put_task_struct(c);
-		goto retry;
-	}
-	mm->owner = c;
-	task_unlock(c);
-	put_task_struct(c);
-}
-#endif /* CONFIG_MEMCG */
-
 /*
  * Turn us into a lazy TLB process if we
  * aren't already..
@@ -433,7 +345,6 @@ static void exit_mm(struct task_struct *tsk)
 	up_read(&mm->mmap_sem);
 	enter_lazy_tlb(mm, current);
 	task_unlock(tsk);
-	mm_update_next_owner(mm);
 	mmput(mm);
 	if (test_thread_flag(TIF_MEMDIE))
 		exit_oom_victim();
diff --git a/kernel/fork.c b/kernel/fork.c
index 556cc64ae0c4..075688b2cae5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -570,13 +570,6 @@ static void mm_init_aio(struct mm_struct *mm)
 #endif
 }
 
-static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
-{
-#ifdef CONFIG_MEMCG
-	mm->owner = p;
-#endif
-}
-
 static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
 {
 	mm->mmap = NULL;
@@ -596,7 +589,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p)
 	spin_lock_init(&mm->page_table_lock);
 	mm_init_cpumask(mm);
 	mm_init_aio(mm);
-	mm_init_owner(mm, p);
 	mmu_notifier_mm_init(mm);
 	clear_tlb_flush_pending(mm);
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
@@ -702,6 +694,7 @@ void mmput(struct mm_struct *mm)
 		}
 		if (mm->binfmt)
 			module_put(mm->binfmt->module);
+		mm_drop_memcg(mm);
 		mmdrop(mm);
 	}
 }
@@ -925,6 +918,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk)
 	if (mm->binfmt && !try_module_get(mm->binfmt->module))
 		goto free_pt;
 
+	mm_set_memcg(mm, oldmm->memcg);
 	return mm;
 
 free_pt:
diff --git a/mm/debug.c b/mm/debug.c
index 3eb3ac2fcee7..d0347a168651 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -184,7 +184,7 @@ void dump_mm(const struct mm_struct *mm)
 		"ioctx_table %p\n"
 #endif
 #ifdef CONFIG_MEMCG
-		"owner %p "
+		"memcg %p "
 #endif
 		"exe_file %p\n"
 #ifdef CONFIG_MMU_NOTIFIER
@@ -218,7 +218,7 @@ void dump_mm(const struct mm_struct *mm)
 		mm->ioctx_table,
 #endif
 #ifdef CONFIG_MEMCG
-		mm->owner,
+		mm->memcg,
 #endif
 		mm->exe_file,
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4d905209f00f..950875eb7d89 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -469,6 +469,46 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
 	return mem_cgroup_from_css(css);
 }
 
+static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
+{
+	if (!p->mm)
+		return NULL;
+	return rcu_dereference(p->mm->memcg);
+}
+
+void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
+{
+	if (memcg)
+		css_get(&memcg->css);
+	rcu_assign_pointer(mm->memcg, memcg);
+}
+
+void mm_drop_memcg(struct mm_struct *mm)
+{
+	/*
+	 * This is the last reference to mm so nobody can see
+	 * this memcg
+	 */
+	if (mm->memcg)
+		css_put(&mm->memcg->css);
+}
+
+static void mm_move_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
+{
+	struct mem_cgroup *old_memcg;
+
+	mm_set_memcg(mm, memcg);
+
+	/*
+	 * wait for all current users of the old memcg before we
+	 * release the reference.
+	 */
+	old_memcg = mm->memcg;
+	synchronize_rcu();
+	if (old_memcg)
+		css_put(&old_memcg->css);
+}
+
 /* Writing them here to avoid exposing memcg's inner layout */
 #if defined(CONFIG_INET) && defined(CONFIG_MEMCG_KMEM)
 
@@ -953,19 +993,6 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
 	}
 }
 
-struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
-{
-	/*
-	 * mm_update_next_owner() may clear mm->owner to NULL
-	 * if it races with swapoff, page migration, etc.
-	 * So this can be called with p == NULL.
-	 */
-	if (unlikely(!p))
-		return NULL;
-
-	return mem_cgroup_from_css(task_css(p, memory_cgrp_id));
-}
-
 static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 {
 	struct mem_cgroup *memcg = NULL;
@@ -980,7 +1007,7 @@ static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 		if (unlikely(!mm))
 			memcg = root_mem_cgroup;
 		else {
-			memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
+			memcg = rcu_dereference(mm->memcg);
 			if (unlikely(!memcg))
 				memcg = root_mem_cgroup;
 		}
@@ -1157,7 +1184,7 @@ void __mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx)
 	struct mem_cgroup *memcg;
 
 	rcu_read_lock();
-	memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
+	memcg = rcu_dereference(mm->memcg);
 	if (unlikely(!memcg))
 		goto out;
 
@@ -2674,7 +2701,7 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
 }
 
 /*
- * We need to verify if the allocation against current->mm->owner's memcg is
+ * We need to verify if the allocation against current->mm->memcg is
  * possible for the given order. But the page is not allocated yet, so we'll
  * need a further commit step to do the final arrangements.
  *
@@ -4993,6 +5020,7 @@ static bool mc_move_charge(void)
 static void mem_cgroup_clear_mc(void)
 {
 	bool move_charge = mc_move_charge();
+	struct mem_cgroup *from;
 	/*
 	 * we must clear moving_task before waking up waiters at the end of
 	 * task migration.
@@ -5000,16 +5028,21 @@ static void mem_cgroup_clear_mc(void)
 	mc.moving_task = NULL;
 	if (move_charge)
 		__mem_cgroup_clear_mc();
+
 	spin_lock(&mc.lock);
+	from = mc.from;
 	mc.from = NULL;
 	mc.to = NULL;
 	spin_unlock(&mc.lock);
+
+	/* drops the reference from mem_cgroup_can_attach */
+	css_put(&from->css);
 }
 
 static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 				 struct cgroup_taskset *tset)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct mem_cgroup *to = mem_cgroup_from_css(css);
 	struct mem_cgroup *from;
 	struct task_struct *p;
 	struct mm_struct *mm;
@@ -5017,14 +5050,27 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 	int ret = 0;
 
 	p = cgroup_taskset_first(tset);
-	from = mem_cgroup_from_task(p);
-
-	VM_BUG_ON(from == memcg);
+	if (!thread_group_leader(p))
+		return 0;
 
 	mm = get_task_mm(p);
 	if (!mm)
 		return 0;
 
+	/*
+	 * tasks' cgroup might be different from the one p->mm is associated
+	 * with because CLONE_VM is allowed without CLONE_THREAD. The task is
+	 * moving so we have to migrate from the memcg associated with its
+	 * address space.
+	 * Keep the reference until the whole migration is done - until
+	 * mem_cgroup_clear_mc
+	 */
+	from = get_mem_cgroup_from_mm(mm);
+	if (from == to) {
+		css_put(&from->css);
+		goto out;
+	}
+
 	VM_BUG_ON(mc.from);
 	VM_BUG_ON(mc.to);
 	VM_BUG_ON(mc.precharge);
@@ -5033,7 +5079,7 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 
 	spin_lock(&mc.lock);
 	mc.from = from;
-	mc.to = memcg;
+	mc.to = to;
 	mc.flags = move_flags;
 	spin_unlock(&mc.lock);
 	/* We set mc.moving_task later */
@@ -5043,14 +5089,15 @@ static int mem_cgroup_can_attach(struct cgroup_subsys_state *css,
 	 * tunable will only affect upcoming migrations, not the current one.
 	 * So we need to save it, and keep it going.
 	 */
-	move_flags = READ_ONCE(memcg->move_charge_at_immigrate);
+	move_flags = READ_ONCE(to->move_charge_at_immigrate);
 
 	/* We move charges only when we move a owner of the mm */
-	if (move_flags && mm->owner == p) {
+	if (move_flags) {
 		ret = mem_cgroup_precharge_mc(mm);
 		if (ret)
 			mem_cgroup_clear_mc();
 	}
+out:
 	mmput(mm);
 	return ret;
 }
@@ -5204,6 +5251,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
 	struct mm_struct *mm = get_task_mm(p);
 
 	if (mm) {
+		/*
+		 * Commit to a new memcg. mc.to points to the destination
+		 * memcg even when the current charges are not moved.
+		 */
+		mm_move_memcg(mm, mc.to);
+
 		if (mc_move_charge())
 			mem_cgroup_move_charge(mm);
 		mmput(mm);
-- 
2.1.4



* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 11:50 ` [RFC 3/3] memcg: get rid of mm_struct::owner Michal Hocko
@ 2015-05-26 14:10   ` Johannes Weiner
  2015-05-26 15:11     ` Michal Hocko
  2015-05-28 21:07     ` Tejun Heo
  2015-05-26 16:36   ` Oleg Nesterov
  1 sibling, 2 replies; 20+ messages in thread
From: Johannes Weiner @ 2015-05-26 14:10 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> Please note that this patch introduces a USER VISIBLE CHANGE OF BEHAVIOR.
> Without mm->owner _all_ tasks associated with the mm_struct would
> initiate memcg migration while previously only owner of the mm_struct
> could do that. The original behavior was awkward though because the user
> task didn't have any means to find out the current owner (esp. after
> mm_update_next_owner) so the migration behavior was not well defined
> in general.
> New cgroup API (unified hierarchy) will discontinue tasks file which
> means that migrating threads will no longer be possible. In such a case
> having CLONE_VM without CLONE_THREAD could emulate the thread behavior
> but this patch prevents from isolating memcg controllers from others.
> Nevertheless I am not convinced such a use case would really deserve
> complications on the memcg code side.

I think such a change is okay.  The memcg semantics of moving threads
with the same mm into separate groups have always been arbitrary.  No
reasonable behavior can be expected of this, so what sane real life
use case would rely on it?

> @@ -104,7 +105,12 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>  	bool match = false;
>  
>  	rcu_read_lock();
> -	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> +	/*
> +	 * rcu_dereference would be better but mem_cgroup is not a complete
> +	 * type here
> +	 */
> +	task_memcg = READ_ONCE(mm->memcg);
> +	smp_read_barrier_depends();
>  	if (task_memcg)
>  		match = mem_cgroup_is_descendant(task_memcg, memcg);
>  	rcu_read_unlock();

This function has only one user in rmap.  If you inline it there, you
can use rcu_dereference() and get rid of the specialness & comment.

> @@ -195,6 +201,10 @@ void mem_cgroup_split_huge_fixup(struct page *head);
>  #else /* CONFIG_MEMCG */
>  struct mem_cgroup;
>  
> +void mm_drop_memcg(struct mm_struct *mm)
> +{}
> +void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> +{}

static inline?

> @@ -292,94 +292,6 @@ kill_orphaned_pgrp(struct task_struct *tsk, struct task_struct *parent)
>  	}
>  }
>  
> -#ifdef CONFIG_MEMCG
> -/*
> - * A task is exiting.   If it owned this mm, find a new owner for the mm.
> - */
> -void mm_update_next_owner(struct mm_struct *mm)
> -{
> -	struct task_struct *c, *g, *p = current;
> -
> -retry:
> -	/*
> -	 * If the exiting or execing task is not the owner, it's
> -	 * someone else's problem.
> -	 */
> -	if (mm->owner != p)
> -		return;
> -	/*
> -	 * The current owner is exiting/execing and there are no other
> -	 * candidates.  Do not leave the mm pointing to a possibly
> -	 * freed task structure.
> -	 */
> -	if (atomic_read(&mm->mm_users) <= 1) {
> -		mm->owner = NULL;
> -		return;
> -	}
> -
> -	read_lock(&tasklist_lock);
> -	/*
> -	 * Search in the children
> -	 */
> -	list_for_each_entry(c, &p->children, sibling) {
> -		if (c->mm == mm)
> -			goto assign_new_owner;
> -	}
> -
> -	/*
> -	 * Search in the siblings
> -	 */
> -	list_for_each_entry(c, &p->real_parent->children, sibling) {
> -		if (c->mm == mm)
> -			goto assign_new_owner;
> -	}
> -
> -	/*
> -	 * Search through everything else, we should not get here often.
> -	 */
> -	for_each_process(g) {
> -		if (g->flags & PF_KTHREAD)
> -			continue;
> -		for_each_thread(g, c) {
> -			if (c->mm == mm)
> -				goto assign_new_owner;
> -			if (c->mm)
> -				break;
> -		}
> -	}
> -	read_unlock(&tasklist_lock);
> -	/*
> -	 * We found no owner yet mm_users > 1: this implies that we are
> -	 * most likely racing with swapoff (try_to_unuse()) or /proc or
> -	 * ptrace or page migration (get_task_mm()).  Mark owner as NULL.
> -	 */
> -	mm->owner = NULL;
> -	return;
> -
> -assign_new_owner:
> -	BUG_ON(c == p);
> -	get_task_struct(c);
> -	/*
> -	 * The task_lock protects c->mm from changing.
> -	 * We always want mm->owner->mm == mm
> -	 */
> -	task_lock(c);
> -	/*
> -	 * Delay read_unlock() till we have the task_lock()
> -	 * to ensure that c does not slip away underneath us
> -	 */
> -	read_unlock(&tasklist_lock);
> -	if (c->mm != mm) {
> -		task_unlock(c);
> -		put_task_struct(c);
> -		goto retry;
> -	}
> -	mm->owner = c;
> -	task_unlock(c);
> -	put_task_struct(c);
> -}
> -#endif /* CONFIG_MEMCG */

w00t!

> @@ -469,6 +469,46 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
>  	return mem_cgroup_from_css(css);
>  }
>  
> +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> +{
> +	if (!p->mm)
> +		return NULL;
> +	return rcu_dereference(p->mm->memcg);
> +}
> +
> +void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> +{
> +	if (memcg)
> +		css_get(&memcg->css);
> +	rcu_assign_pointer(mm->memcg, memcg);
> +}
> +
> +void mm_drop_memcg(struct mm_struct *mm)
> +{
> +	/*
> +	 * This is the last reference to mm so nobody can see
> +	 * this memcg
> +	 */
> +	if (mm->memcg)
> +		css_put(&mm->memcg->css);
> +}

This is really simple and obvious and has only one caller; it would be
better to inline this into mmput().  The comment would also be easier
to understand in conjunction with the mmdrop() in the callsite:

	if (mm->memcg)
		css_put(&mm->memcg->css);
	/* We could reset mm->memcg, but this will free the mm: */
	mmdrop(mm);

The same goes for mm_set_memcg, there is no real need for obscuring a
simple get-and-store.

> +static void mm_move_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> +{
> +	struct mem_cgroup *old_memcg;
> +
> +	mm_set_memcg(mm, memcg);
> +
> +	/*
> +	 * wait for all current users of the old memcg before we
> +	 * release the reference.
> +	 */
> +	old_memcg = mm->memcg;
> +	synchronize_rcu();
> +	if (old_memcg)
> +		css_put(&old_memcg->css);
> +}

I'm not sure why we need that synchronize_rcu() in here; the css is
itself protected by RCU, and a failing tryget will prevent you from
taking it outside an RCU-locked region.

Aside from that, there is again exactly one place that performs this
operation.  Please inline it into mem_cgroup_move_task().

> @@ -5204,6 +5251,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
>  	struct mm_struct *mm = get_task_mm(p);
>  
>  	if (mm) {
> +		/*
> +		 * Commit to a new memcg. mc.to points to the destination
> +		 * memcg even when the current charges are not moved.
> +		 */
> +		mm_move_memcg(mm, mc.to);
> +
>  		if (mc_move_charge())
>  			mem_cgroup_move_charge(mm);
>  		mmput(mm);

It's a little weird to use mc.to when not moving charges, as "mc"
stands for "move charge".  Why not derive the destination from @css,
just like can_attach does?  It's a mere cast.  That also makes patch
#2 in your series unnecessary.

Otherwise, the patch looks great to me.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 14:10   ` Johannes Weiner
@ 2015-05-26 15:11     ` Michal Hocko
  2015-05-26 17:20       ` Johannes Weiner
  2015-05-28 21:07     ` Tejun Heo
  1 sibling, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 15:11 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML,
	Greg Thelen

[CCing Greg, whom I forgot to add to the list - sorry about that. The
thread starts here: http://marc.info/?l=linux-mm&m=143264102317318&w=2]

On Tue 26-05-15 10:10:11, Johannes Weiner wrote:
> On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> > Please note that this patch introduces a USER VISIBLE CHANGE OF BEHAVIOR.
> > Without mm->owner _all_ tasks associated with the mm_struct would
> > initiate memcg migration while previously only owner of the mm_struct
> > could do that. The original behavior was awkward though because the user
> > task didn't have any means to find out the current owner (esp. after
> > mm_update_next_owner) so the migration behavior was not well defined
> > in general.
> > New cgroup API (unified hierarchy) will discontinue tasks file which
> > means that migrating threads will no longer be possible. In such a case
> > having CLONE_VM without CLONE_THREAD could emulate the thread behavior
> > but this patch prevents from isolating memcg controllers from others.
> > Nevertheless I am not convinced such a use case would really deserve
> > complications on the memcg code side.
> 
> I think such a change is okay.  The memcg semantics of moving threads
> with the same mm into separate groups have always been arbitrary.  No
> reasonable behavior can be expected of this, so what sane real life
> usecase would rely on it?

I can imagine that threads would go to different cgroups because of
other controllers (e.g. cpu or cpuset).
AFAIR Google was distributing threads like that.

> > @@ -104,7 +105,12 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
> >  	bool match = false;
> >  
> >  	rcu_read_lock();
> > -	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> > +	/*
> > +	 * rcu_dereference would be better but mem_cgroup is not a complete
> > +	 * type here
> > +	 */
> > +	task_memcg = READ_ONCE(mm->memcg);
> > +	smp_read_barrier_depends();
> >  	if (task_memcg)
> >  		match = mem_cgroup_is_descendant(task_memcg, memcg);
> >  	rcu_read_unlock();
> 
> This function has only one user in rmap.  If you inline it there, you
> can use rcu_dereference() and get rid of the specialness & comment.

I am not sure I understand. struct mem_cgroup is defined in
mm/memcontrol.c so mm/rmap.c will not see it. Or do you suggest pulling
struct mem_cgroup out into a header with all the dependencies?

> > @@ -195,6 +201,10 @@ void mem_cgroup_split_huge_fixup(struct page *head);
> >  #else /* CONFIG_MEMCG */
> >  struct mem_cgroup;
> >  
> > +void mm_drop_memcg(struct mm_struct *mm)
> > +{}
> > +void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> > +{}
> 
> static inline?

Of course. Fixed.
 
[...]
> > @@ -469,6 +469,46 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
> >  	return mem_cgroup_from_css(css);
> >  }
> >  
> > +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> > +{
> > +	if (!p->mm)
> > +		return NULL;
> > +	return rcu_dereference(p->mm->memcg);
> > +}
> > +
> > +void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> > +{
> > +	if (memcg)
> > +		css_get(&memcg->css);
> > +	rcu_assign_pointer(mm->memcg, memcg);
> > +}
> > +
> > +void mm_drop_memcg(struct mm_struct *mm)
> > +{
> > +	/*
> > +	 * This is the last reference to mm so nobody can see
> > +	 * this memcg
> > +	 */
> > +	if (mm->memcg)
> > +		css_put(&mm->memcg->css);
> > +}
> 
> This is really simple and obvious and has only one caller, it would be
> better to inline this into mmput().  The comment would also be easier
> to understand in conjunction with the mmdrop() in the callsite:

Same case as rmap.c.

> 
> 	if (mm->memcg)
> 		css_put(&mm->memcg->css);
> 	/* We could reset mm->memcg, but this will free the mm: */
> 	mmdrop(mm);

I like your comment more. I will update it.

> 
> The same goes for mm_set_memcg, there is no real need for obscuring a
> simple get-and-store.
> 
> > +static void mm_move_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> > +{
> > +	struct mem_cgroup *old_memcg;
> > +
> > +	mm_set_memcg(mm, memcg);
> > +
> > +	/*
> > +	 * wait for all current users of the old memcg before we
> > +	 * release the reference.
> > +	 */
> > +	old_memcg = mm->memcg;

Doh. Last-minute changes... This is incorrect, of course, because I am
dropping the new memcg's reference. Fixed.

> > +	synchronize_rcu();
> > +	if (old_memcg)
> > +		css_put(&old_memcg->css);
> > +}
> 
> I'm not sure why we need that synchronize_rcu() in here, the css is
> itself protected by RCU and a failing tryget will prevent you from
> taking it outside a RCU-locked region.

Yeah, you are right. Removed.

> Aside from that, there is again exactly one place that performs this
> operation.  Please inline it into mem_cgroup_move_task().

OK, I will inline it there.

> > @@ -5204,6 +5251,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
> >  	struct mm_struct *mm = get_task_mm(p);
> >  
> >  	if (mm) {
> > +		/*
> > +		 * Commit to a new memcg. mc.to points to the destination
> > +		 * memcg even when the current charges are not moved.
> > +		 */
> > +		mm_move_memcg(mm, mc.to);
> > +
> >  		if (mc_move_charge())
> >  			mem_cgroup_move_charge(mm);
> >  		mmput(mm);
> 
> It's a little weird to use mc.to when not moving charges, as "mc"
> stands for "move charge".  Why not derive the destination from @css,
> just like can_attach does?  It's a mere cast.  That also makes patch
> #2 in your series unnecessary.

Good idea!

> Otherwise, the patch looks great to me.

Thanks for the review. Changes based on your feedback:
---
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 315ec1e58acb..50cf88c0249d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -201,10 +201,12 @@ void mem_cgroup_split_huge_fixup(struct page *head);
 #else /* CONFIG_MEMCG */
 struct mem_cgroup;
 
-void mm_drop_memcg(struct mm_struct *mm)
-{}
-void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
-{}
+static inline void mm_drop_memcg(struct mm_struct *mm)
+{
+}
+static inline void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
+{
+}
 static inline void mem_cgroup_events(struct mem_cgroup *memcg,
 				     enum mem_cgroup_events_index idx,
 				     unsigned int nr)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 950875eb7d89..2c5c336aca6e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -486,29 +486,13 @@ void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
 void mm_drop_memcg(struct mm_struct *mm)
 {
 	/*
-	 * This is the last reference to mm so nobody can see
-	 * this memcg
+	 * We could reset mm->memcg, but the mm goes away as this is the
+	 * last reference.
 	 */
 	if (mm->memcg)
 		css_put(&mm->memcg->css);
 }
 
-static void mm_move_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
-{
-	struct mem_cgroup *old_memcg;
-
-	mm_set_memcg(mm, memcg);
-
-	/*
-	 * wait for all current users of the old memcg before we
-	 * release the reference.
-	 */
-	old_memcg = mm->memcg;
-	synchronize_rcu();
-	if (old_memcg)
-		css_put(&old_memcg->css);
-}
-
 /* Writing them here to avoid exposing memcg's inner layout */
 #if defined(CONFIG_INET) && defined(CONFIG_MEMCG_KMEM)
 
@@ -5252,10 +5236,15 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
 
 	if (mm) {
 		/*
-		 * Commit to a new memcg. mc.to points to the destination
-		 * memcg even when the current charges are not moved.
+		 * Commit to the target memcg even when we do not move
+		 * charges.
 		 */
-		mm_move_memcg(mm, mc.to);
+		struct mem_cgroup *old_memcg = READ_ONCE(mm->memcg);
+		struct mem_cgroup *new_memcg = mem_cgroup_from_css(css);
+
+		mm_set_memcg(mm, new_memcg);
+		if (old_memcg)
+			css_put(&old_memcg->css);
 
 		if (mc_move_charge())
 			mem_cgroup_move_charge(mm);
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 11:50 ` [RFC 3/3] memcg: get rid of mm_struct::owner Michal Hocko
  2015-05-26 14:10   ` Johannes Weiner
@ 2015-05-26 16:36   ` Oleg Nesterov
  2015-05-26 17:22     ` Michal Hocko
  1 sibling, 1 reply; 20+ messages in thread
From: Oleg Nesterov @ 2015-05-26 16:36 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Johannes Weiner, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On 05/26, Michal Hocko wrote:
>
> @@ -426,17 +426,7 @@ struct mm_struct {
>  	struct kioctx_table __rcu	*ioctx_table;
>  #endif
>  #ifdef CONFIG_MEMCG
> -	/*
> -	 * "owner" points to a task that is regarded as the canonical
> -	 * user/owner of this mm. All of the following must be true in
> -	 * order for it to be changed:
> -	 *
> -	 * current == mm->owner
> -	 * current->mm != mm
> -	 * new_owner->mm == mm
> -	 * new_owner->alloc_lock is held
> -	 */
> -	struct task_struct __rcu *owner;
> +	struct mem_cgroup __rcu *memcg;

Yes, thanks, this is what I tried to suggest ;)

But I can't review this series. Simply because I know nothing about
memcs. I don't even know how to use it.

Just one question,

> +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> +{
> +	if (!p->mm)
> +		return NULL;
> +	return rcu_dereference(p->mm->memcg);
> +}

Probably I missed something, but it seems that the callers do not
expect it can return NULL. Perhaps sock_update_memcg() is fine, but
task_in_mem_cgroup() calls it when find_lock_task_mm() fails, and in
this case ->mm is NULL.

And in fact I can't understand what mem_cgroup_from_task() actually
means, with or without these changes.

And another question. I can't understand what happens when a task
execs... IOW, could you confirm that exec_mmap() does not need
mm_set_memcg(mm, oldmm->memcg) ?

Oleg.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 15:11     ` Michal Hocko
@ 2015-05-26 17:20       ` Johannes Weiner
  2015-05-27 14:48         ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Johannes Weiner @ 2015-05-26 17:20 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML,
	Greg Thelen

On Tue, May 26, 2015 at 05:11:49PM +0200, Michal Hocko wrote:
> On Tue 26-05-15 10:10:11, Johannes Weiner wrote:
> > On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> > > @@ -104,7 +105,12 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
> > >  	bool match = false;
> > >  
> > >  	rcu_read_lock();
> > > -	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> > > +	/*
> > > +	 * rcu_dereference would be better but mem_cgroup is not a complete
> > > +	 * type here
> > > +	 */
> > > +	task_memcg = READ_ONCE(mm->memcg);
> > > +	smp_read_barrier_depends();
> > >  	if (task_memcg)
> > >  		match = mem_cgroup_is_descendant(task_memcg, memcg);
> > >  	rcu_read_unlock();
> > 
> > This function has only one user in rmap.  If you inline it there, you
> > can use rcu_dereference() and get rid of the specialness & comment.
> 
> I am not sure I understand. struct mem_cgroup is defined in
> mm/memcontrol.c so mm/rmap.c will not see it. Or do you suggest pulling
> struct mem_cgroup out into a header with all the dependencies?

Yes, I think that would be preferable.  It's weird that we have such
a major data structure that is used all over the mm-code but only in
the shape of pointers to an incomplete type.  It forces a bad style of
code that uses uninlinable callbacks and accessors for even the most
basic things.  There are a few functions in memcontrol.c that could
instead be static inlines or should even be implemented as part of the
code that is using them, such as mem_cgroup_get_lru_size(),
mem_cgroup_is_descendant, mem_cgroup_inactive_anon_is_low(),
mem_cgroup_lruvec_online(), mem_cgroup_swappiness(),
mem_cgroup_select_victim_node(), mem_cgroup_update_page_stat(), and
mem_cgroup_events().  Your new functions fall into the same category.

> @@ -486,29 +486,13 @@ void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
>  void mm_drop_memcg(struct mm_struct *mm)
>  {
>  	/*
> -	 * This is the last reference to mm so nobody can see
> -	 * this memcg
> +	 * We could reset mm->memcg, but the mm goes away as this is the
> +	 * last reference.
>  	 */
>  	if (mm->memcg)
>  		css_put(&mm->memcg->css);
>  }

This function is supposed to be an API call to disassociate a mm from
its memcg, but it actually doesn't do that and will leave a dangling
pointer based on assumptions it makes about how and when the caller
invokes it.  That's bad.  It's a subtle optimization with dependencies
spread across two moving parts.  The result is very fragile code which
will break things in non-obvious ways when the caller changes later on.

And what's left standing is silly too: a memcg-specific API to call
css_put(), even though struct cgroup_subsys_state and css_put() are
public API already.

Both these things are a negative side effect of struct mem_cgroup
being semi-private.  Memcg pointers are everywhere, yet we need a
public interface indirection for every simple dereference.

> @@ -5252,10 +5236,15 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
>  
>  	if (mm) {
>  		/*
> -		 * Commit to a new memcg. mc.to points to the destination
> -		 * memcg even when the current charges are not moved.
> +		 * Commit to the target memcg even when we do not move
> +		 * charges.
>  		 */
> -		mm_move_memcg(mm, mc.to);
> +		struct mem_cgroup *old_memcg = READ_ONCE(mm->memcg);
> +		struct mem_cgroup *new_memcg = mem_cgroup_from_css(css);
> +
> +		mm_set_memcg(mm, new_memcg);
> +		if (old_memcg)
> +			css_put(&old_memcg->css);

"Commit" is a problematic choice of words because of its existing
meaning in memcg of associating a page with a pre-reserved charge.

I'm not sure a comment is actually necessary here.  Reassigning
mm->memcg when moving a process is pretty straightforward IMO.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 16:36   ` Oleg Nesterov
@ 2015-05-26 17:22     ` Michal Hocko
  2015-05-26 17:38       ` Oleg Nesterov
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-05-26 17:22 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-mm, Johannes Weiner, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Tue 26-05-15 18:36:46, Oleg Nesterov wrote:
> On 05/26, Michal Hocko wrote:
> >
> > @@ -426,17 +426,7 @@ struct mm_struct {
> >  	struct kioctx_table __rcu	*ioctx_table;
> >  #endif
> >  #ifdef CONFIG_MEMCG
> > -	/*
> > -	 * "owner" points to a task that is regarded as the canonical
> > -	 * user/owner of this mm. All of the following must be true in
> > -	 * order for it to be changed:
> > -	 *
> > -	 * current == mm->owner
> > -	 * current->mm != mm
> > -	 * new_owner->mm == mm
> > -	 * new_owner->alloc_lock is held
> > -	 */
> > -	struct task_struct __rcu *owner;
> > +	struct mem_cgroup __rcu *memcg;
> 
> Yes, thanks, this is what I tried to suggest ;)
> 
> But I can't review this series. Simply because I know nothing about
> memcs. I don't even know how to use it.
> 
> Just one question,
> 
> > +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> > +{
> > +	if (!p->mm)
> > +		return NULL;
> > +	return rcu_dereference(p->mm->memcg);
> > +}
> 
> Probably I missed something, but it seems that the callers do not
> expect it can return NULL.

This hasn't been changed by this patch. mem_cgroup_from_task was allowed
to return NULL even before. I've just made it static because it doesn't
have any external users anymore. I will double check whether we can ever
get NULL there in real life. We have had this code like that for quite
some time. Maybe this is just a relic of the past...

> Perhaps sock_update_memcg() is fine, but
> task_in_mem_cgroup() calls it when find_lock_task_mm() fails, and in
> this case ->mm is NULL.
> 
> And in fact I can't understand what mem_cgroup_from_task() actually
> means, with or without these changes.

It performs task_struct->mem_cgroup mapping. We cannot use cgroup
mapping here because the charges are bound to mm_struct rather than
task.

> And another question. I can't understand what happens when a task
> execs... IOW, could you confirm that exec_mmap() does not need
> mm_set_memcg(mm, oldmm->memcg) ?

Right you are! Fixed thanks!
---
diff --git a/fs/exec.c b/fs/exec.c
index 2cd4def4b1d6..ea00d5a47aad 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -867,6 +867,7 @@ static int exec_mmap(struct mm_struct *mm)
 		up_read(&old_mm->mmap_sem);
 		BUG_ON(active_mm != old_mm);
 		setmax_mm_hiwater_rss(&tsk->signal->maxrss, old_mm);
+		mm_set_memcg(mm, old_mm->memcg);
 		mmput(old_mm);
 		return 0;
 	}
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 17:22     ` Michal Hocko
@ 2015-05-26 17:38       ` Oleg Nesterov
  2015-05-27  9:43         ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Oleg Nesterov @ 2015-05-26 17:38 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Johannes Weiner, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On 05/26, Michal Hocko wrote:
>
> On Tue 26-05-15 18:36:46, Oleg Nesterov wrote:
> >
> > > +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> > > +{
> > > +	if (!p->mm)
> > > +		return NULL;
> > > +	return rcu_dereference(p->mm->memcg);
> > > +}
> >
> > Probably I missed something, but it seems that the callers do not
> > expect it can return NULL.
>
> This hasn't changed by this patch. mem_cgroup_from_task was allowed to
> return NULL even before. I've just made it static because it doesn't
> have any external users anymore.

I see, but it could only return NULL if mem_cgroup_from_css() returns
NULL. Now it returns NULL for sure if the caller is task_in_mem_cgroup(),

	// called when task->mm == NULL

	task_memcg = mem_cgroup_from_task(task);
	css_get(&task_memcg->css);

and this css_get() doesn't look nice if task_memcg == NULL ;)

> I will double check

Yes, please. Perhaps I missed something.

> > And in fact I can't understand what mem_cgroup_from_task() actually
> > means, with or without these changes.
>
> It performs task_struct->mem_cgroup mapping. We cannot use cgroup
> mapping here because the charges are bound to mm_struct rather than
> task.

Sure, this is what I can understand. I meant... OK, let's ignore
"without these changes", because without these changes there are
much more oddities ;) With these changes only ->mm == NULL case
looks unclear.

And btw,

	if (!p->mm)
		return NULL;
	return rcu_dereference(p->mm->memcg);

perhaps this needs a comment. It is not clear what protects ->mm.
But. After this series "p" is always current (if ->mm != NULL), so
this is fine.

Nevermind. Please forget. I feel this needs a bit of cleanup, but
we can always do this later.

Oleg.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 17:38       ` Oleg Nesterov
@ 2015-05-27  9:43         ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-27  9:43 UTC (permalink / raw)
  To: Oleg Nesterov
  Cc: linux-mm, Johannes Weiner, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Tue 26-05-15 19:38:22, Oleg Nesterov wrote:
> On 05/26, Michal Hocko wrote:
> >
> > On Tue 26-05-15 18:36:46, Oleg Nesterov wrote:
> > >
> > > > +static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> > > > +{
> > > > +	if (!p->mm)
> > > > +		return NULL;
> > > > +	return rcu_dereference(p->mm->memcg);
> > > > +}
> > >
> > > Probably I missed something, but it seems that the callers do not
> > > expect it can return NULL.
> >
> > This hasn't changed by this patch. mem_cgroup_from_task was allowed to
> > return NULL even before. I've just made it static because it doesn't
> > have any external users anymore.
> 
> I see, but it could only return NULL if mem_cgroup_from_css() returns
> NULL. Now it returns NULL for sure if the caller is task_in_mem_cgroup(),
> 
> 	// called when task->mm == NULL
> 
> 	task_memcg = mem_cgroup_from_task(task);
> 	css_get(&task_memcg->css);
> 
> and this css_get() doesn't look nice if task_memcg == NULL ;)

You are right of course. mem_cgroup_from_task is indeed weird. I will
add the diff below to the original patch and try to get rid of this
weird interface in a follow-up patch.

> > I will double check
> 
> Yes, please. Perhaps I missed something.
> 
> > > And in fact I can't understand what mem_cgroup_from_task() actually
> > > means, with or without these changes.
> >
> > It performs task_struct->mem_cgroup mapping. We cannot use cgroup
> > mapping here because the charges are bound to mm_struct rather than
> > task.
> 
> Sure, this is what I can understand. I meant... OK, lets ignore
> "without these changes", because without these changes there are
> much more oddities ;) With these changes only ->mm == NULL case
> looks unclear.
> 
> And btw,
> 
> 	if (!p->mm)
> 		return NULL;
> 	return rcu_dereference(p->mm->memcg);
> 
> perhaps this needs a comment. It is not clear what protects ->mm.
> But. After this series "p" is always current (if ->mm != NULL), so
> this is fine.
> 
> Nevermind. Please forget. I feel this needs a bit of cleanup, but
> we can always do this later.

Yes, I would rather do that in a separate patch. Thanks!

This will go into the patch because I have indeed changed the semantics
of this function without realizing the subtle difference.
---
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index aa85d5dfbe0e..ab00b6ae84e2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -471,9 +471,14 @@ static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
 
 static struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
 {
-	if (!p->mm)
-		return NULL;
-	return rcu_dereference(p->mm->memcg);
+	if (p->mm)
+		return rcu_dereference(p->mm->memcg);
+
+	/*
+	 * If the process doesn't have an mm struct anymore we have to
+	 * fall back to the task_css.
+	 */
+	return mem_cgroup_from_css(task_css(p, memory_cgrp_id));
 }
 
 void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 17:20       ` Johannes Weiner
@ 2015-05-27 14:48         ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-27 14:48 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, Oleg Nesterov, Tejun Heo, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML,
	Greg Thelen

On Tue 26-05-15 13:20:19, Johannes Weiner wrote:
> On Tue, May 26, 2015 at 05:11:49PM +0200, Michal Hocko wrote:
> > On Tue 26-05-15 10:10:11, Johannes Weiner wrote:
> > > On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> > > > @@ -104,7 +105,12 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
> > > >  	bool match = false;
> > > >  
> > > >  	rcu_read_lock();
> > > > -	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> > > > +	/*
> > > > +	 * rcu_dereference would be better but mem_cgroup is not a complete
> > > > +	 * type here
> > > > +	 */
> > > > +	task_memcg = READ_ONCE(mm->memcg);
> > > > +	smp_read_barrier_depends();
> > > >  	if (task_memcg)
> > > >  		match = mem_cgroup_is_descendant(task_memcg, memcg);
> > > >  	rcu_read_unlock();
> > > 
> > > This function has only one user in rmap.  If you inline it there, you
> > > can use rcu_dereference() and get rid of the specialness & comment.
> > 
> > I am not sure I understand. struct mem_cgroup is defined in
> > mm/memcontrol.c so mm/rmap.c will not see it. Or do you suggest pulling
> > struct mem_cgroup out into a header with all the dependencies?
> 
> Yes, I think that would be preferable.  It's weird that we have such
> a major data structure that is used all over the mm-code but only in
> the shape of pointers to an incomplete type.  It forces a bad style of
> code that uses uninlinable callbacks and accessors for even the most
> basic things.  There are a few functions in memcontrol.c that could
> instead be static inlines or should even be implemented as part of the
> code that is using them, such as

Fair enough. I was afraid of dependencies between networking and memcg
header files, but it seems that only struct cg_proto is really needed by
the tcp kmem controller, and that one doesn't depend on any
socket-specific stuff. So we are good here.

> mem_cgroup_get_lru_size(),
> mem_cgroup_is_descendant, mem_cgroup_inactive_anon_is_low(),
> mem_cgroup_lruvec_online(), mem_cgroup_swappiness(),
> mem_cgroup_select_victim_node(), mem_cgroup_update_page_stat(), and
> mem_cgroup_events().  Your new functions fall into the same category.

Let me try it and see how this ends up. Hopefully the code will not
grow too much.

> > @@ -486,29 +486,13 @@ void mm_set_memcg(struct mm_struct *mm, struct mem_cgroup *memcg)
> >  void mm_drop_memcg(struct mm_struct *mm)
> >  {
> >  	/*
> > -	 * This is the last reference to mm so nobody can see
> > -	 * this memcg
> > +	 * We could reset mm->memcg, but the mm goes away as this is the
> > +	 * last reference.
> >  	 */
> >  	if (mm->memcg)
> >  		css_put(&mm->memcg->css);
> >  }
> 
> This function is supposed to be an API call to disassociate a mm from
> its memcg, but it actually doesn't do that and will leave a dangling
> pointer based on assumptions it makes about how and when the caller
> invokes it.  That's bad.  It's a subtle optimization with dependencies
> spread across two moving parts.  The result is very fragile code which
> will break things in non-obvious ways when the caller changes later on.

Fair point. The optimization is not really worth it, so I will add
explicit NULLing. I would still prefer to keep the function, as well as
mm_set_memcg, because that is easier to track; mm_set_memcg in
particular needs to be called from two places (as pointed out by Oleg)
and I would really like to prevent duplication.

> And what's left standing is silly too: a memcg-specific API to call
> css_put(), even though struct cgroup_subsys_state and css_put() are
> public API already.
> 
> Both these things are a negative side effect of struct mem_cgroup
> being semi-private.  Memcg pointers are everywhere, yet we need a
> public interface indirection for every simple dereference.
> 
> > @@ -5252,10 +5236,15 @@ static void mem_cgroup_move_task(struct cgroup_subsys_state *css,
> >  
> >  	if (mm) {
> >  		/*
> > -		 * Commit to a new memcg. mc.to points to the destination
> > -		 * memcg even when the current charges are not moved.
> > +		 * Commit to the target memcg even when we do not move
> > +		 * charges.
> >  		 */
> > -		mm_move_memcg(mm, mc.to);
> > +		struct mem_cgroup *old_memcg = READ_ONCE(mm->memcg);
> > +		struct mem_cgroup *new_memcg = mem_cgroup_from_css(css);
> > +
> > +		mm_set_memcg(mm, new_memcg);
> > +		if (old_memcg)
> > +			css_put(&old_memcg->css);
> 
> "Commit" is a problematic choice of words because of its existing
> meaning in memcg of associating a page with a pre-reserved charge.
> 
> I'm not sure a comment is actually necessary here.  Reassigning
> mm->memcg when moving a process is pretty straightforward IMO.

OK, will remove it.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-26 14:10   ` Johannes Weiner
  2015-05-26 15:11     ` Michal Hocko
@ 2015-05-28 21:07     ` Tejun Heo
  2015-05-29 12:08       ` Michal Hocko
  1 sibling, 1 reply; 20+ messages in thread
From: Tejun Heo @ 2015-05-28 21:07 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

Hello, Johannes, Michal.

On Tue, May 26, 2015 at 10:10:11AM -0400, Johannes Weiner wrote:
> On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> > Please note that this patch introduces a USER VISIBLE CHANGE OF BEHAVIOR.
> > Without mm->owner _all_ tasks associated with the mm_struct would
> > initiate memcg migration while previously only owner of the mm_struct
> > could do that. The original behavior was awkward though because the user
> > task didn't have any means to find out the current owner (esp. after
> > mm_update_next_owner) so the migration behavior was not well defined
> > in general.
> > New cgroup API (unified hierarchy) will discontinue tasks file which
> > means that migrating threads will no longer be possible. In such a case
> > having CLONE_VM without CLONE_THREAD could emulate the thread behavior
> > but this patch prevents from isolating memcg controllers from others.
> > Nevertheless I am not convinced such a use case would really deserve
> > complications on the memcg code side.
> 
> I think such a change is okay.  The memcg semantics of moving threads
> with the same mm into separate groups have always been arbitrary.  No
> reasonable behavior can be expected of this, so what sane real life
> usecase would rely on it?

I suppose that making mm always follow the threadgroup leader should
be fine, right?  While this wouldn't make any difference in the
unified hierarchy, I think this would make more sense for traditional
hierarchies.

Thanks.

-- 
tejun


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-28 21:07     ` Tejun Heo
@ 2015-05-29 12:08       ` Michal Hocko
  2015-05-29 13:10         ` Tejun Heo
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-05-29 12:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Thu 28-05-15 17:07:42, Tejun Heo wrote:
> Hello, Johannes, Michal.
> 
> On Tue, May 26, 2015 at 10:10:11AM -0400, Johannes Weiner wrote:
> > On Tue, May 26, 2015 at 01:50:06PM +0200, Michal Hocko wrote:
> > > Please note that this patch introduces a USER VISIBLE CHANGE OF BEHAVIOR.
> > > Without mm->owner _all_ tasks associated with the mm_struct would
> > > initiate memcg migration while previously only owner of the mm_struct
> > > could do that. The original behavior was awkward though because the user
> > > task didn't have any means to find out the current owner (esp. after
> > > mm_update_next_owner) so the migration behavior was not well defined
> > > in general.
> > > New cgroup API (unified hierarchy) will discontinue tasks file which
> > > means that migrating threads will no longer be possible. In such a case
> > > having CLONE_VM without CLONE_THREAD could emulate the thread behavior
> > > but this patch prevents from isolating memcg controllers from others.
> > > Nevertheless I am not convinced such a use case would really deserve
> > > complications on the memcg code side.
> > 
> > I think such a change is okay.  The memcg semantics of moving threads
> > with the same mm into separate groups have always been arbitrary.  No
> > reasonable behavior can be expected of this, so what sane real life
> > usecase would rely on it?
> 
> I suppose that making mm always follow the threadgroup leader should
> be fine, right? 

That is the plan.

> While this wouldn't make any difference in the unified hierarchy,

Just to make sure I understand: "wouldn't make any difference" because
the API is not backward compatible, right?

> I think this would make more sense for traditional hierarchies.

Yes I believe so.
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 12:08       ` Michal Hocko
@ 2015-05-29 13:10         ` Tejun Heo
  2015-05-29 13:45           ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Tejun Heo @ 2015-05-29 13:10 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Fri, May 29, 2015 at 02:08:38PM +0200, Michal Hocko wrote:
> > I suppose that making mm always follow the threadgroup leader should
> > be fine, right? 
> 
> That is the plan.

Cool.

> > While this wouldn't make any difference in the unified hierarchy,
> 
> Just to make sure I understand. "wouldn't make any difference" because
> the API is not backward compatible right?

Hmm... because it's always per-process.  If any thread is going, the
whole process is going together.

> > I think this would make more sense for traditional hierarchies.
> 
> Yes I believe so.

Thanks.

-- 
tejun


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 13:10         ` Tejun Heo
@ 2015-05-29 13:45           ` Michal Hocko
  2015-05-29 14:07             ` Tejun Heo
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-05-29 13:45 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Fri 29-05-15 09:10:55, Tejun Heo wrote:
> On Fri, May 29, 2015 at 02:08:38PM +0200, Michal Hocko wrote:
> > > I suppose that making mm always follow the threadgroup leader should
> > > be fine, right? 
> > 
> > That is the plan.
> 
> Cool.
> 
> > > While this wouldn't make any difference in the unified hierarchy,
> > 
> > Just to make sure I understand. "wouldn't make any difference" because
> > the API is not backward compatible right?
> 
> Hmm... because it's always per-process.  If any thread is going, the
> whole process is going together.

Sure, but we are talking about processes here. They just happen to share
an mm. And this is exactly the behavior change I am talking about... With
the owner you could emulate "threads"; with this patch you cannot
anymore. IMO we shouldn't allow that, but just reading the original
commit message (cf475ad28ac35), which added mm->owner:
"
It also allows several control groups that are virtually grouped by
mm_struct, to exist independent of the memory controller i.e., without
adding mem_cgroup's for each controller, to mm_struct.
"
suggests it might have been intentional. That being said, I think it was
a mistake back at the time and we should move on to a saner model. But I
also believe we should be really vocal when the user visible behavior
changes. If somebody really asks for the previous behavior I would
insist on a _strong_ usecase.
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 13:45           ` Michal Hocko
@ 2015-05-29 14:07             ` Tejun Heo
  2015-05-29 14:57               ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Tejun Heo @ 2015-05-29 14:07 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

Hello,

On Fri, May 29, 2015 at 03:45:53PM +0200, Michal Hocko wrote:
> Sure but we are talking about processes here. They just happen to share
> mm. And this is exactly the behavior change I am talking about... With

Are we talking about CLONE_VM w/o CLONE_THREAD?  ie. two threadgroups
sharing the same VM?

> the owner you could emulate "threads" with this patch you cannot
> anymore. IMO we shouldn't allow for that but just reading the original
> commit message (cf475ad28ac35) which has added mm->owner:
> "
> It also allows several control groups that are virtually grouped by
> mm_struct, to exist independent of the memory controller i.e., without
> adding mem_cgroup's for each controller, to mm_struct.
> "
> suggests it might have been intentional. That being said, I think it was

I think he's talking about implementing different controllers which might
want to add their own css pointer to mm_struct; they wouldn't need to,
as the mm is tagged with the owning task, from which membership in all
controllers can be derived.  I don't think that's something we need to
worry about.  We haven't seen even a suggestion for such a controller,
and even if that happens we'd be better off adding a separate field
for the new controller.

> a mistake back at the time and we should move on to a saner model. But I
> also believe we should be really vocal when the user visible behavior
> changes. If somebody really asks for the previous behavior I would
> insist on a _strong_ usecase.

I'm a bit lost on what clearly defined behavior is actually changing.  It's
not like userland had firm control over mm->owner.  It was already a
crapshoot, no?

Thanks.

-- 
tejun


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 14:07             ` Tejun Heo
@ 2015-05-29 14:57               ` Michal Hocko
  2015-05-29 15:23                 ` Tejun Heo
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-05-29 14:57 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Fri 29-05-15 10:07:37, Tejun Heo wrote:
> Hello,
> 
> On Fri, May 29, 2015 at 03:45:53PM +0200, Michal Hocko wrote:
> > Sure but we are talking about processes here. They just happen to share
> > mm. And this is exactly the behavior change I am talking about... With
> 
> Are we talking about CLONE_VM w/o CLONE_THREAD?  ie. two threadgroups
> sharing the same VM?

yes.

> > the owner you could emulate "threads" with this patch you cannot
> > anymore. IMO we shouldn't allow for that but just reading the original
> > commit message (cf475ad28ac35) which has added mm->owner:
> > "
> > It also allows several control groups that are virtually grouped by
> > mm_struct, to exist independent of the memory controller i.e., without
> > adding mem_cgroup's for each controller, to mm_struct.
> > "
> > suggests it might have been intentional. That being said, I think it was
> 
> I think he's talking about implementing different controllers which may
> want to add their own css pointer in mm_struct now wouldn't need to as
> the mm is tagged with the owning task from which membership of all
> controllers can be derived.  I don't think that's something we need to
> worry about.  We haven't seen even a suggestion for such a controller
> and even if that happens we'd be better off adding a separate field
> for the new controller.

Maybe I've just misunderstood. My understanding was that tasks sharing
the mm could live in different cgroups while the memory would be bound
by a shared memcg.

> > a mistake back at the time and we should move on to a saner model. But I
> > also believe we should be really vocal when the user visible behavior
> > changes. If somebody really asks for the previous behavior I would
> > insist on a _strong_ usecase.
> 
> I'm a bit lost on what clearly defined behavior is actually changing.  It's
> not like userland had firm control over mm->owner.  It was already a
> crapshoot, no?

OK so you create a task A (leader) which clones several tasks Pn with
CLONE_VM without CLONE_THREAD. Moving A around would control memcg
membership while Pn could be moved around freely to control membership
in other controllers (e.g. cpu to control shares). So it is something
like moving threads separately.
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 14:57               ` Michal Hocko
@ 2015-05-29 15:23                 ` Tejun Heo
  2015-05-29 15:26                   ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Tejun Heo @ 2015-05-29 15:23 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

Hello,

On Fri, May 29, 2015 at 04:57:39PM +0200, Michal Hocko wrote:
> > > "
> > > It also allows several control groups that are virtually grouped by
> > > mm_struct, to exist independent of the memory controller i.e., without
> > > adding mem_cgroup's for each controller, to mm_struct.
> > > "
> > > suggests it might have been intentional. That being said, I think it was
> > 
> > I think he's talking about implementing different controllers which may
> > want to add their own css pointer in mm_struct now wouldn't need to as
> > the mm is tagged with the owning task from which membership of all
> > controllers can be derived.  I don't think that's something we need to
> > worry about.  We haven't seen even a suggestion for such a controller
> > and even if that happens we'd be better off adding a separate field
> > for the new controller.
> 
> Maybe I've just misunderstood. My understandig was that tasks sharing
> the mm could live in different cgroups while the memory would be bound
> by a shared memcg.

Hmm.... it specifically goes into explaining that it's about having
different controllers sharing the owner field.

 "i.e., without adding mem_cgroup's for each controller, to mm_struct."

It seems fairly clear to me.

> > I'm a bit lost on what clearly defined behavior is actually changing.  It's
> > not like userland had firm control over mm->owner.  It was already a
> > crapshoot, no?
> 
> OK so you create a task A (leader) which clones several tasks Pn with
> CLONE_VM without CLONE_THREAD. Moving A around would control memcg
> membership while Pn could be moved around freely to control membership
> in other controllers (e.g. cpu to control shares). So it is something
> like moving threads separately.

Sure, it'd behave predictably in certain cases, but then again you'd have
cases where how mm->owner changes isn't clear at all when seen from
userland.  E.g. when the original owner goes away, the assignment
of the next owner is essentially arbitrary.  That's what I meant by
saying it was already a crapshoot.  We should definitely document the
change but this isn't likely to be an issue.  CLONE_VM &&
!CLONE_THREAD is an extreme corner case to begin with and even the
behavior there wasn't all that clearly defined.

Thanks.

-- 
tejun


* Re: [RFC 3/3] memcg: get rid of mm_struct::owner
  2015-05-29 15:23                 ` Tejun Heo
@ 2015-05-29 15:26                   ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-05-29 15:26 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Johannes Weiner, linux-mm, Oleg Nesterov, Vladimir Davydov,
	KAMEZAWA Hiroyuki, KOSAKI Motohiro, Andrew Morton, LKML

On Fri 29-05-15 11:23:28, Tejun Heo wrote:
> Hello,
> 
> On Fri, May 29, 2015 at 04:57:39PM +0200, Michal Hocko wrote:
[...]
> > OK so you create a task A (leader) which clones several tasks Pn with
> > CLONE_VM without CLONE_THREAD. Moving A around would control memcg
> > membership while Pn could be moved around freely to control membership
> > in other controllers (e.g. cpu to control shares). So it is something
> > like moving threads separately.
> 
> Sure, it'd behave clearly in certain cases but then again you'd have
> cases where how mm->owner changes isn't clear at all when seen from
> the userland. 

Sure. I am definitely _not_ advocating this use case! As said before, I
consider it abuse. It is just fair to point out this is a user visible
change IMO.

> e.g. When the original owner goes away, the assignment
> of the next owner is essentially arbitrary.  That's what I meant by
> saying it was already a crapshoot.  We should definitely document the
> change but this isn't likely to be an issue.  CLONE_VM &&
> !CLONE_THREAD is an extreme corner case to begin with and even the
> behavior there wasn't all that clearly defined.

That is the line of argumentation in my changelog ;)

-- 
Michal Hocko
SUSE Labs


end of thread, other threads:[~2015-05-29 15:26 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-26 11:50 [RFC 0/3] get rid of mm_struct::owner Michal Hocko
2015-05-26 11:50 ` [RFC 1/3] memcg: restructure mem_cgroup_can_attach() Michal Hocko
2015-05-26 11:50 ` [RFC 2/3] memcg: Use mc.moving_task as the indication for charge moving Michal Hocko
2015-05-26 11:50 ` [RFC 3/3] memcg: get rid of mm_struct::owner Michal Hocko
2015-05-26 14:10   ` Johannes Weiner
2015-05-26 15:11     ` Michal Hocko
2015-05-26 17:20       ` Johannes Weiner
2015-05-27 14:48         ` Michal Hocko
2015-05-28 21:07     ` Tejun Heo
2015-05-29 12:08       ` Michal Hocko
2015-05-29 13:10         ` Tejun Heo
2015-05-29 13:45           ` Michal Hocko
2015-05-29 14:07             ` Tejun Heo
2015-05-29 14:57               ` Michal Hocko
2015-05-29 15:23                 ` Tejun Heo
2015-05-29 15:26                   ` Michal Hocko
2015-05-26 16:36   ` Oleg Nesterov
2015-05-26 17:22     ` Michal Hocko
2015-05-26 17:38       ` Oleg Nesterov
2015-05-27  9:43         ` Michal Hocko
