From: Balbir Singh <bsingharora@gmail.com>
To: ebiederm@xmission.com (Eric W. Biederman)
Cc: Michal Hocko <mhocko@kernel.org>,
	Kirill Tkhai <ktkhai@virtuozzo.com>,
	akpm@linux-foundation.org, peterz@infradead.org, oleg@redhat.com,
	viro@zeniv.linux.org.uk, mingo@kernel.org,
	paulmck@linux.vnet.ibm.com, keescook@chromium.org,
	riel@redhat.com, tglx@linutronix.de,
	kirill.shutemov@linux.intel.com, marcos.souza.org@gmail.com,
	hoeun.ryu@gmail.com, pasha.tatashin@oracle.com,
	gs051095@gmail.com, dhowells@redhat.com, rppt@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC][PATCH] memcg: Replace mm->owner with mm->memcg
Date: Thu, 3 May 2018 09:59:52 +1000
Message-ID: <20180503095952.70bffde1@balbir.ozlabs.ibm.com>
In-Reply-To: <87h8nr2sa3.fsf_-_@xmission.com>

On Tue, 01 May 2018 12:35:16 -0500
ebiederm@xmission.com (Eric W. Biederman) wrote:

> Recently it was reported that mm_update_next_owner could get into
> cases where it was executing its fallback for_each_process part of
> the loop and thus taking up a lot of time.
> 
> To deal with this replace mm->owner with mm->memcg.  This just reduces
> the complexity of everything.  As much as possible I have maintained
> the current semantics.  There are two significant exceptions.  During
> fork the memcg of the process calling fork is charged rather than
> init_css_set.  During memory cgroup migration the charges are migrated
> not if the process is the owner of the mm, but if the process being
> migrated has the same memory cgroup as the mm.
> 
> I believe it was a bug if init_css_set is charged for memory activity
> during fork, and the old behavior was simply a consequence of the new
> task not yet having tsk->cgroup initialized to its proper cgroup.

That does sound like a bug, I guess we've not seen it because we did
not track any slab allocations initially.

> 
> During cgroup migration only thread group leaders are allowed to
> migrate, which means in practice there should only be one.  Linux
> tasks created with CLONE_VM are the only exception, but the common
> cases are already ruled out.  Processes created with vfork have a
> suspended parent and can do nothing but call exec so they should never
> show up.  Threads of the same cgroup are not the thread group leader,
> so they also should not show up.  That leaves the old LinuxThreads library
> which is probably out of use by now, and someone doing something very
> creative with cgroups, and rolling their own threads with CLONE_VM.
> So in practice I don't think the difference in charge migration will
> affect anyone.
> 
> To ensure that mm->memcg is updated appropriately I have implemented
> cgroup "attach" and "fork" methods.  This ensures that at those
>  points the mm pointed to by the task has the appropriate memory cgroup.
> 
> For simplicity instead of introducing a new mm lock I simply use
> exchange on the pointer where the mm->memcg is updated to get
> atomic updates.
> 
> Looking at the history, this change is effectively a revert.  The
> reason given for adding mm->owner is so that multiple cgroups can be
> attached to the same mm.  In the last 8 years a second user of
> mm->owner has not appeared.  A feature that has never been used, that
> makes the code more complicated, and that has horrible worst-case
> performance should go.

The idea was to track the mm to the right cgroup; we did find that the
mm could be confused as belonging to two cgroups. tsk->cgroup is not
sufficient, and when the tgid left, we needed an owner to track where
the current allocations were. But this is from 8-year-old history and
I don't have my notes anymore :)

> 
> Fixes: cf475ad28ac3 ("cgroups: add an owner to the mm_struct")
> Reported-by:  Kirill Tkhai <ktkhai@virtuozzo.com>
> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
> ---
>  fs/exec.c                  |  1 -
>  include/linux/memcontrol.h | 11 ++++--
>  include/linux/mm_types.h   | 12 +------
>  include/linux/sched/mm.h   |  8 -----
>  kernel/exit.c              | 89 ----------------------------------------------
>  kernel/fork.c              | 17 +++++++--
>  mm/memcontrol.c            | 86 ++++++++++++++++++++++++++++++++++----------
>  7 files changed, 90 insertions(+), 134 deletions(-)
> 
> diff --git a/fs/exec.c b/fs/exec.c
> index 183059c427b9..a8be9318d1a8 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1040,7 +1040,6 @@ static int exec_mmap(struct mm_struct *mm)
>  		up_read(&old_mm->mmap_sem);
>  		BUG_ON(active_mm != old_mm);
>  		setmax_mm_hiwater_rss(&tsk->signal->maxrss, old_mm);
> -		mm_update_next_owner(old_mm);
>  		mmput(old_mm);
>  		return 0;
>  	}
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index d99b71bc2c66..147e04bfcaee 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -341,7 +341,6 @@ static inline struct lruvec *mem_cgroup_lruvec(struct pglist_data *pgdat,
>  struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
> 
>  bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
> -struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
> 
>  static inline
>  struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
> @@ -402,6 +401,8 @@ static inline bool mem_cgroup_is_descendant(struct mem_cgroup *memcg,
>  	return cgroup_is_descendant(memcg->css.cgroup, root->css.cgroup);
>  }
> 
> +void mm_update_memcg(struct mm_struct *mm, struct mem_cgroup *new);
> +
>  static inline bool mm_match_cgroup(struct mm_struct *mm,
>  				   struct mem_cgroup *memcg)
>  {
> @@ -409,7 +410,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
>  	bool match = false;
> 
>  	rcu_read_lock();
> -	task_memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> +	task_memcg = rcu_dereference(mm->memcg);
>  	if (task_memcg)
>  		match = mem_cgroup_is_descendant(task_memcg, memcg);
>  	rcu_read_unlock();
> @@ -693,7 +694,7 @@ static inline void count_memcg_event_mm(struct mm_struct *mm,
>  		return;
> 
>  	rcu_read_lock();
> -	memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> +	memcg = rcu_dereference(mm->memcg);
>  	if (likely(memcg)) {
>  		count_memcg_events(memcg, idx, 1);
>  		if (idx == OOM_KILL)
> @@ -781,6 +782,10 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
>  	return &pgdat->lruvec;
>  }
> 
> +static inline void mm_update_memcg(struct mm_struct *mm, struct mem_cgroup *new)
> +{
> +}
> +
>  static inline bool mm_match_cgroup(struct mm_struct *mm,
>  		struct mem_cgroup *memcg)
>  {
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 21612347d311..ea5efd40a5d1 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -443,17 +443,7 @@ struct mm_struct {
>  	struct kioctx_table __rcu	*ioctx_table;
>  #endif
>  #ifdef CONFIG_MEMCG
> -	/*
> -	 * "owner" points to a task that is regarded as the canonical
> -	 * user/owner of this mm. All of the following must be true in
> -	 * order for it to be changed:
> -	 *
> -	 * current == mm->owner
> -	 * current->mm != mm
> -	 * new_owner->mm == mm
> -	 * new_owner->alloc_lock is held
> -	 */
> -	struct task_struct __rcu *owner;
> +	struct mem_cgroup __rcu	*memcg;
>  #endif
>  	struct user_namespace *user_ns;
> 
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 2c570cd934af..cc8e68d36fc2 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -95,14 +95,6 @@ extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode);
>  /* Remove the current tasks stale references to the old mm_struct */
>  extern void mm_release(struct task_struct *, struct mm_struct *);
> 
> -#ifdef CONFIG_MEMCG
> -extern void mm_update_next_owner(struct mm_struct *mm);
> -#else
> -static inline void mm_update_next_owner(struct mm_struct *mm)
> -{
> -}
> -#endif /* CONFIG_MEMCG */
> -
>  #ifdef CONFIG_MMU
>  extern void arch_pick_mmap_layout(struct mm_struct *mm,
>  				  struct rlimit *rlim_stack);
> diff --git a/kernel/exit.c b/kernel/exit.c
> index c3c7ac560114..be967d2da0ce 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -399,94 +399,6 @@ kill_orphaned_pgrp(struct task_struct *tsk, struct task_struct *parent)
>  	}
>  }
> 
> -#ifdef CONFIG_MEMCG
> -/*
> - * A task is exiting.   If it owned this mm, find a new owner for the mm.
> - */
> -void mm_update_next_owner(struct mm_struct *mm)
> -{
> -	struct task_struct *c, *g, *p = current;
> -
> -retry:
> -	/*
> -	 * If the exiting or execing task is not the owner, it's
> -	 * someone else's problem.
> -	 */
> -	if (mm->owner != p)
> -		return;
> -	/*
> -	 * The current owner is exiting/execing and there are no other
> -	 * candidates.  Do not leave the mm pointing to a possibly
> -	 * freed task structure.
> -	 */
> -	if (atomic_read(&mm->mm_users) <= 1) {
> -		mm->owner = NULL;
> -		return;
> -	}
> -
> -	read_lock(&tasklist_lock);
> -	/*
> -	 * Search in the children
> -	 */
> -	list_for_each_entry(c, &p->children, sibling) {
> -		if (c->mm == mm)
> -			goto assign_new_owner;
> -	}
> -
> -	/*
> -	 * Search in the siblings
> -	 */
> -	list_for_each_entry(c, &p->real_parent->children, sibling) {
> -		if (c->mm == mm)
> -			goto assign_new_owner;
> -	}
> -
> -	/*
> -	 * Search through everything else, we should not get here often.
> -	 */
> -	for_each_process(g) {
> -		if (g->flags & PF_KTHREAD)
> -			continue;
> -		for_each_thread(g, c) {
> -			if (c->mm == mm)
> -				goto assign_new_owner;
> -			if (c->mm)
> -				break;
> -		}
> -	}
> -	read_unlock(&tasklist_lock);
> -	/*
> -	 * We found no owner yet mm_users > 1: this implies that we are
> -	 * most likely racing with swapoff (try_to_unuse()) or /proc or
> -	 * ptrace or page migration (get_task_mm()).  Mark owner as NULL.
> -	 */
> -	mm->owner = NULL;
> -	return;
> -
> -assign_new_owner:
> -	BUG_ON(c == p);
> -	get_task_struct(c);
> -	/*
> -	 * The task_lock protects c->mm from changing.
> -	 * We always want mm->owner->mm == mm
> -	 */
> -	task_lock(c);
> -	/*
> -	 * Delay read_unlock() till we have the task_lock()
> -	 * to ensure that c does not slip away underneath us
> -	 */
> -	read_unlock(&tasklist_lock);
> -	if (c->mm != mm) {
> -		task_unlock(c);
> -		put_task_struct(c);
> -		goto retry;
> -	}
> -	mm->owner = c;
> -	task_unlock(c);
> -	put_task_struct(c);
> -}
> -#endif /* CONFIG_MEMCG */
> -
>  /*
>   * Turn us into a lazy TLB process if we
>   * aren't already..
> @@ -540,7 +452,6 @@ static void exit_mm(void)
>  	up_read(&mm->mmap_sem);
>  	enter_lazy_tlb(mm, current);
>  	task_unlock(current);
> -	mm_update_next_owner(mm);
>  	mmput(mm);
>  	if (test_thread_flag(TIF_MEMDIE))
>  		exit_oom_victim();
> diff --git a/kernel/fork.c b/kernel/fork.c
> index a5d21c42acfc..f284acf22aad 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -868,10 +868,19 @@ static void mm_init_aio(struct mm_struct *mm)
>  #endif
>  }
> 
> -static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
> +static void mm_init_memcg(struct mm_struct *mm)
>  {
>  #ifdef CONFIG_MEMCG
> -	mm->owner = p;
> +	struct cgroup_subsys_state *css;
> +
> +	/* Ensure mm->memcg is initialized */
> +	mm->memcg = NULL;
> +
> +	rcu_read_lock();
> +	css = task_css(current, memory_cgrp_id);
> +	if (css && css_tryget(css))
> +		mm_update_memcg(mm, mem_cgroup_from_css(css));
> +	rcu_read_unlock();
>  #endif
>  }
> 
> @@ -901,7 +910,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  	spin_lock_init(&mm->page_table_lock);
>  	mm_init_cpumask(mm);
>  	mm_init_aio(mm);
> -	mm_init_owner(mm, p);
> +	mm_init_memcg(mm);
>  	RCU_INIT_POINTER(mm->exe_file, NULL);
>  	mmu_notifier_mm_init(mm);
>  	hmm_mm_init(mm);
> @@ -931,6 +940,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>  fail_nocontext:
>  	mm_free_pgd(mm);
>  fail_nopgd:
> +	mm_update_memcg(mm, NULL);
>  	free_mm(mm);
>  	return NULL;
>  }
> @@ -968,6 +978,7 @@ static inline void __mmput(struct mm_struct *mm)
>  	}
>  	if (mm->binfmt)
>  		module_put(mm->binfmt->module);
> +	mm_update_memcg(mm, NULL);
>  	mmdrop(mm);
>  }
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 2bd3df3d101a..5dce8a7fa65b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -664,20 +664,6 @@ static void memcg_check_events(struct mem_cgroup *memcg, struct page *page)
>  	}
>  }
> 
> -struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
> -{
> -	/*
> -	 * mm_update_next_owner() may clear mm->owner to NULL
> -	 * if it races with swapoff, page migration, etc.
> -	 * So this can be called with p == NULL.
> -	 */
> -	if (unlikely(!p))
> -		return NULL;
> -
> -	return mem_cgroup_from_css(task_css(p, memory_cgrp_id));
> -}
> -EXPORT_SYMBOL(mem_cgroup_from_task);
> -
>  static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  {
>  	struct mem_cgroup *memcg = NULL;
> @@ -692,7 +678,7 @@ static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
>  		if (unlikely(!mm))
>  			memcg = root_mem_cgroup;
>  		else {
> -			memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
> +			memcg = rcu_dereference(mm->memcg);
>  			if (unlikely(!memcg))
>  				memcg = root_mem_cgroup;
>  		}
> @@ -1011,7 +997,7 @@ bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg)
>  		 * killed to prevent needlessly killing additional tasks.
>  		 */
>  		rcu_read_lock();
> -		task_memcg = mem_cgroup_from_task(task);
> +		task_memcg = mem_cgroup_from_css(task_css(task, memory_cgrp_id));
>  		css_get(&task_memcg->css);
>  		rcu_read_unlock();
>  	}
> @@ -4827,15 +4813,16 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
>  	if (!move_flags)
>  		return 0;
> 
> -	from = mem_cgroup_from_task(p);
> +	from = mem_cgroup_from_css(task_css(p, memory_cgrp_id));
> 
>  	VM_BUG_ON(from == memcg);
> 
>  	mm = get_task_mm(p);
>  	if (!mm)
>  		return 0;
> +
>  	/* We move charges only when we move a owner of the mm */
> -	if (mm->owner == p) {
> +	if (mm->memcg == from) {
>  		VM_BUG_ON(mc.from);
>  		VM_BUG_ON(mc.to);
>  		VM_BUG_ON(mc.precharge);
> @@ -4859,6 +4846,59 @@ static int mem_cgroup_can_attach(struct cgroup_taskset *tset)
>  	return ret;
>  }
> 
> +/**
> + * mm_update_memcg - Update the memory cgroup of a mm_struct
> + * @mm: mm struct
> + * @new: new memory cgroup value
> + *
> + * Called whenever mm->memcg needs to change.   Consumes a reference
> + * to new (unless new is NULL).   The reference to the old memory
> + * cgroup is decreased.
> + */
> +void mm_update_memcg(struct mm_struct *mm, struct mem_cgroup *new)
> +{
> +	/* This is the only place where mm->memcg is changed */
> +	struct mem_cgroup *old;
> +
> +	old = xchg(&mm->memcg, new);
> +	if (old)
> +		css_put(&old->css);
> +}
> +
> +static void task_update_memcg(struct task_struct *tsk, struct mem_cgroup *new)
> +{
> +	struct mm_struct *mm;
> +	task_lock(tsk);
> +	mm = tsk->mm;
> +	if (mm && !(tsk->flags & PF_KTHREAD))
> +		mm_update_memcg(mm, new);
> +	task_unlock(tsk);
> +}
> +
> +static void mem_cgroup_attach(struct cgroup_taskset *tset)
> +{
> +	struct cgroup_subsys_state *css;
> +	struct task_struct *tsk;
> +
> +	cgroup_taskset_for_each(tsk, css, tset) {
> +		struct mem_cgroup *new = mem_cgroup_from_css(css);
> +		css_get(css);
> +		task_update_memcg(tsk, new);

I'd have to go back and check, and I think your comment refers to this,
but we don't expect non-tgid tasks to show up here? My concern is that I
can't find the guarantee that task_update_memcg(tsk, new) is not

1. Duplicated for each thread in the process or attached to the mm
2. Updating mm->memcg to point to different places, so that the one
   which sticks is the one that updated things last (a rough user-space
   sketch of this worry follows below).
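
To make the second point concrete, here is a minimal user-space sketch
(not kernel code; the struct layouts, the _sketch suffix and the cgroup
names are made up purely for illustration) of the last-writer-wins
behaviour that an xchg()-style update gives when two tasks sharing one
mm publish different memcg pointers:

/*
 * Hypothetical user-space sketch of the worry above: two threads
 * sharing one "mm" each publish their own memcg pointer with an atomic
 * exchange, and whichever runs last wins.  Build with: cc -pthread
 */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

struct memcg { const char *name; };
struct mm { _Atomic(struct memcg *) memcg; };

static struct mm shared_mm;

/* Mirrors the xchg() in mm_update_memcg(): the last caller's value sticks. */
static void mm_update_memcg_sketch(struct mm *mm, struct memcg *new)
{
	struct memcg *old = atomic_exchange(&mm->memcg, new);
	(void)old;	/* the kernel code would css_put() the old reference */
}

static void *attach_thread(void *arg)
{
	mm_update_memcg_sketch(&shared_mm, arg);
	return NULL;
}

int main(void)
{
	struct memcg a = { "cgroup-A" }, b = { "cgroup-B" };
	pthread_t t1, t2;

	pthread_create(&t1, NULL, attach_thread, &a);
	pthread_create(&t2, NULL, attach_thread, &b);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* Which cgroup "sticks" depends only on which update ran last. */
	printf("mm->memcg = %s\n", atomic_load(&shared_mm.memcg)->name);
	return 0;
}

Which value ends up in mm->memcg depends only on ordering, which is
exactly the dependence I am worried about if more than one task of the
same mm can pass through task_update_memcg().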


Balbir Singh

