linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B
@ 2014-09-26 14:50 Vladimir Davydov
  2014-09-26 14:50 ` [PATCH 1/4] cpuset: convert callback_mutex to a spinlock Vladimir Davydov
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-26 14:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Li Zefan, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

Hi,

SLAB and SLUB use a hardwall cpuset check on fallback alloc, while the
page allocator uses a softwall check for all kernel allocations. This may
result in falling back to the page allocator even if there are free objects
on other nodes. The SLAB algorithm is especially affected: the number of
objects allocated in vain is unlimited, so they can theoretically eat up a
whole NUMA node. For more details, see the comments to patches 3 and 4.
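
To make the mismatch concrete, below is a minimal userspace model of the
two checks. It is an illustration only: the struct, helpers, and masks are
simplified assumptions, not the kernel's actual data structures. The
hardwall check consults only the current cpuset, while the softwall check
may escape to the nearest hardwalled ancestor, which is effectively what
the page allocator relies on for GFP_KERNEL allocations.

/* Simplified model of the hardwall vs softwall node checks (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

struct cpuset {
	struct cpuset *parent;
	unsigned long mems_allowed;	/* bit n set => node n allowed */
	bool mem_hardwall;
};

/* Hardwall: only the current cpuset's mems_allowed counts. */
static bool node_allowed_hardwall(struct cpuset *cs, int node)
{
	return cs->mems_allowed & (1UL << node);
}

/* Softwall: may escape to the nearest hardwalled ancestor. */
static bool node_allowed_softwall(struct cpuset *cs, int node)
{
	while (cs->parent && !cs->mem_hardwall)
		cs = cs->parent;	/* cf. nearest_hardwall_ancestor() */
	return cs->mems_allowed & (1UL << node);
}

int main(void)
{
	struct cpuset root  = { .parent = NULL,  .mems_allowed = 0x3, .mem_hardwall = true };
	struct cpuset child = { .parent = &root, .mems_allowed = 0x1, .mem_hardwall = false };

	/* Node 1 is outside the child cpuset but inside the root. */
	printf("hardwall: %d, softwall: %d\n",
	       node_allowed_hardwall(&child, 1),	/* 0: SL[AU]B skips it */
	       node_allowed_softwall(&child, 1));	/* 1: page allocator may use it */
	return 0;
}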

When I last sent a fix (https://lkml.org/lkml/2014/8/10/100), David
found the whole cpuset API cumbersome and proposed to simplify it before
fixing its users. So this patch set addresses both David's complaint
(patches 1, 2) and the SL[AU]B issues (patches 3, 4).

Reviews are appreciated.

Thanks,

Vladimir Davydov (4):
  cpuset: convert callback_mutex to a spinlock
  cpuset: simplify cpuset_node_allowed API
  slab: fix cpuset check in fallback_alloc
  slub: fix cpuset check in get_any_partial

 include/linux/cpuset.h |   37 +++--------
 kernel/cpuset.c        |  162 +++++++++++++++++-------------------------------
 mm/hugetlb.c           |    2 +-
 mm/oom_kill.c          |    2 +-
 mm/page_alloc.c        |    6 +-
 mm/slab.c              |    2 +-
 mm/slub.c              |    2 +-
 mm/vmscan.c            |    5 +-
 8 files changed, 74 insertions(+), 144 deletions(-)

-- 
1.7.10.4



* [PATCH 1/4] cpuset: convert callback_mutex to a spinlock
  2014-09-26 14:50 [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Vladimir Davydov
@ 2014-09-26 14:50 ` Vladimir Davydov
  2014-09-26 15:44   ` Christoph Lameter
  2014-09-26 14:50 ` [PATCH 2/4] cpuset: simplify cpuset_node_allowed API Vladimir Davydov
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-26 14:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Li Zefan, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

The callback_mutex is only used to synchronize reads/updates of cpusets'
flags and cpu/node masks. These operations should always be fast, so there
is no reason we can't use a spinlock instead of the mutex.

Converting the callback_mutex into a spinlock will let us call
cpuset_zone_allowed_softwall from atomic context. This, in turn, makes it
possible to simplify the code by merging the hardwall and softwall checks
into the same function, which is the business of the next patch.
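
As a rough sketch of the pattern this enables (illustrative only, not part
of the patch): a mutex may sleep and so must not be taken from a context
running with interrupts disabled, whereas a spinlock taken with
spin_lock_irqsave() may be.

#include <linux/spinlock.h>

/*
 * Illustrative fragment, not from this patch: reading state protected by a
 * spinlock is fine even if the caller runs with interrupts disabled (e.g.
 * the slab allocator's fallback path).  The same function written with
 * mutex_lock()/mutex_unlock() would be a sleep-in-atomic bug.
 */
static DEFINE_SPINLOCK(example_lock);
static int example_state;

static int read_example_state(void)
{
	unsigned long flags;
	int val;

	spin_lock_irqsave(&example_lock, flags);
	val = example_state;
	spin_unlock_irqrestore(&example_lock, flags);
	return val;
}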

Suggested-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 kernel/cpuset.c |  107 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 55 insertions(+), 52 deletions(-)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 22874d7cf2c0..1c45774ee117 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -248,34 +248,34 @@ static struct cpuset top_cpuset = {
 		if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))
 
 /*
- * There are two global mutexes guarding cpuset structures - cpuset_mutex
- * and callback_mutex.  The latter may nest inside the former.  We also
- * require taking task_lock() when dereferencing a task's cpuset pointer.
- * See "The task_lock() exception", at the end of this comment.
+ * There are two global locks guarding cpuset structures - cpuset_mutex and
+ * callback_lock. We also require taking task_lock() when dereferencing a
+ * task's cpuset pointer. See "The task_lock() exception", at the end of this
+ * comment.
  *
- * A task must hold both mutexes to modify cpusets.  If a task holds
+ * A task must hold both locks to modify cpusets.  If a task holds
  * cpuset_mutex, then it blocks others wanting that mutex, ensuring that it
- * is the only task able to also acquire callback_mutex and be able to
+ * is the only task able to also acquire callback_lock and be able to
  * modify cpusets.  It can perform various checks on the cpuset structure
  * first, knowing nothing will change.  It can also allocate memory while
  * just holding cpuset_mutex.  While it is performing these checks, various
- * callback routines can briefly acquire callback_mutex to query cpusets.
- * Once it is ready to make the changes, it takes callback_mutex, blocking
+ * callback routines can briefly acquire callback_lock to query cpusets.
+ * Once it is ready to make the changes, it takes callback_lock, blocking
  * everyone else.
  *
  * Calls to the kernel memory allocator can not be made while holding
- * callback_mutex, as that would risk double tripping on callback_mutex
+ * callback_lock, as that would risk double tripping on callback_lock
  * from one of the callbacks into the cpuset code from within
  * __alloc_pages().
  *
- * If a task is only holding callback_mutex, then it has read-only
+ * If a task is only holding callback_lock, then it has read-only
  * access to cpusets.
  *
  * Now, the task_struct fields mems_allowed and mempolicy may be changed
  * by other task, we use alloc_lock in the task_struct fields to protect
  * them.
  *
- * The cpuset_common_file_read() handlers only hold callback_mutex across
+ * The cpuset_common_file_read() handlers only hold callback_lock across
  * small pieces of code, such as when reading out possibly multi-word
  * cpumasks and nodemasks.
  *
@@ -284,7 +284,7 @@ static struct cpuset top_cpuset = {
  */
 
 static DEFINE_MUTEX(cpuset_mutex);
-static DEFINE_MUTEX(callback_mutex);
+static DEFINE_SPINLOCK(callback_lock);
 
 /*
  * CPU / memory hotplug is handled asynchronously.
@@ -329,7 +329,7 @@ static struct file_system_type cpuset_fs_type = {
  * One way or another, we guarantee to return some non-empty subset
  * of cpu_online_mask.
  *
- * Call with callback_mutex held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
 {
@@ -347,7 +347,7 @@ static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
  * One way or another, we guarantee to return some non-empty subset
  * of node_states[N_MEMORY].
  *
- * Call with callback_mutex held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 {
@@ -359,7 +359,7 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 /*
  * update task's spread flag if cpuset's page/slab spread flag is set
  *
- * Called with callback_mutex/cpuset_mutex held
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void cpuset_update_task_spread_flag(struct cpuset *cs,
 					struct task_struct *tsk)
@@ -875,9 +875,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
 			continue;
 		rcu_read_unlock();
 
-		mutex_lock(&callback_mutex);
+		spin_lock_irq(&callback_lock);
 		cpumask_copy(cp->effective_cpus, new_cpus);
-		mutex_unlock(&callback_mutex);
+		spin_unlock_irq(&callback_lock);
 
 		WARN_ON(!cgroup_on_dfl(cp->css.cgroup) &&
 			!cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
@@ -942,9 +942,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (retval < 0)
 		return retval;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	/* use trialcs->cpus_allowed as a temp variable */
 	update_cpumasks_hier(cs, trialcs->cpus_allowed);
@@ -1131,9 +1131,9 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 			continue;
 		rcu_read_unlock();
 
-		mutex_lock(&callback_mutex);
+		spin_lock_irq(&callback_lock);
 		cp->effective_mems = *new_mems;
-		mutex_unlock(&callback_mutex);
+		spin_unlock_irq(&callback_lock);
 
 		WARN_ON(!cgroup_on_dfl(cp->css.cgroup) &&
 			!nodes_equal(cp->mems_allowed, cp->effective_mems));
@@ -1154,7 +1154,7 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
  * mempolicies and if the cpuset is marked 'memory_migrate',
  * migrate the tasks pages to the new memory.
  *
- * Call with cpuset_mutex held.  May take callback_mutex during call.
+ * Call with cpuset_mutex held. May take callback_lock during call.
  * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
  * lock each such tasks mm->mmap_sem, scan its vma's and rebind
  * their mempolicies to the cpusets new mems_allowed.
@@ -1201,9 +1201,9 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
 	if (retval < 0)
 		goto done;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cs->mems_allowed = trialcs->mems_allowed;
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	/* use trialcs->mems_allowed as a temp variable */
 	update_nodemasks_hier(cs, &cs->mems_allowed);
@@ -1294,9 +1294,9 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
 			|| (is_spread_page(cs) != is_spread_page(trialcs)));
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cs->flags = trialcs->flags;
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
 		rebuild_sched_domains_locked();
@@ -1712,7 +1712,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 	count = seq_get_buf(sf, &buf);
 	s = buf;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 
 	switch (type) {
 	case FILE_CPULIST:
@@ -1739,7 +1739,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
 		seq_commit(sf, -1);
 	}
 out_unlock:
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 	return ret;
 }
 
@@ -1956,12 +1956,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 
 	cpuset_inc();
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	if (cgroup_on_dfl(cs->css.cgroup)) {
 		cpumask_copy(cs->effective_cpus, parent->effective_cpus);
 		cs->effective_mems = parent->effective_mems;
 	}
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
 		goto out_unlock;
@@ -1988,10 +1988,10 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	}
 	rcu_read_unlock();
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cs->mems_allowed = parent->mems_allowed;
 	cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 out_unlock:
 	mutex_unlock(&cpuset_mutex);
 	return 0;
@@ -2030,7 +2030,7 @@ static void cpuset_css_free(struct cgroup_subsys_state *css)
 static void cpuset_bind(struct cgroup_subsys_state *root_css)
 {
 	mutex_lock(&cpuset_mutex);
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 
 	if (cgroup_on_dfl(root_css->cgroup)) {
 		cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
@@ -2041,7 +2041,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
 		top_cpuset.mems_allowed = top_cpuset.effective_mems;
 	}
 
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 	mutex_unlock(&cpuset_mutex);
 }
 
@@ -2126,12 +2126,12 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
 {
 	bool is_empty;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->cpus_allowed, new_cpus);
 	cpumask_copy(cs->effective_cpus, new_cpus);
 	cs->mems_allowed = *new_mems;
 	cs->effective_mems = *new_mems;
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	/*
 	 * Don't call update_tasks_cpumask() if the cpuset becomes empty,
@@ -2168,10 +2168,10 @@ hotplug_update_tasks(struct cpuset *cs,
 	if (nodes_empty(*new_mems))
 		*new_mems = parent_cs(cs)->effective_mems;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irq(&callback_lock);
 	cpumask_copy(cs->effective_cpus, new_cpus);
 	cs->effective_mems = *new_mems;
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irq(&callback_lock);
 
 	if (cpus_updated)
 		update_tasks_cpumask(cs);
@@ -2257,21 +2257,21 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
 
 	/* synchronize cpus_allowed to cpu_active_mask */
 	if (cpus_updated) {
-		mutex_lock(&callback_mutex);
+		spin_lock_irq(&callback_lock);
 		if (!on_dfl)
 			cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
 		cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
-		mutex_unlock(&callback_mutex);
+		spin_unlock_irq(&callback_lock);
 		/* we don't mess with cpumasks of tasks in top_cpuset */
 	}
 
 	/* synchronize mems_allowed to N_MEMORY */
 	if (mems_updated) {
-		mutex_lock(&callback_mutex);
+		spin_lock_irq(&callback_lock);
 		if (!on_dfl)
 			top_cpuset.mems_allowed = new_mems;
 		top_cpuset.effective_mems = new_mems;
-		mutex_unlock(&callback_mutex);
+		spin_unlock_irq(&callback_lock);
 		update_tasks_nodemask(&top_cpuset);
 	}
 
@@ -2364,11 +2364,13 @@ void __init cpuset_init_smp(void)
 
 void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
 {
-	mutex_lock(&callback_mutex);
+	unsigned long flags;
+
+	spin_lock_irqsave(&callback_lock, flags);
 	rcu_read_lock();
 	guarantee_online_cpus(task_cs(tsk), pmask);
 	rcu_read_unlock();
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irqrestore(&callback_lock, flags);
 }
 
 void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
@@ -2414,12 +2416,13 @@ void cpuset_init_current_mems_allowed(void)
 nodemask_t cpuset_mems_allowed(struct task_struct *tsk)
 {
 	nodemask_t mask;
+	unsigned long flags;
 
-	mutex_lock(&callback_mutex);
+	spin_lock_irqsave(&callback_lock, flags);
 	rcu_read_lock();
 	guarantee_online_mems(task_cs(tsk), &mask);
 	rcu_read_unlock();
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irqrestore(&callback_lock, flags);
 
 	return mask;
 }
@@ -2438,7 +2441,7 @@ int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask)
 /*
  * nearest_hardwall_ancestor() - Returns the nearest mem_exclusive or
  * mem_hardwall ancestor to the specified cpuset.  Call holding
- * callback_mutex.  If no ancestor is mem_exclusive or mem_hardwall
+ * callback_lock.  If no ancestor is mem_exclusive or mem_hardwall
  * (an unusual configuration), then returns the root cpuset.
  */
 static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
@@ -2480,13 +2483,12 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
  * GFP_KERNEL allocations are not so marked, so can escape to the
  * nearest enclosing hardwalled ancestor cpuset.
  *
- * Scanning up parent cpusets requires callback_mutex.  The
+ * Scanning up parent cpusets requires callback_lock.  The
  * __alloc_pages() routine only calls here with __GFP_HARDWALL bit
  * _not_ set if it's a GFP_KERNEL allocation, and all nodes in the
  * current tasks mems_allowed came up empty on the first pass over
  * the zonelist.  So only GFP_KERNEL allocations, if all nodes in the
- * cpuset are short of memory, might require taking the callback_mutex
- * mutex.
+ * cpuset are short of memory, might require taking the callback_lock.
  *
  * The first call here from mm/page_alloc:get_page_from_freelist()
  * has __GFP_HARDWALL set in gfp_mask, enforcing hardwall cpusets,
@@ -2513,6 +2515,7 @@ int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
 {
 	struct cpuset *cs;		/* current cpuset ancestors */
 	int allowed;			/* is allocation in zone z allowed? */
+	unsigned long flags;
 
 	if (in_interrupt() || (gfp_mask & __GFP_THISNODE))
 		return 1;
@@ -2532,14 +2535,14 @@ int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
 		return 1;
 
 	/* Not hardwall and node outside mems_allowed: scan up cpusets */
-	mutex_lock(&callback_mutex);
+	spin_lock_irqsave(&callback_lock, flags);
 
 	rcu_read_lock();
 	cs = nearest_hardwall_ancestor(task_cs(current));
 	allowed = node_isset(node, cs->mems_allowed);
 	rcu_read_unlock();
 
-	mutex_unlock(&callback_mutex);
+	spin_unlock_irqrestore(&callback_lock, flags);
 	return allowed;
 }
 
-- 
1.7.10.4



* [PATCH 2/4] cpuset: simplify cpuset_node_allowed API
  2014-09-26 14:50 [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Vladimir Davydov
  2014-09-26 14:50 ` [PATCH 1/4] cpuset: convert callback_mutex to a spinlock Vladimir Davydov
@ 2014-09-26 14:50 ` Vladimir Davydov
  2014-09-26 15:53   ` Christoph Lameter
  2014-09-26 14:50 ` [PATCH 3/4] slab: fix cpuset check in fallback_alloc Vladimir Davydov
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-26 14:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: David Rientjes, Li Zefan, Christoph Lameter, Pekka Enberg,
	Joonsoo Kim, Andrew Morton, linux-mm

The current cpuset API for checking whether a zone/node may be allocated
from looks rather awkward. We have hardwall and softwall versions of
cpuset_node_allowed, with the softwall version doing literally the same
thing as the hardwall version if __GFP_HARDWALL is passed in the gfp flags.
If it isn't, the softwall version may check the given node against the
enclosing hardwall cpuset, which it needs to take the callback lock to do.

Such a distinction was introduced by commit 02a0e53d8227 ("cpuset:
rework cpuset_zone_allowed api"). Before that, we had only one version,
with the __GFP_HARDWALL flag determining its behavior. The purpose of the
commit was to avoid sleep-in-atomic bugs when someone mistakenly called
the function without the __GFP_HARDWALL flag for an atomic allocation.
The suffixes were introduced to make callers think before using the
function.

However, since the callback lock was converted from a mutex to a spinlock
by the previous patch, the softwall check can no longer sleep, and these
precautions are no longer necessary.

So let's simplify the API back to the single check.
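
For illustration, here is a hypothetical caller fragment (the function and
variables are made up; only cpuset_zone_allowed() and __GFP_HARDWALL come
from this series) showing how the old two-function API maps onto the single
check:

#include <linux/cpuset.h>
#include <linux/gfp.h>

/* Hypothetical caller, for illustration only. */
static void cpuset_check_examples(struct zone *zone, gfp_t flags)
{
	int hard, soft;

	/* Was: cpuset_zone_allowed_hardwall(zone, flags) */
	hard = cpuset_zone_allowed(zone, flags | __GFP_HARDWALL);

	/*
	 * Was: cpuset_zone_allowed_softwall(zone, flags) -- still a softwall
	 * check as long as the caller doesn't pass __GFP_HARDWALL itself.
	 */
	soft = cpuset_zone_allowed(zone, flags);

	(void)hard;
	(void)soft;
}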

Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/cpuset.h |   37 ++++++--------------------------
 kernel/cpuset.c        |   55 ++----------------------------------------------
 mm/hugetlb.c           |    2 +-
 mm/oom_kill.c          |    2 +-
 mm/page_alloc.c        |    6 +++---
 mm/slab.c              |    2 +-
 mm/slub.c              |    3 ++-
 mm/vmscan.c            |    5 +++--
 8 files changed, 20 insertions(+), 92 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index ade2390ffe92..fcad559df369 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -48,29 +48,16 @@ extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 void cpuset_init_current_mems_allowed(void);
 int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask);
 
-extern int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask);
-extern int __cpuset_node_allowed_hardwall(int node, gfp_t gfp_mask);
+extern int __cpuset_node_allowed(int node, gfp_t gfp_mask);
 
-static inline int cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
+static inline int cpuset_node_allowed(int node, gfp_t gfp_mask)
 {
-	return nr_cpusets() <= 1 ||
-		__cpuset_node_allowed_softwall(node, gfp_mask);
+	return nr_cpusets() <= 1 || __cpuset_node_allowed(node, gfp_mask);
 }
 
-static inline int cpuset_node_allowed_hardwall(int node, gfp_t gfp_mask)
+static inline int cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
 {
-	return nr_cpusets() <= 1 ||
-		__cpuset_node_allowed_hardwall(node, gfp_mask);
-}
-
-static inline int cpuset_zone_allowed_softwall(struct zone *z, gfp_t gfp_mask)
-{
-	return cpuset_node_allowed_softwall(zone_to_nid(z), gfp_mask);
-}
-
-static inline int cpuset_zone_allowed_hardwall(struct zone *z, gfp_t gfp_mask)
-{
-	return cpuset_node_allowed_hardwall(zone_to_nid(z), gfp_mask);
+	return cpuset_node_allowed(zone_to_nid(z), gfp_mask);
 }
 
 extern int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
@@ -178,22 +165,12 @@ static inline int cpuset_nodemask_valid_mems_allowed(nodemask_t *nodemask)
 	return 1;
 }
 
-static inline int cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
-{
-	return 1;
-}
-
-static inline int cpuset_node_allowed_hardwall(int node, gfp_t gfp_mask)
-{
-	return 1;
-}
-
-static inline int cpuset_zone_allowed_softwall(struct zone *z, gfp_t gfp_mask)
+static inline int cpuset_node_allowed(int node, gfp_t gfp_mask)
 {
 	return 1;
 }
 
-static inline int cpuset_zone_allowed_hardwall(struct zone *z, gfp_t gfp_mask)
+static inline int cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
 {
 	return 1;
 }
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1c45774ee117..114a9f7cc07e 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -2452,7 +2452,7 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
 }
 
 /**
- * cpuset_node_allowed_softwall - Can we allocate on a memory node?
+ * cpuset_node_allowed - Can we allocate on a memory node?
  * @node: is this an allowed node?
  * @gfp_mask: memory allocation flags
  *
@@ -2464,13 +2464,6 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
  * flag, yes.
  * Otherwise, no.
  *
- * If __GFP_HARDWALL is set, cpuset_node_allowed_softwall() reduces to
- * cpuset_node_allowed_hardwall().  Otherwise, cpuset_node_allowed_softwall()
- * might sleep, and might allow a node from an enclosing cpuset.
- *
- * cpuset_node_allowed_hardwall() only handles the simpler case of hardwall
- * cpusets, and never sleeps.
- *
  * The __GFP_THISNODE placement logic is really handled elsewhere,
  * by forcibly using a zonelist starting at a specified node, and by
  * (in get_page_from_freelist()) refusing to consider the zones for
@@ -2505,13 +2498,8 @@ static struct cpuset *nearest_hardwall_ancestor(struct cpuset *cs)
  *	TIF_MEMDIE   - any node ok
  *	GFP_KERNEL   - any node in enclosing hardwalled cpuset ok
  *	GFP_USER     - only nodes in current tasks mems allowed ok.
- *
- * Rule:
- *    Don't call cpuset_node_allowed_softwall if you can't sleep, unless you
- *    pass in the __GFP_HARDWALL flag set in gfp_flag, which disables
- *    the code that might scan up ancestor cpusets and sleep.
  */
-int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
+int __cpuset_node_allowed(int node, gfp_t gfp_mask)
 {
 	struct cpuset *cs;		/* current cpuset ancestors */
 	int allowed;			/* is allocation in zone z allowed? */
@@ -2519,7 +2507,6 @@ int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
 
 	if (in_interrupt() || (gfp_mask & __GFP_THISNODE))
 		return 1;
-	might_sleep_if(!(gfp_mask & __GFP_HARDWALL));
 	if (node_isset(node, current->mems_allowed))
 		return 1;
 	/*
@@ -2546,44 +2533,6 @@ int __cpuset_node_allowed_softwall(int node, gfp_t gfp_mask)
 	return allowed;
 }
 
-/*
- * cpuset_node_allowed_hardwall - Can we allocate on a memory node?
- * @node: is this an allowed node?
- * @gfp_mask: memory allocation flags
- *
- * If we're in interrupt, yes, we can always allocate.  If __GFP_THISNODE is
- * set, yes, we can always allocate.  If node is in our task's mems_allowed,
- * yes.  If the task has been OOM killed and has access to memory reserves as
- * specified by the TIF_MEMDIE flag, yes.
- * Otherwise, no.
- *
- * The __GFP_THISNODE placement logic is really handled elsewhere,
- * by forcibly using a zonelist starting at a specified node, and by
- * (in get_page_from_freelist()) refusing to consider the zones for
- * any node on the zonelist except the first.  By the time any such
- * calls get to this routine, we should just shut up and say 'yes'.
- *
- * Unlike the cpuset_node_allowed_softwall() variant, above,
- * this variant requires that the node be in the current task's
- * mems_allowed or that we're in interrupt.  It does not scan up the
- * cpuset hierarchy for the nearest enclosing mem_exclusive cpuset.
- * It never sleeps.
- */
-int __cpuset_node_allowed_hardwall(int node, gfp_t gfp_mask)
-{
-	if (in_interrupt() || (gfp_mask & __GFP_THISNODE))
-		return 1;
-	if (node_isset(node, current->mems_allowed))
-		return 1;
-	/*
-	 * Allow tasks that have access to memory reserves because they have
-	 * been OOM killed to get memory anywhere.
-	 */
-	if (unlikely(test_thread_flag(TIF_MEMDIE)))
-		return 1;
-	return 0;
-}
-
 /**
  * cpuset_mem_spread_node() - On which node to begin search for a file page
  * cpuset_slab_spread_node() - On which node to begin search for a slab page
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index eeceeeb09019..e4e911e38fb8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -582,7 +582,7 @@ retry_cpuset:
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 						MAX_NR_ZONES - 1, nodemask) {
-		if (cpuset_zone_allowed_softwall(zone, htlb_alloc_mask(h))) {
+		if (cpuset_zone_allowed(zone, htlb_alloc_mask(h))) {
 			page = dequeue_huge_page_node(h, zone_to_nid(zone));
 			if (page) {
 				if (avoid_reserve)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1e11df8fa7ec..2836ec2fad6d 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -233,7 +233,7 @@ static enum oom_constraint constrained_alloc(struct zonelist *zonelist,
 	/* Check this allocation failure is caused by cpuset's wall function */
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 			high_zoneidx, nodemask)
-		if (!cpuset_zone_allowed_softwall(zone, gfp_mask))
+		if (!cpuset_zone_allowed(zone, gfp_mask))
 			cpuset_limited = true;
 
 	if (cpuset_limited) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18cee0d4c8a2..67971482d5a3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1963,7 +1963,7 @@ zonelist_scan:
 
 	/*
 	 * Scan zonelist, looking for a zone with enough free.
-	 * See also __cpuset_node_allowed_softwall() comment in kernel/cpuset.c.
+	 * See also __cpuset_node_allowed() comment in kernel/cpuset.c.
 	 */
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 						high_zoneidx, nodemask) {
@@ -1974,7 +1974,7 @@ zonelist_scan:
 				continue;
 		if (cpusets_enabled() &&
 			(alloc_flags & ALLOC_CPUSET) &&
-			!cpuset_zone_allowed_softwall(zone, gfp_mask))
+			!cpuset_zone_allowed(zone, gfp_mask))
 				continue;
 		/*
 		 * Distribute pages in proportion to the individual
@@ -2492,7 +2492,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
 			alloc_flags |= ALLOC_HARDER;
 		/*
 		 * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
-		 * comment for __cpuset_node_allowed_softwall().
+		 * comment for __cpuset_node_allowed().
 		 */
 		alloc_flags &= ~ALLOC_CPUSET;
 	} else if (unlikely(rt_task(current)) && !in_interrupt())
diff --git a/mm/slab.c b/mm/slab.c
index a467b308c682..eb6f0cf6875c 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3051,7 +3051,7 @@ retry:
 	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 		nid = zone_to_nid(zone);
 
-		if (cpuset_zone_allowed_hardwall(zone, flags) &&
+		if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL) &&
 			get_node(cache, nid) &&
 			get_node(cache, nid)->free_objects) {
 				obj = ____cache_alloc_node(cache,
diff --git a/mm/slub.c b/mm/slub.c
index 3e8afcc07a76..1bf4e59fea45 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1672,7 +1672,8 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 
 			n = get_node(s, zone_to_nid(zone));
 
-			if (n && cpuset_zone_allowed_hardwall(zone, flags) &&
+			if (n && cpuset_zone_allowed(zone,
+						     flags | __GFP_HARDWALL) &&
 					n->nr_partial > s->min_partial) {
 				object = get_partial_node(s, n, c, flags);
 				if (object) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2836b5373b2e..19fb4cb07b23 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2399,7 +2399,8 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 		 * to global LRU.
 		 */
 		if (global_reclaim(sc)) {
-			if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
+			if (!cpuset_zone_allowed(zone,
+						 GFP_KERNEL | __GFP_HARDWALL))
 				continue;
 
 			lru_pages += zone_reclaimable_pages(zone);
@@ -3381,7 +3382,7 @@ void wakeup_kswapd(struct zone *zone, int order, enum zone_type classzone_idx)
 	if (!populated_zone(zone))
 		return;
 
-	if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
+	if (!cpuset_zone_allowed(zone, GFP_KERNEL | __GFP_HARDWALL))
 		return;
 	pgdat = zone->zone_pgdat;
 	if (pgdat->kswapd_max_order < order) {
-- 
1.7.10.4



* [PATCH 3/4] slab: fix cpuset check in fallback_alloc
  2014-09-26 14:50 [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Vladimir Davydov
  2014-09-26 14:50 ` [PATCH 1/4] cpuset: convert callback_mutex to a spinlock Vladimir Davydov
  2014-09-26 14:50 ` [PATCH 2/4] cpuset: simplify cpuset_node_allowed API Vladimir Davydov
@ 2014-09-26 14:50 ` Vladimir Davydov
  2014-09-26 16:31   ` Christoph Lameter
  2014-09-26 14:50 ` [PATCH 4/4] slub: fix cpuset check in get_any_partial Vladimir Davydov
  2014-09-29  7:25 ` [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Zefan Li
  4 siblings, 1 reply; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-26 14:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Li Zefan, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

fallback_alloc is called on kmalloc if the preferred node doesn't have
free or partial slabs and there are no pages on the node's free list
(GFP_THISNODE allocations fail). Before invoking the reclaimer, it tries
to locate a free or partial slab on other allowed nodes' lists. While
iterating over the preferred node's zonelist, it skips those zones for
which the hardwall cpuset check returns false. That means that for a task
bound to a specific node using cpusets, fallback_alloc will always ignore
free slabs on other nodes and go directly to the reclaimer, which, however,
may allocate from other nodes if cpuset.mem_hardwall is unset (the
default). As a result, the lists of free slabs on other nodes may grow
without bound, which is bad, because inactive slabs are evicted by
cache_reap only at a very slow rate and cannot be dropped forcefully.

To reproduce the issue, run a process that will walk over a directory
tree with lots of files inside a cpuset bound to a node that constantly
experiences memory pressure. Look at num_slabs vs active_slabs growth as
reported by /proc/slabinfo.

To avoid this, we should use the softwall cpuset check in fallback_alloc.
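
For reference, here is a condensed sketch of the path in question, heavily
simplified from mm/slab.c (the retry/reclaim logic and the exact allocation
call are approximated, and the helpers used are mm/slab.c internals):

/*
 * Condensed sketch of fallback_alloc()'s first pass.  With the hardwall
 * check a cpuset-bound task skips every remote zone in this loop and drops
 * straight to the page allocator, which then applies the softwall check
 * and may well allocate on those very nodes.
 */
static void *fallback_alloc_sketch(struct kmem_cache *cache, gfp_t flags)
{
	struct zonelist *zonelist = node_zonelist(numa_mem_id(), flags);
	enum zone_type high_zoneidx = gfp_zone(flags);
	struct zoneref *z;
	struct zone *zone;
	void *obj = NULL;

	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
		int nid = zone_to_nid(zone);

		if (cpuset_zone_allowed(zone, flags) &&		/* softwall */
		    get_node(cache, nid) &&
		    get_node(cache, nid)->free_objects)
			obj = ____cache_alloc_node(cache, flags, nid);
		if (obj)
			break;
	}
	return obj;	/* NULL => try the page allocator / reclaim */
}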

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 mm/slab.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index eb6f0cf6875c..e35822d07821 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3051,7 +3051,7 @@ retry:
 	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 		nid = zone_to_nid(zone);
 
-		if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL) &&
+		if (cpuset_zone_allowed(zone, flags) &&
 			get_node(cache, nid) &&
 			get_node(cache, nid)->free_objects) {
 				obj = ____cache_alloc_node(cache,
-- 
1.7.10.4



* [PATCH 4/4] slub: fix cpuset check in get_any_partial
  2014-09-26 14:50 [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Vladimir Davydov
                   ` (2 preceding siblings ...)
  2014-09-26 14:50 ` [PATCH 3/4] slab: fix cpuset check in fallback_alloc Vladimir Davydov
@ 2014-09-26 14:50 ` Vladimir Davydov
  2014-09-29  7:25 ` [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Zefan Li
  4 siblings, 0 replies; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-26 14:50 UTC (permalink / raw)
  To: linux-kernel
  Cc: Li Zefan, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

If we fail to allocate from the current node's stock, we look for free
objects on other nodes before calling the page allocator (see
get_any_partial). While checking other nodes we respect cpuset constraints
by calling cpuset_zone_allowed, enforcing the hardwall check. As a result,
we will fall back to the page allocator even if there are partial slabs
cached on other nodes, simply because those nodes are not in the current
hardwalled cpuset. However, the page allocator uses the softwall check for
kernel allocations, so it may well allocate from one of those nodes in this
case.

Therefore we should use the softwall cpuset check in get_any_partial to
conform with the cpuset check in the page allocator.
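
For reference, a minimal sketch of the per-node condition applied by
get_any_partial() after this change (simplified from mm/slub.c; the helper
name is invented for illustration):

/* Simplified sketch of get_any_partial()'s per-zone test. */
static bool remote_node_usable(struct kmem_cache *s, struct zone *zone,
			       gfp_t flags)
{
	struct kmem_cache_node *n = get_node(s, zone_to_nid(zone));

	return n && cpuset_zone_allowed(zone, flags) &&	/* softwall check */
	       n->nr_partial > s->min_partial;
}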

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 mm/slub.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1bf4e59fea45..70cfdfcb1a75 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1672,8 +1672,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 
 			n = get_node(s, zone_to_nid(zone));
 
-			if (n && cpuset_zone_allowed(zone,
-						     flags | __GFP_HARDWALL) &&
+			if (n && cpuset_zone_allowed(zone, flags) &&
 					n->nr_partial > s->min_partial) {
 				object = get_partial_node(s, n, c, flags);
 				if (object) {
-- 
1.7.10.4



* Re: [PATCH 1/4] cpuset: convert callback_mutex to a spinlock
  2014-09-26 14:50 ` [PATCH 1/4] cpuset: convert callback_mutex to a spinlock Vladimir Davydov
@ 2014-09-26 15:44   ` Christoph Lameter
  0 siblings, 0 replies; 10+ messages in thread
From: Christoph Lameter @ 2014-09-26 15:44 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: linux-kernel, Li Zefan, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

On Fri, 26 Sep 2014, Vladimir Davydov wrote:

> The callback_mutex is only used to synchronize reads/updates of cpusets'
> flags and cpu/node masks. These operations should always be fast, so there
> is no reason we can't use a spinlock instead of the mutex.

Checked that, and given the other restrictions already on the use of
callback_mutex, this is to be expected.

Acked-by: Christoph Lameter <cl@linux.com>



* Re: [PATCH 2/4] cpuset: simplify cpuset_node_allowed API
  2014-09-26 14:50 ` [PATCH 2/4] cpuset: simplify cpuset_node_allowed API Vladimir Davydov
@ 2014-09-26 15:53   ` Christoph Lameter
  0 siblings, 0 replies; 10+ messages in thread
From: Christoph Lameter @ 2014-09-26 15:53 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: linux-kernel, David Rientjes, Li Zefan, Pekka Enberg,
	Joonsoo Kim, Andrew Morton, linux-mm

On Fri, 26 Sep 2014, Vladimir Davydov wrote:

> So let's simplify the API back to the single check.

Acked-by: Christoph Lameter <cl@linux.com>


* Re: [PATCH 3/4] slab: fix cpuset check in fallback_alloc
  2014-09-26 14:50 ` [PATCH 3/4] slab: fix cpuset check in fallback_alloc Vladimir Davydov
@ 2014-09-26 16:31   ` Christoph Lameter
  2014-09-27  8:12     ` Vladimir Davydov
  0 siblings, 1 reply; 10+ messages in thread
From: Christoph Lameter @ 2014-09-26 16:31 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: linux-kernel, Li Zefan, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

On Fri, 26 Sep 2014, Vladimir Davydov wrote:

> To avoid this, we should use the softwall cpuset check in fallback_alloc.

It's weird that softwall checking occurs by setting __GFP_HARDWALL.
>
> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> ---
>  mm/slab.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index eb6f0cf6875c..e35822d07821 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3051,7 +3051,7 @@ retry:
>  	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
>  		nid = zone_to_nid(zone);
>
> -		if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL) &&
> +		if (cpuset_zone_allowed(zone, flags) &&
>  			get_node(cache, nid) &&
>  			get_node(cache, nid)->free_objects) {
>  				obj = ____cache_alloc_node(cache,
>


* Re: [PATCH 3/4] slab: fix cpuset check in fallback_alloc
  2014-09-26 16:31   ` Christoph Lameter
@ 2014-09-27  8:12     ` Vladimir Davydov
  0 siblings, 0 replies; 10+ messages in thread
From: Vladimir Davydov @ 2014-09-27  8:12 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: linux-kernel, Li Zefan, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

Hi Christoph,

On Fri, Sep 26, 2014 at 11:31:31AM -0500, Christoph Lameter wrote:
> On Fri, 26 Sep 2014, Vladimir Davydov wrote:
> 
> > To avoid this, we should use the softwall cpuset check in fallback_alloc.
> 
> Its weird that softwall checking occurs by setting __GFP_HARDWALL.

Hmm, I don't think I follow. Currently we enforce the *hardwall* check by
passing __GFP_HARDWALL to cpuset_zone_allowed(). However, we need the
softwall check there to conform to the page allocator's behavior, so I
remove the __GFP_HARDWALL flag from the cpuset_zone_allowed() call to get
the softwall check.

Actually, we initially used the softwall check in fallback_alloc(). This
was changed to the hardwall check by commit b8b50b6519afa ("mm:
fallback_alloc cpuset_zone_allowed irq fix") in order to fix a
sleep-in-atomic bug, because at that time the softwall check required
taking the callback_mutex while fallback_alloc() is called with interrupts
disabled.

Thanks,
Vladimir

> >
> > Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> > ---
> >  mm/slab.c |    2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/slab.c b/mm/slab.c
> > index eb6f0cf6875c..e35822d07821 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3051,7 +3051,7 @@ retry:
> >  	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
> >  		nid = zone_to_nid(zone);
> >
> > -		if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL) &&
> > +		if (cpuset_zone_allowed(zone, flags) &&
> >  			get_node(cache, nid) &&
> >  			get_node(cache, nid)->free_objects) {
> >  				obj = ____cache_alloc_node(cache,
> >


* Re: [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B
  2014-09-26 14:50 [PATCH 0/4] Simplify cpuset API and fix cpuset check in SL[AU]B Vladimir Davydov
                   ` (3 preceding siblings ...)
  2014-09-26 14:50 ` [PATCH 4/4] slub: fix cpuset check in get_any_partial Vladimir Davydov
@ 2014-09-29  7:25 ` Zefan Li
  4 siblings, 0 replies; 10+ messages in thread
From: Zefan Li @ 2014-09-29  7:25 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: linux-kernel, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm

On 2014/9/26 22:50, Vladimir Davydov wrote:
> Hi,
> 
> SLAB and SLUB use a hardwall cpuset check on fallback alloc, while the
> page allocator uses a softwall check for all kernel allocations. This may
> result in falling back to the page allocator even if there are free objects
> on other nodes. The SLAB algorithm is especially affected: the number of
> objects allocated in vain is unlimited, so they can theoretically eat up a
> whole NUMA node. For more details, see the comments to patches 3 and 4.
> 
> When I last sent a fix (https://lkml.org/lkml/2014/8/10/100), David
> found the whole cpuset API cumbersome and proposed to simplify it before
> fixing its users. So this patch set addresses both David's complaint
> (patches 1, 2) and the SL[AU]B issues (patches 3, 4).
> 
> Reviews are appreciated.
> 
> Thanks,
> 
> Vladimir Davydov (4):
>   cpuset: convert callback_mutex to a spinlock
>   cpuset: simplify cpuset_node_allowed API
>   slab: fix cpuset check in fallback_alloc
>   slub: fix cpuset check in get_any_partial
> 

Acked-by: Zefan Li <lizefan@huawei.com>


