* [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free
@ 2009-10-06  3:17 Lee Schermerhorn
  2009-10-06  3:17 ` [PATCH 1/11] hugetlb: rework hstate_next_node_* functions Lee Schermerhorn
                   ` (11 more replies)
  0 siblings, 12 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:17 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

PATCH 0/11 hugetlb: numa control of persistent huge pages alloc/free

Against:  2.6.31-mmotm-090925-1435 plus David Rientjes'
"nodemask: make NODEMASK_ALLOC more general" patch applied

This is V9 of a series of patches to provide control over the location
of the allocation and freeing of persistent huge pages on a NUMA
platform.   Please consider for merging into mmotm.

This series uses two mechanisms to constrain the nodes from which
persistent huge pages are allocated:  1) the NUMA mempolicy of the
task modifying a new sysctl, "nr_hugepages_mempolicy", based on a
suggestion by Mel Gorman; and 2) a subset of the hugepages hstate
sysfs attributes, added [in V4] to each node system device under:

	/sys/devices/system/node/node[0-9]*/hugepages

The per node attributes allow direct assignment of a huge page
count on a specific node, regardless of the task's mempolicy or
cpuset constraints.
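
For illustration [not part of the series], a minimal userspace sketch
that drives both interfaces; the node id, the counts and the 2048kB
page-size directory are example values, and the per node path follows
the layout added by patch 7/11:

	#include <stdio.h>

	static int write_ulong(const char *path, unsigned long val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fprintf(f, "%lu\n", val);
		return fclose(f);
	}

	int main(void)
	{
		/* 1) mempolicy-constrained: run under numactl -m <nodes> */
		write_ulong("/proc/sys/vm/nr_hugepages_mempolicy", 32);

		/* 2) per node attribute: direct assignment on node 2 */
		write_ulong("/sys/devices/system/node/node2/hugepages/"
			    "hugepages-2048kB/nr_hugepages", 16);
		return 0;
	}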

V5 addressed review comments -- changes described in patch descriptions.

V6 addressed more review comments, described in the patches.

V6 also included a 3-patch series that implements an enhancement suggested
by David Rientjes:  the default huge page nodes allowed mask will be the
nodes with memory rather than all on-line nodes, and we will allocate per
node hstate attributes only for nodes with memory.  This requires that we
register a memory on/off-line notifier and [un]register the attributes on
transitions to/from memoryless state.

V7 addressed review comments, described in the patches, and included a
new patch, originally from Mel Gorman, to define a new vm sysctl and
sysfs global hugepages attribute "nr_hugepages_mempolicy" rather than
apply mempolicy constraints to pool adjustments via the pre-existing
"nr_hugepages".  The 3 patches to restrict hugetlb to visiting only
nodes with memory and to add/remove per node hstate attributes on
memory hotplug completed V7.

V8 reorganized the sysctl and sysfs attribute handlers to default or
define the nodes_allowed mask up in the handlers and to pass the
nodes_allowed [pointer] down to set_max_huge_pages().  This cleanup
was suggested by David Rientjes.  V8 also merged Mel Gorman's
"nr_hugepages_mempolicy" back into the patch to compute nodes_allowed
from mempolicy.

V8 turned out to be too large a reorg to pull off without botching
something; V9 fixes those botches.  In the meantime, David Rientjes
posted a patch to generalize NODEMASK_ALLOC, which causes a build
error in my series.  David provided a fix for the build failure, and
I have included his fixup as patch 5/11.  V9 therefore depends on
David's patch.


* [PATCH 1/11] hugetlb:  rework hstate_next_node_* functions
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
@ 2009-10-06  3:17 ` Lee Schermerhorn
  2009-10-06  3:17 ` [PATCH 2/11] hugetlb: add nodemask arg to huge page alloc, free and surplus adjust fcns Lee Schermerhorn
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:17 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 1/11] hugetlb:  rework hstate_next_node_* functions

Against:  2.6.31-mmotm-090925-1435

V2: + cleaned up comments, removed some deemed unnecessary,
      add some suggested by review
    + removed check for !current in huge_mpol_nodes_allowed().
    + added 'current->comm' to warning message in huge_mpol_nodes_allowed().
    + added VM_BUG_ON() assertion in hugetlb.c next_node_allowed() to
      catch out of range node id.
    + add examples to patch description

V3: + factored this "cleanup" patch out of V2 patch 2/3
    + moved ahead of patch to add nodes_allowed mask to alloc funcs
      as this patch is somewhat independent from using task mempolicy
      to control huge page allocation and freeing.

Modify the hstate_next_node_* functions so that they can be called to
obtain the "start_nid".  Whereas prior to this patch we called
hstate_next_node_to_{alloc|free}() unconditionally, whether or not we
successfully allocated/freed a huge page on the node, now we call these
functions only on failure to alloc/free, in order to advance to the
next allowed node.

Factor out the next_node_allowed() function to handle wrap at the end
of node_online_map.  In this version, the allowed nodes include all
of the online nodes.
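
For illustration [not part of the patch], a minimal compilable
userspace model of the reworked traversal, with a plain bitmask
standing in for node_online_map and nodemask_t:

	#include <stdio.h>

	#define MAX_NUMNODES	8
	static unsigned int node_online_map = 0x0f;	/* nodes 0-3 online */
	static int next_nid_to_alloc;

	/* return next allowed node, wrapping at the end of the map */
	static int next_node_allowed(int nid)
	{
		do {
			nid = (nid + 1) % MAX_NUMNODES;
		} while (!(node_online_map & (1U << nid)));
		return nid;
	}

	/* return the saved nid; advance the saved hint past it */
	static int hstate_next_node_to_alloc(void)
	{
		int nid = next_nid_to_alloc;

		next_nid_to_alloc = next_node_allowed(nid);
		return nid;
	}

	int main(void)
	{
		int start_nid = hstate_next_node_to_alloc();
		int next_nid = start_nid;

		do {	/* each allowed node is attempted exactly once */
			printf("try alloc on node %d\n", next_nid);
			/* advance only on failure; here every try "fails" */
			next_nid = hstate_next_node_to_alloc();
		} while (next_nid != start_nid);
		return 0;
	}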

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: David Rientjes <rientjes@google.com>

 mm/hugetlb.c |   70 +++++++++++++++++++++++++++++++++++++----------------------
 1 file changed, 45 insertions(+), 25 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-28 10:10:39.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-09-28 10:12:20.000000000 -0400
@@ -622,6 +622,20 @@ static struct page *alloc_fresh_huge_pag
 }
 
 /*
+ * common helper function for hstate_next_node_to_{alloc|free}.
+ * return next node in node_online_map, wrapping at end.
+ */
+static int next_node_allowed(int nid)
+{
+	nid = next_node(nid, node_online_map);
+	if (nid == MAX_NUMNODES)
+		nid = first_node(node_online_map);
+	VM_BUG_ON(nid >= MAX_NUMNODES);
+
+	return nid;
+}
+
+/*
  * Use a helper variable to find the next node and then
  * copy it back to next_nid_to_alloc afterwards:
  * otherwise there's a window in which a racer might
@@ -634,12 +648,12 @@ static struct page *alloc_fresh_huge_pag
  */
 static int hstate_next_node_to_alloc(struct hstate *h)
 {
-	int next_nid;
-	next_nid = next_node(h->next_nid_to_alloc, node_online_map);
-	if (next_nid == MAX_NUMNODES)
-		next_nid = first_node(node_online_map);
+	int nid, next_nid;
+
+	nid = h->next_nid_to_alloc;
+	next_nid = next_node_allowed(nid);
 	h->next_nid_to_alloc = next_nid;
-	return next_nid;
+	return nid;
 }
 
 static int alloc_fresh_huge_page(struct hstate *h)
@@ -649,15 +663,17 @@ static int alloc_fresh_huge_page(struct
 	int next_nid;
 	int ret = 0;
 
-	start_nid = h->next_nid_to_alloc;
+	start_nid = hstate_next_node_to_alloc(h);
 	next_nid = start_nid;
 
 	do {
 		page = alloc_fresh_huge_page_node(h, next_nid);
-		if (page)
+		if (page) {
 			ret = 1;
+			break;
+		}
 		next_nid = hstate_next_node_to_alloc(h);
-	} while (!page && next_nid != start_nid);
+	} while (next_nid != start_nid);
 
 	if (ret)
 		count_vm_event(HTLB_BUDDY_PGALLOC);
@@ -668,17 +684,19 @@ static int alloc_fresh_huge_page(struct
 }
 
 /*
- * helper for free_pool_huge_page() - find next node
- * from which to free a huge page
+ * helper for free_pool_huge_page() - return the next node
+ * from which to free a huge page.  Advance the next node id
+ * whether or not we find a free huge page to free so that the
+ * next attempt to free addresses the next node.
  */
 static int hstate_next_node_to_free(struct hstate *h)
 {
-	int next_nid;
-	next_nid = next_node(h->next_nid_to_free, node_online_map);
-	if (next_nid == MAX_NUMNODES)
-		next_nid = first_node(node_online_map);
+	int nid, next_nid;
+
+	nid = h->next_nid_to_free;
+	next_nid = next_node_allowed(nid);
 	h->next_nid_to_free = next_nid;
-	return next_nid;
+	return nid;
 }
 
 /*
@@ -693,7 +711,7 @@ static int free_pool_huge_page(struct hs
 	int next_nid;
 	int ret = 0;
 
-	start_nid = h->next_nid_to_free;
+	start_nid = hstate_next_node_to_free(h);
 	next_nid = start_nid;
 
 	do {
@@ -715,9 +733,10 @@ static int free_pool_huge_page(struct hs
 			}
 			update_and_free_page(h, page);
 			ret = 1;
+			break;
 		}
 		next_nid = hstate_next_node_to_free(h);
-	} while (!ret && next_nid != start_nid);
+	} while (next_nid != start_nid);
 
 	return ret;
 }
@@ -1028,10 +1047,9 @@ int __weak alloc_bootmem_huge_page(struc
 		void *addr;
 
 		addr = __alloc_bootmem_node_nopanic(
-				NODE_DATA(h->next_nid_to_alloc),
+				NODE_DATA(hstate_next_node_to_alloc(h)),
 				huge_page_size(h), huge_page_size(h), 0);
 
-		hstate_next_node_to_alloc(h);
 		if (addr) {
 			/*
 			 * Use the beginning of the huge page to store the
@@ -1167,29 +1185,31 @@ static int adjust_pool_surplus(struct hs
 	VM_BUG_ON(delta != -1 && delta != 1);
 
 	if (delta < 0)
-		start_nid = h->next_nid_to_alloc;
+		start_nid = hstate_next_node_to_alloc(h);
 	else
-		start_nid = h->next_nid_to_free;
+		start_nid = hstate_next_node_to_free(h);
 	next_nid = start_nid;
 
 	do {
 		int nid = next_nid;
 		if (delta < 0)  {
-			next_nid = hstate_next_node_to_alloc(h);
 			/*
 			 * To shrink on this node, there must be a surplus page
 			 */
-			if (!h->surplus_huge_pages_node[nid])
+			if (!h->surplus_huge_pages_node[nid]) {
+				next_nid = hstate_next_node_to_alloc(h);
 				continue;
+			}
 		}
 		if (delta > 0) {
-			next_nid = hstate_next_node_to_free(h);
 			/*
 			 * Surplus cannot exceed the total number of pages
 			 */
 			if (h->surplus_huge_pages_node[nid] >=
-						h->nr_huge_pages_node[nid])
+						h->nr_huge_pages_node[nid]) {
+				next_nid = hstate_next_node_to_free(h);
 				continue;
+			}
 		}
 
 		h->surplus_huge_pages += delta;


* [PATCH 2/11] hugetlb:  add nodemask arg to huge page alloc, free and surplus adjust fcns
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
  2009-10-06  3:17 ` [PATCH 1/11] hugetlb: rework hstate_next_node_* functions Lee Schermerhorn
@ 2009-10-06  3:17 ` Lee Schermerhorn
  2009-10-06  9:09   ` David Rientjes
  2009-10-07  3:26   ` David Rientjes
  2009-10-06  3:17 ` [PATCH 3/11] hugetlb: factor init_nodemask_of_node Lee Schermerhorn
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:17 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 2/11] hugetlb:  add nodemask arg to huge page alloc, free and surplus adjust fcns

Against:  2.6.31-mmotm-090925-1435

V3: + moved this patch to after the "rework" of hstate_next_node_to_...
      functions as this patch is more specific to using task mempolicy
      to control huge page allocation and freeing.

V5: + removed now unneeded 'nextnid' from hstate_next_node_to_{alloc|free}
      and updated the stale comments.

V6: + move defaulting of nodes_allowed [to &node_online_map] up to
      set_max_huge_pages().  Eliminate from hstate_next_node_*()
      functions.  [David Rientjes' suggestion].
    + renamed "this_node_allowed()" to "get_valid_node_allowed()"
      [for David]

V8: + add nodemask_t arg to set_max_huge_pages().  Subsequent
      patches will pass non-default values.

In preparation for constraining huge page allocation and freeing by the
controlling task's numa mempolicy, add a "nodes_allowed" nodemask pointer
to the allocate, free and surplus adjustment functions.  For now, pass
NULL to indicate default behavior--i.e., use node_online_map.  A
subsequent patch will derive a non-default mask from the controlling
task's numa mempolicy.

Note that this method of updating the global hstate nr_hugepages under
the constraint of a nodemask simplifies keeping the global state 
consistent--especially the number of persistent and surplus pages
relative to reservations and overcommit limits.  There are undoubtedly
other ways to do this, but this works for both interfaces:  mempolicy
and per node attributes.
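
For illustration [not part of the patch], a small userspace model of
the nodes_allowed handling added here:  the saved hint may have been
left outside the mask by a previous adjustment under a different
nodes_allowed, so it is first pulled back into the allowed set (an
unsigned int stands in for nodemask_t):

	#define MAX_NUMNODES	8

	/* next node in nodes_allowed, wrapping at the end of the mask */
	static int next_node_allowed(int nid, unsigned int nodes_allowed)
	{
		do {
			nid = (nid + 1) % MAX_NUMNODES;
		} while (!(nodes_allowed & (1U << nid)));
		return nid;
	}

	/* ensure the saved hint lies within the caller's mask */
	static int get_valid_node_allowed(int nid, unsigned int nodes_allowed)
	{
		if (!(nodes_allowed & (1U << nid)))
			nid = next_node_allowed(nid, nodes_allowed);
		return nid;
	}

	int main(void)
	{
		unsigned int nodes_allowed = 0x06;	/* nodes 1 and 2 */
		int hint = 0;		/* stale hint from an earlier mask */

		/* hint is pulled forward to node 1, inside the mask */
		return get_valid_node_allowed(hint, nodes_allowed) == 1 ? 0 : 1;
	}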

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: David Rientjes <rientjes@google.com>

 mm/hugetlb.c |  127 ++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 75 insertions(+), 52 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-28 10:12:20.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-09-30 11:41:36.000000000 -0400
@@ -622,48 +622,56 @@ static struct page *alloc_fresh_huge_pag
 }
 
 /*
- * common helper function for hstate_next_node_to_{alloc|free}.
- * return next node in node_online_map, wrapping at end.
+ * common helper functions for hstate_next_node_to_{alloc|free}.
+ * We may have allocated or freed a huge page based on a different
+ * nodes_allowed previously, so h->next_node_to_{alloc|free} might
+ * be outside of *nodes_allowed.  Ensure that we use an allowed
+ * node for alloc or free.
  */
-static int next_node_allowed(int nid)
+static int next_node_allowed(int nid, nodemask_t *nodes_allowed)
 {
-	nid = next_node(nid, node_online_map);
+	nid = next_node(nid, *nodes_allowed);
 	if (nid == MAX_NUMNODES)
-		nid = first_node(node_online_map);
+		nid = first_node(*nodes_allowed);
 	VM_BUG_ON(nid >= MAX_NUMNODES);
 
 	return nid;
 }
 
+static int get_valid_node_allowed(int nid, nodemask_t *nodes_allowed)
+{
+	if (!node_isset(nid, *nodes_allowed))
+		nid = next_node_allowed(nid, nodes_allowed);
+	return nid;
+}
+
 /*
- * Use a helper variable to find the next node and then
- * copy it back to next_nid_to_alloc afterwards:
- * otherwise there's a window in which a racer might
- * pass invalid nid MAX_NUMNODES to alloc_pages_exact_node.
- * But we don't need to use a spin_lock here: it really
- * doesn't matter if occasionally a racer chooses the
- * same nid as we do.  Move nid forward in the mask even
- * if we just successfully allocated a hugepage so that
- * the next caller gets hugepages on the next node.
+ * returns the previously saved node ["this node"] from which to
+ * allocate a persistent huge page for the pool and advance the
+ * next node from which to allocate, handling wrap at end of node
+ * mask.
  */
-static int hstate_next_node_to_alloc(struct hstate *h)
+static int hstate_next_node_to_alloc(struct hstate *h,
+					nodemask_t *nodes_allowed)
 {
-	int nid, next_nid;
+	int nid;
+
+	VM_BUG_ON(!nodes_allowed);
+
+	nid = get_valid_node_allowed(h->next_nid_to_alloc, nodes_allowed);
+	h->next_nid_to_alloc = next_node_allowed(nid, nodes_allowed);
 
-	nid = h->next_nid_to_alloc;
-	next_nid = next_node_allowed(nid);
-	h->next_nid_to_alloc = next_nid;
 	return nid;
 }
 
-static int alloc_fresh_huge_page(struct hstate *h)
+static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
 {
 	struct page *page;
 	int start_nid;
 	int next_nid;
 	int ret = 0;
 
-	start_nid = hstate_next_node_to_alloc(h);
+	start_nid = hstate_next_node_to_alloc(h, nodes_allowed);
 	next_nid = start_nid;
 
 	do {
@@ -672,7 +680,7 @@ static int alloc_fresh_huge_page(struct
 			ret = 1;
 			break;
 		}
-		next_nid = hstate_next_node_to_alloc(h);
+		next_nid = hstate_next_node_to_alloc(h, nodes_allowed);
 	} while (next_nid != start_nid);
 
 	if (ret)
@@ -684,18 +692,20 @@ static int alloc_fresh_huge_page(struct
 }
 
 /*
- * helper for free_pool_huge_page() - return the next node
- * from which to free a huge page.  Advance the next node id
- * whether or not we find a free huge page to free so that the
- * next attempt to free addresses the next node.
+ * helper for free_pool_huge_page() - return the previously saved
+ * node ["this node"] from which to free a huge page.  Advance the
+ * next node id whether or not we find a free huge page to free so
+ * that the next attempt to free addresses the next node.
  */
-static int hstate_next_node_to_free(struct hstate *h)
+static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 {
-	int nid, next_nid;
+	int nid;
+
+	VM_BUG_ON(!nodes_allowed);
+
+	nid = get_valid_node_allowed(h->next_nid_to_free, nodes_allowed);
+	h->next_nid_to_free = next_node_allowed(nid, nodes_allowed);
 
-	nid = h->next_nid_to_free;
-	next_nid = next_node_allowed(nid);
-	h->next_nid_to_free = next_nid;
 	return nid;
 }
 
@@ -705,13 +715,14 @@ static int hstate_next_node_to_free(stru
  * balanced over allowed nodes.
  * Called with hugetlb_lock locked.
  */
-static int free_pool_huge_page(struct hstate *h, bool acct_surplus)
+static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
+							 bool acct_surplus)
 {
 	int start_nid;
 	int next_nid;
 	int ret = 0;
 
-	start_nid = hstate_next_node_to_free(h);
+	start_nid = hstate_next_node_to_free(h, nodes_allowed);
 	next_nid = start_nid;
 
 	do {
@@ -735,7 +746,7 @@ static int free_pool_huge_page(struct hs
 			ret = 1;
 			break;
 		}
-		next_nid = hstate_next_node_to_free(h);
+		next_nid = hstate_next_node_to_free(h, nodes_allowed);
 	} while (next_nid != start_nid);
 
 	return ret;
@@ -937,7 +948,7 @@ static void return_unused_surplus_pages(
 	 * on-line nodes for us and will handle the hstate accounting.
 	 */
 	while (nr_pages--) {
-		if (!free_pool_huge_page(h, 1))
+		if (!free_pool_huge_page(h, &node_online_map, 1))
 			break;
 	}
 }
@@ -1047,7 +1058,7 @@ int __weak alloc_bootmem_huge_page(struc
 		void *addr;
 
 		addr = __alloc_bootmem_node_nopanic(
-				NODE_DATA(hstate_next_node_to_alloc(h)),
+				NODE_DATA(hstate_next_node_to_alloc(h, NULL)),
 				huge_page_size(h), huge_page_size(h), 0);
 
 		if (addr) {
@@ -1102,7 +1113,7 @@ static void __init hugetlb_hstate_alloc_
 		if (h->order >= MAX_ORDER) {
 			if (!alloc_bootmem_huge_page(h))
 				break;
-		} else if (!alloc_fresh_huge_page(h))
+		} else if (!alloc_fresh_huge_page(h, &node_online_map))
 			break;
 	}
 	h->max_huge_pages = i;
@@ -1144,16 +1155,22 @@ static void __init report_hugepages(void
 }
 
 #ifdef CONFIG_HIGHMEM
-static void try_to_free_low(struct hstate *h, unsigned long count)
+static void try_to_free_low(struct hstate *h, unsigned long count,
+						nodemask_t *nodes_allowed)
 {
 	int i;
 
 	if (h->order >= MAX_ORDER)
 		return;
 
+	if (!nodes_allowed)
+		nodes_allowed = &node_online_map;
+
 	for (i = 0; i < MAX_NUMNODES; ++i) {
 		struct page *page, *next;
 		struct list_head *freel = &h->hugepage_freelists[i];
+		if (!node_isset(i, *nodes_allowed))
+			continue;
 		list_for_each_entry_safe(page, next, freel, lru) {
 			if (count >= h->nr_huge_pages)
 				return;
@@ -1167,7 +1184,8 @@ static void try_to_free_low(struct hstat
 	}
 }
 #else
-static inline void try_to_free_low(struct hstate *h, unsigned long count)
+static inline void try_to_free_low(struct hstate *h, unsigned long count,
+						nodemask_t *nodes_allowed)
 {
 }
 #endif
@@ -1177,7 +1195,8 @@ static inline void try_to_free_low(struc
  * balanced by operating on them in a round-robin fashion.
  * Returns 1 if an adjustment was made.
  */
-static int adjust_pool_surplus(struct hstate *h, int delta)
+static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
+				int delta)
 {
 	int start_nid, next_nid;
 	int ret = 0;
@@ -1185,9 +1204,9 @@ static int adjust_pool_surplus(struct hs
 	VM_BUG_ON(delta != -1 && delta != 1);
 
 	if (delta < 0)
-		start_nid = hstate_next_node_to_alloc(h);
+		start_nid = hstate_next_node_to_alloc(h, nodes_allowed);
 	else
-		start_nid = hstate_next_node_to_free(h);
+		start_nid = hstate_next_node_to_free(h, nodes_allowed);
 	next_nid = start_nid;
 
 	do {
@@ -1197,7 +1216,8 @@ static int adjust_pool_surplus(struct hs
 			 * To shrink on this node, there must be a surplus page
 			 */
 			if (!h->surplus_huge_pages_node[nid]) {
-				next_nid = hstate_next_node_to_alloc(h);
+				next_nid = hstate_next_node_to_alloc(h,
+								nodes_allowed);
 				continue;
 			}
 		}
@@ -1207,7 +1227,8 @@ static int adjust_pool_surplus(struct hs
 			 */
 			if (h->surplus_huge_pages_node[nid] >=
 						h->nr_huge_pages_node[nid]) {
-				next_nid = hstate_next_node_to_free(h);
+				next_nid = hstate_next_node_to_free(h,
+								nodes_allowed);
 				continue;
 			}
 		}
@@ -1222,7 +1243,8 @@ static int adjust_pool_surplus(struct hs
 }
 
 #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
-static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count)
+static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
+						nodemask_t *nodes_allowed)
 {
 	unsigned long min_count, ret;
 
@@ -1242,7 +1264,7 @@ static unsigned long set_max_huge_pages(
 	 */
 	spin_lock(&hugetlb_lock);
 	while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
-		if (!adjust_pool_surplus(h, -1))
+		if (!adjust_pool_surplus(h, nodes_allowed, -1))
 			break;
 	}
 
@@ -1253,7 +1275,7 @@ static unsigned long set_max_huge_pages(
 		 * and reducing the surplus.
 		 */
 		spin_unlock(&hugetlb_lock);
-		ret = alloc_fresh_huge_page(h);
+		ret = alloc_fresh_huge_page(h, nodes_allowed);
 		spin_lock(&hugetlb_lock);
 		if (!ret)
 			goto out;
@@ -1277,13 +1299,13 @@ static unsigned long set_max_huge_pages(
 	 */
 	min_count = h->resv_huge_pages + h->nr_huge_pages - h->free_huge_pages;
 	min_count = max(count, min_count);
-	try_to_free_low(h, min_count);
+	try_to_free_low(h, min_count, nodes_allowed);
 	while (min_count < persistent_huge_pages(h)) {
-		if (!free_pool_huge_page(h, 0))
+		if (!free_pool_huge_page(h, nodes_allowed, 0))
 			break;
 	}
 	while (count < persistent_huge_pages(h)) {
-		if (!adjust_pool_surplus(h, 1))
+		if (!adjust_pool_surplus(h, nodes_allowed, 1))
 			break;
 	}
 out:
@@ -1329,7 +1351,7 @@ static ssize_t nr_hugepages_store(struct
 	if (err)
 		return 0;
 
-	h->max_huge_pages = set_max_huge_pages(h, input);
+	h->max_huge_pages = set_max_huge_pages(h, input, &node_online_map);
 
 	return count;
 }
@@ -1571,7 +1593,8 @@ int hugetlb_sysctl_handler(struct ctl_ta
 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
 
 	if (write)
-		h->max_huge_pages = set_max_huge_pages(h, tmp);
+		h->max_huge_pages = set_max_huge_pages(h, tmp,
+							&node_online_map);
 
 	return 0;
 }


* [PATCH 3/11] hugetlb:  factor init_nodemask_of_node
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
  2009-10-06  3:17 ` [PATCH 1/11] hugetlb: rework hstate_next_node_* functions Lee Schermerhorn
  2009-10-06  3:17 ` [PATCH 2/11] hugetlb: add nodemask arg to huge page alloc, free and surplus adjust fcns Lee Schermerhorn
@ 2009-10-06  3:17 ` Lee Schermerhorn
  2009-10-07  3:21   ` David Rientjes
  2009-10-06  3:18 ` [PATCH 4/11] hugetlb: derive huge pages nodes allowed from task mempolicy Lee Schermerhorn
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:17 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 3/11] - hugetlb:  factor init_nodemask_of_node()

Against:  2.6.31-mmotm-090925-1435

New in V5 of series

V6: + rename 'init_nodemask_of_nodes()' to 'init_nodemask_of_node()'
    + redefine init_nodemask_of_node() as static inline fcn
    + move this patch back 1 in series

V8: + factor 'init_nodemask_of_node()' from nodemask_of_node()
    + drop alloc_nodemask_of_node() -- not used any more

V9: + remove extra parens around arguments now that init_nodemask_of_node
      is no longer a macro.

Factor init_nodemask_of_node() out of the nodemask_of_node()
macro.

This will be used to populate the huge pages "nodes_allowed"
nodemask for a single node when basing nodes_allowed on a
preferred/local mempolicy or when a persistent huge page
pool page count is modified via a per node sysfs attribute.
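
For illustration [not part of the patch], a compilable userspace model
of the factored helper, with a simplified nodemask_t and memset/bit
operations standing in for nodes_clear()/node_set():

	#include <string.h>

	#define MAX_NUMNODES	64
	#define BITS_PER_LONG	(8 * sizeof(unsigned long))
	typedef struct {
		unsigned long bits[MAX_NUMNODES / BITS_PER_LONG];
	} nodemask_t;

	static inline void init_nodemask_of_node(nodemask_t *mask, int node)
	{
		memset(mask, 0, sizeof(*mask));			/* nodes_clear() */
		mask->bits[node / BITS_PER_LONG] |=
				1UL << (node % BITS_PER_LONG);	/* node_set() */
	}

	int main(void)
	{
		nodemask_t mask;

		init_nodemask_of_node(&mask, 2);
		return !(mask.bits[0] & (1UL << 2));	/* mask holds node 2 */
	}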

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>

 include/linux/nodemask.h |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/include/linux/nodemask.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/nodemask.h	2009-09-30 11:19:52.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/nodemask.h	2009-09-30 11:22:01.000000000 -0400
@@ -245,14 +245,19 @@ static inline int __next_node(int n, con
 	return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
 }
 
+static inline void init_nodemask_of_node(nodemask_t *mask, int node)
+{
+	nodes_clear(*mask);
+	node_set(node, *mask);
+}
+
 #define nodemask_of_node(node)						\
 ({									\
 	typeof(_unused_nodemask_arg_) m;				\
 	if (sizeof(m) == sizeof(unsigned long)) {			\
 		m.bits[0] = 1UL<<(node);				\
 	} else {							\
-		nodes_clear(m);						\
-		node_set((node), m);					\
+		init_nodemask_of_node(&m, (node));			\
 	}								\
 	m;								\
 })


* [PATCH 4/11] hugetlb:  derive huge pages nodes allowed from task mempolicy
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (2 preceding siblings ...)
  2009-10-06  3:17 ` [PATCH 3/11] hugetlb: factor init_nodemask_of_node Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-07  3:26   ` David Rientjes
  2009-10-06  3:18 ` [PATCH 5/11] hugetlb: accommodate reworked NODEMASK_ALLOC Lee Schermerhorn
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 4/11] hugetlb:  derive huge pages nodes allowed from task mempolicy

Against:  2.6.31-mmotm-090925-1435

V2: + cleaned up comments, removed some deemed unnecessary,
      add some suggested by review
    + removed check for !current in huge_mpol_nodes_allowed().
    + added 'current->comm' to warning message in huge_mpol_nodes_allowed().
    + added VM_BUG_ON() assertion in hugetlb.c next_node_allowed() to
      catch out of range node id.
    + add examples to patch description

V3: Factored this patch from V2 patch 2/3

V4: added back missing "kfree(nodes_allowed)" in set_max_nr_hugepages()

V5: remove internal '\n' from printk in huge_mpol_nodes_allowed()

V6: + rename 'huge_mpol_nodes_allowed()" to "alloc_nodemask_of_mempolicy()"
    + move the printk() when we can't kmalloc() a nodemask_t to
      set_max_huge_pages(), as alloc_nodemask_of_mempolicy() is no longer
      hugepage specific.
    + handle movement of nodes_allowed initialization:
    ++ Don't kfree() nodes_allowed when it points at node_online_map.

V7: + drop mpol-get/put from alloc_nodemask_of_mempolicy().  Not needed
      here because the current task is examining its own mempolicy.  Add
      a comment to that effect.
    + use init_nodemask_of_node() to initialize the nodes_allowed for
      single node policies [preferred/local].

V8: + fold in subsequent patches to:
      1) define a new sysctl and hugepages sysfs attribute
         nr_hugepages_mempolicy which will modify the huge page pool
         under the current task's mempolicy.  Modifications via the
         existing nr_hugepages will continue to ignore mempolicy.
         NOTE:  This part comes from a patch from Mel Gorman.
      2) reorganize sysctl and sysfs attribute handlers to create
         and pass nodes_allowed mask to set_max_huge_pages().

V9: + fix botched patch reorg/folding in nr_hugepages_store_common()
      noted by Mel Gorman.

This patch derives a "nodes_allowed" node mask from the numa
mempolicy of the task modifying the number of persistent huge
pages to control the allocation, freeing and adjusting of surplus
huge pages when the pool page count is modified via the new sysctl
or sysfs attribute "nr_hugepages_mempolicy".  The nodes_allowed
mask is derived as follows:

* For "default" [NULL] task mempolicy, a NULL nodemask_t pointer
  is produced.  This will cause the hugetlb subsystem to use
  node_online_map as the "nodes_allowed".  This preserves the
  behavior before this patch.
* For "preferred" mempolicy, including explicit local allocation,
  a nodemask with the single preferred node will be produced.
  "local" policy will NOT track any internode migrations of the
  task adjusting nr_hugepages.
* For "bind" and "interleave" policy, the mempolicy's nodemask
  will be used.
* Other than to inform the construction of the nodes_allowed node
  mask, the actual mempolicy mode is ignored.  That is, all modes
  behave like interleave over the resulting nodes_allowed mask
  with no "fallback".

See the updated documentation [next patch] for more information
about the implications of this patch.

Examples:

Starting with:

	Node 0 HugePages_Total:     0
	Node 1 HugePages_Total:     0
	Node 2 HugePages_Total:     0
	Node 3 HugePages_Total:     0

Default behavior [with or without this patch] balances persistent
hugepage allocation across nodes [with sufficient contiguous memory]:

	sysctl vm.nr_hugepages[_mempolicy]=32

yields:

	Node 0 HugePages_Total:     8
	Node 1 HugePages_Total:     8
	Node 2 HugePages_Total:     8
	Node 3 HugePages_Total:     8

Of course, we only have nr_hugepages_mempolicy with the patch,
but with default mempolicy, nr_hugepages_mempolicy behaves the
same as nr_hugepages.

Applying mempolicy--e.g., with numactl [using '-m' a.k.a.
'--membind' because it allows multiple nodes to be specified
and it's easy to type]--we can allocate huge pages on
individual nodes or sets of nodes.  So, starting from the
condition above, with 8 huge pages per node, add 8 more to
node 2 using:

	numactl -m 2 sysctl vm.nr_hugepages_mempolicy=40

This yields:

	Node 0 HugePages_Total:     8
	Node 1 HugePages_Total:     8
	Node 2 HugePages_Total:    16
	Node 3 HugePages_Total:     8

The incremental 8 huge pages were restricted to node 2 by the
specified mempolicy.

Similarly, we can use mempolicy to free persistent huge pages
from specified nodes:

	numactl -m 0,1 sysctl vm.nr_hugepages_mempolicy=32

yields:

	Node 0 HugePages_Total:     4
	Node 1 HugePages_Total:     4
	Node 2 HugePages_Total:    16
	Node 3 HugePages_Total:     8

The 8 huge pages freed were balanced over nodes 0 and 1.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>

 include/linux/hugetlb.h   |    6 ++
 include/linux/mempolicy.h |    3 +
 kernel/sysctl.c           |   16 ++++++-
 mm/hugetlb.c              |   97 +++++++++++++++++++++++++++++++++++++++-------
 mm/mempolicy.c            |   47 ++++++++++++++++++++++
 5 files changed, 154 insertions(+), 15 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/mempolicy.c	2009-09-30 12:48:45.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c	2009-09-30 12:48:46.000000000 -0400
@@ -1564,6 +1564,53 @@ struct zonelist *huge_zonelist(struct vm
 	}
 	return zl;
 }
+
+/*
+ * init_nodemask_of_mempolicy
+ *
+ * If the current task's mempolicy is "default" [NULL], return 'false'
+ * to indicate default policy.  Otherwise, extract the policy nodemask
+ * for 'bind' or 'interleave' policy into the argument nodemask, or
+ * initialize the argument nodemask to contain the single node for
+ * 'preferred' or 'local' policy and return 'true' to indicate presence
+ * of non-default mempolicy.
+ *
+ * We don't bother with reference counting the mempolicy [mpol_get/put]
+ * because the current task is examining it's own mempolicy and a task's
+ * mempolicy is only ever changed by the task itself.
+ *
+ * N.B., it is the caller's responsibility to free a returned nodemask.
+ */
+bool init_nodemask_of_mempolicy(nodemask_t *mask)
+{
+	struct mempolicy *mempolicy;
+	int nid;
+
+	if (!current->mempolicy)
+		return false;
+
+	mempolicy = current->mempolicy;
+	switch (mempolicy->mode) {
+	case MPOL_PREFERRED:
+		if (mempolicy->flags & MPOL_F_LOCAL)
+			nid = numa_node_id();
+		else
+			nid = mempolicy->v.preferred_node;
+		init_nodemask_of_node(mask, nid);
+		break;
+
+	case MPOL_BIND:
+		/* Fall through */
+	case MPOL_INTERLEAVE:
+		*mask = mempolicy->v.nodes;
+		break;
+
+	default:
+		BUG();
+	}
+
+	return true;
+}
 #endif
 
 /* Allocate a page in interleaved policy.
Index: linux-2.6.31-mmotm-090925-1435/include/linux/mempolicy.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/mempolicy.h	2009-09-30 12:48:45.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/mempolicy.h	2009-09-30 12:48:46.000000000 -0400
@@ -201,6 +201,7 @@ extern void mpol_fix_fork_child_flag(str
 extern struct zonelist *huge_zonelist(struct vm_area_struct *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask);
+extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
 extern unsigned slab_node(struct mempolicy *policy);
 
 extern enum zone_type policy_zone;
@@ -328,6 +329,8 @@ static inline struct zonelist *huge_zone
 	return node_zonelist(0, gfp_flags);
 }
 
+static inline bool init_nodemask_of_mempolicy(nodemask_t *m) { return false; }
+
 static inline int do_migrate_pages(struct mm_struct *mm,
 			const nodemask_t *from_nodes,
 			const nodemask_t *to_nodes, int flags)
Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-30 12:48:45.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-02 21:22:04.000000000 -0400
@@ -1334,29 +1334,71 @@ static struct hstate *kobj_to_hstate(str
 	return NULL;
 }
 
-static ssize_t nr_hugepages_show(struct kobject *kobj,
+static ssize_t nr_hugepages_show_common(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
 	struct hstate *h = kobj_to_hstate(kobj);
 	return sprintf(buf, "%lu\n", h->nr_huge_pages);
 }
-static ssize_t nr_hugepages_store(struct kobject *kobj,
-		struct kobj_attribute *attr, const char *buf, size_t count)
+static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
+			struct kobject *kobj, struct kobj_attribute *attr,
+			const char *buf, size_t len)
 {
 	int err;
-	unsigned long input;
+	unsigned long count;
 	struct hstate *h = kobj_to_hstate(kobj);
+	NODEMASK_ALLOC(nodemask, nodes_allowed);
 
-	err = strict_strtoul(buf, 10, &input);
+	err = strict_strtoul(buf, 10, &count);
 	if (err)
 		return 0;
 
-	h->max_huge_pages = set_max_huge_pages(h, input, &node_online_map);
+	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
+		NODEMASK_FREE(nodes_allowed);
+		nodes_allowed = &node_online_map;
+	}
+	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
 
-	return count;
+	if (nodes_allowed != &node_online_map)
+		NODEMASK_FREE(nodes_allowed);
+
+	return len;
+}
+
+static ssize_t nr_hugepages_show(struct kobject *kobj,
+				       struct kobj_attribute *attr, char *buf)
+{
+	return nr_hugepages_show_common(kobj, attr, buf);
+}
+
+static ssize_t nr_hugepages_store(struct kobject *kobj,
+	       struct kobj_attribute *attr, const char *buf, size_t len)
+{
+	return nr_hugepages_store_common(false, kobj, attr, buf, len);
 }
 HSTATE_ATTR(nr_hugepages);
 
+#ifdef CONFIG_NUMA
+
+/*
+ * hstate attribute for optionally mempolicy-based constraint on persistent
+ * huge page alloc/free.
+ */
+static ssize_t nr_hugepages_mempolicy_show(struct kobject *kobj,
+				       struct kobj_attribute *attr, char *buf)
+{
+	return nr_hugepages_show_common(kobj, attr, buf);
+}
+
+static ssize_t nr_hugepages_mempolicy_store(struct kobject *kobj,
+	       struct kobj_attribute *attr, const char *buf, size_t len)
+{
+	return nr_hugepages_store_common(true, kobj, attr, buf, len);
+}
+HSTATE_ATTR(nr_hugepages_mempolicy);
+#endif
+
+
 static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
@@ -1412,6 +1454,9 @@ static struct attribute *hstate_attrs[]
 	&free_hugepages_attr.attr,
 	&resv_hugepages_attr.attr,
 	&surplus_hugepages_attr.attr,
+#ifdef CONFIG_NUMA
+	&nr_hugepages_mempolicy_attr.attr,
+#endif
 	NULL,
 };
 
@@ -1578,9 +1623,9 @@ static unsigned int cpuset_mems_nr(unsig
 }
 
 #ifdef CONFIG_SYSCTL
-int hugetlb_sysctl_handler(struct ctl_table *table, int write,
-			   void __user *buffer,
-			   size_t *length, loff_t *ppos)
+static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
+			 struct ctl_table *table, int write,
+			 void __user *buffer, size_t *length, loff_t *ppos)
 {
 	struct hstate *h = &default_hstate;
 	unsigned long tmp;
@@ -1592,13 +1637,39 @@ int hugetlb_sysctl_handler(struct ctl_ta
 	table->maxlen = sizeof(unsigned long);
 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
 
-	if (write)
-		h->max_huge_pages = set_max_huge_pages(h, tmp,
-							&node_online_map);
+	if (write) {
+		NODEMASK_ALLOC(nodemask, nodes_allowed);
+		if (!(obey_mempolicy &&
+			       init_nodemask_of_mempolicy(nodes_allowed))) {
+			NODEMASK_FREE(nodes_allowed);
+			nodes_allowed = &node_states[N_HIGH_MEMORY];
+		}
+		h->max_huge_pages = set_max_huge_pages(h, tmp, nodes_allowed);
+
+		if (nodes_allowed != &node_states[N_HIGH_MEMORY])
+			NODEMASK_FREE(nodes_allowed);
+	}
 
 	return 0;
 }
 
+int hugetlb_sysctl_handler(struct ctl_table *table, int write,
+			  void __user *buffer, size_t *length, loff_t *ppos)
+{
+
+	return hugetlb_sysctl_handler_common(false, table, write,
+							buffer, length, ppos);
+}
+
+#ifdef CONFIG_NUMA
+int hugetlb_mempolicy_sysctl_handler(struct ctl_table *table, int write,
+			  void __user *buffer, size_t *length, loff_t *ppos)
+{
+	return hugetlb_sysctl_handler_common(true, table, write,
+							buffer, length, ppos);
+}
+#endif /* CONFIG_NUMA */
+
 int hugetlb_treat_movable_handler(struct ctl_table *table, int write,
 			void __user *buffer,
 			size_t *length, loff_t *ppos)
Index: linux-2.6.31-mmotm-090925-1435/include/linux/hugetlb.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/hugetlb.h	2009-09-30 12:48:45.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/hugetlb.h	2009-09-30 12:48:46.000000000 -0400
@@ -23,6 +23,12 @@ void reset_vma_resv_huge_pages(struct vm
 int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
 int hugetlb_overcommit_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
 int hugetlb_treat_movable_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
+
+#ifdef CONFIG_NUMA
+int hugetlb_mempolicy_sysctl_handler(struct ctl_table *, int,
+					void __user *, size_t *, loff_t *);
+#endif
+
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *, struct vm_area_struct *);
 int follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			struct page **, struct vm_area_struct **,
Index: linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/kernel/sysctl.c	2009-09-30 12:48:45.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c	2009-09-30 12:48:46.000000000 -0400
@@ -1164,7 +1164,7 @@ static struct ctl_table vm_table[] = {
 		.extra2		= &one_hundred,
 	},
 #ifdef CONFIG_HUGETLB_PAGE
-	 {
+	{
 		.procname	= "nr_hugepages",
 		.data		= NULL,
 		.maxlen		= sizeof(unsigned long),
@@ -1172,7 +1172,19 @@ static struct ctl_table vm_table[] = {
 		.proc_handler	= &hugetlb_sysctl_handler,
 		.extra1		= (void *)&hugetlb_zero,
 		.extra2		= (void *)&hugetlb_infinity,
-	 },
+	},
+#ifdef CONFIG_NUMA
+	{
+	       .ctl_name       = CTL_UNNUMBERED,
+	       .procname       = "nr_hugepages_mempolicy",
+	       .data           = NULL,
+	       .maxlen         = sizeof(unsigned long),
+	       .mode           = 0644,
+	       .proc_handler   = &hugetlb_mempolicy_sysctl_handler,
+	       .extra1	 = (void *)&hugetlb_zero,
+	       .extra2	 = (void *)&hugetlb_infinity,
+	},
+#endif
 	 {
 		.ctl_name	= VM_HUGETLB_GROUP,
 		.procname	= "hugetlb_shm_group",


* [PATCH 5/11] hugetlb:  accommodate reworked NODEMASK_ALLOC
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (3 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 4/11] hugetlb: derive huge pages nodes allowed from task mempolicy Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-06  3:18 ` [PATCH 6/11] hugetlb: add generic definition of NUMA_NO_NODE Lee Schermerhorn
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 5/11] hugetlb:  accommodate reworked NODEMASK_ALLOC
From:	David Rientjes <rientjes@google.com>

Against:  2.6.31-mmotm-090925-1435

Depends on:  David Rientjes' "nodemask: make NODEMASK_ALLOC more general"
patch.

Fix hugetlb usage of NODEMASK_ALLOC after aforementioned patch is merged.
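
For illustration [an assumption inferred from the call sites below,
not David's actual patch], a userspace approximation of the interface
change:  the reworked macro takes the type to allocate as its first
argument, so hugetlb must pass nodemask_t explicitly.  malloc() here
stands in for the kernel's allocator:

	#include <stdlib.h>

	typedef struct { unsigned long bits[2]; } nodemask_t;

	/* first argument is now the type, per the reworked interface */
	#define NODEMASK_ALLOC(type, name)	type *name = malloc(sizeof(type))
	#define NODEMASK_FREE(name)		free(name)

	int main(void)
	{
		NODEMASK_ALLOC(nodemask_t, nodes_allowed);

		if (!nodes_allowed)
			return 1;
		NODEMASK_FREE(nodes_allowed);
		return 0;
	}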

Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

 mm/hugetlb.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-10-05 10:45:12.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-05 10:49:42.000000000 -0400
@@ -1347,7 +1347,7 @@ static ssize_t nr_hugepages_store_common
 	int err;
 	unsigned long count;
 	struct hstate *h = kobj_to_hstate(kobj);
-	NODEMASK_ALLOC(nodemask, nodes_allowed);
+	NODEMASK_ALLOC(nodemask_t, nodes_allowed);
 
 	err = strict_strtoul(buf, 10, &count);
 	if (err)
@@ -1638,7 +1638,7 @@ static int hugetlb_sysctl_handler_common
 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
 
 	if (write) {
-		NODEMASK_ALLOC(nodemask, nodes_allowed);
+		NODEMASK_ALLOC(nodemask_t, nodes_allowed);
 		if (!(obey_mempolicy &&
 			       init_nodemask_of_mempolicy(nodes_allowed))) {
 			NODEMASK_FREE(nodes_allowed);


* [PATCH 6/11] hugetlb:  add generic definition of NUMA_NO_NODE
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (4 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 5/11] hugetlb: accommodate reworked NODEMASK_ALLOC Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-06  9:28   ` David Rientjes
  2009-10-06  3:18 ` [PATCH 7/11] hugetlb: add per node hstate attributes Lee Schermerhorn
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 6/11] - hugetlb:  promote NUMA_NO_NODE to generic constant

Against:  2.6.31-mmotm-090925-1435

New in V7 of series

Move the definition of NUMA_NO_NODE from the ia64 and x86_64 arch
specific headers to the generic header 'linux/numa.h' for use in
generic code.  NUMA_NO_NODE replaces bare '-1' where it's used in
this series to indicate "no node id specified".  Ultimately, it can
be used to replace -1 elsewhere that it is used similarly.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>

 arch/ia64/include/asm/numa.h    |    2 --
 arch/x86/include/asm/topology.h |    5 ++---
 include/linux/numa.h            |    2 ++
 3 files changed, 4 insertions(+), 5 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/arch/ia64/include/asm/numa.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/arch/ia64/include/asm/numa.h	2009-09-30 15:04:40.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/arch/ia64/include/asm/numa.h	2009-09-30 15:05:19.000000000 -0400
@@ -22,8 +22,6 @@
 
 #include <asm/mmzone.h>
 
-#define NUMA_NO_NODE	-1
-
 extern u16 cpu_to_node_map[NR_CPUS] __cacheline_aligned;
 extern cpumask_t node_to_cpu_mask[MAX_NUMNODES] __cacheline_aligned;
 extern pg_data_t *pgdat_list[MAX_NUMNODES];
Index: linux-2.6.31-mmotm-090925-1435/arch/x86/include/asm/topology.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/arch/x86/include/asm/topology.h	2009-09-30 15:04:40.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/arch/x86/include/asm/topology.h	2009-09-30 15:05:19.000000000 -0400
@@ -35,11 +35,10 @@
 # endif
 #endif
 
-/* Node not present */
-#define NUMA_NO_NODE	(-1)
-
 #ifdef CONFIG_NUMA
 #include <linux/cpumask.h>
+#include <linux/numa.h>
+
 #include <asm/mpspec.h>
 
 #ifdef CONFIG_X86_32
Index: linux-2.6.31-mmotm-090925-1435/include/linux/numa.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/numa.h	2009-09-30 15:04:40.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/numa.h	2009-09-30 15:05:19.000000000 -0400
@@ -10,4 +10,6 @@
 
 #define MAX_NUMNODES    (1 << NODES_SHIFT)
 
+#define	NUMA_NO_NODE	(-1)
+
 #endif /* _LINUX_NUMA_H */


* [PATCH 7/11] hugetlb:  add per node hstate attributes
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (5 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 6/11] hugetlb: add generic definition of NUMA_NO_NODE Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-07  4:04   ` David Rientjes
  2009-10-06  3:18 ` [PATCH 8/11] hugetlb: update hugetlb documentation for NUMA controls Lee Schermerhorn
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 7/11] hugetlb:  register per node hugepages attributes

Against:  2.6.31-mmotm-090925-1435

V2:  remove dependency on kobject private bitfield.  Search
     global hstates then all per node hstates for kobject
     match in attribute show/store functions.

V3:  rebase atop the mempolicy-based hugepage alloc/free;
     use custom "nodes_allowed" to restrict alloc/free to
     a specific node via per node attributes.  Per node
     attribute overrides mempolicy.  I.e., mempolicy only
     applies to global attributes.

V5:  Fix issues raised by Mel Gorman:
     + add !NUMA versions of hugetlb_[un]register_node()
     + rename 'hi' to 'i' in kobj_to_node_hstate()
     + rename (count, input) to (len, count) in nr_hugepages_store()
     + moved per node hugepages_kobj and hstate_kobjs[] from the
       struct node [sysdev] to hugetlb.c private arrays.
     + changed registration mechanism so that hugetlbfs [a module]
       register its attributes registration callbacks with the node
       driver, eliminating the dependency between the node driver
       and hugetlbfs.  From its init func, hugetlbfs will register
       all on-line nodes' hugepage sysfs attributes along with
       hugetlbfs' attributes register/unregister functions.  The
       node driver will use these functions to [un]register nodes
       with hugetlbfs on node hot-plug.
     + replaced hugetlb.c private "nodes_allowed_from_node()" with
       [new] generic "alloc_nodemask_of_node()".

V5a: + fix !NUMA register_hugetlbfs_with_node():  don't use
       keyword 'do' as parameter name!

V6:  + Use NUMA_NO_NODE for unspecified node id throughout hugetlb.c
       to indicate that we didn't get there via a per node attribute.
       Drop redundant "NO_NODEID_SPECIFIED" definition.
     + handle movement of defaulting of nodes_allowed up to
       set_max_huge_pages()

V7:  + add ifdefs + stubs to eliminate unneeded hugetlb registration
       functions when HUGETLBFS is not configured.
     + add some comments to per node hstate registration code in
       hugetlb.c

V8:  + folded in subsequent patch to reorganize sysctl and sysfs
       attribute handlers to pass the nodes_allowed mask to
       set_max_huge_pages()

V9: + fix rejects caused by new patch 5/11 -- NODEMASK_ALLOC() rework.

This patch adds the per huge page size control/query attributes
to the per node sysdevs:

/sys/devices/system/node/node<ID>/hugepages/hugepages-<size>/
	nr_hugepages       - r/w
	free_huge_pages    - r/o
	surplus_huge_pages - r/o

The patch attempts to re-use/share as much of the existing
global hstate attribute initialization and handling, and the
"nodes_allowed" constraint processing as possible.
Calling set_max_huge_pages() with no node indicates a change to
global hstate parameters.  In this case, any non-default task
mempolicy will be used to generate the nodes_allowed mask.  A
valid node id indicates an update to that node's hstate
parameters, and the count argument specifies the target count
for the specified node.  From this info, we compute the target
global count for the hstate and construct a nodes_allowed node
mask containing only the specified node.

Setting the node specific nr_hugepages via the per node attribute
effectively ignores any task mempolicy or cpuset constraints.
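
For example, with hypothetical numbers:  if the hstate currently
holds 32 huge pages in total, 8 of them on node 2, then writing 16
to node 2's nr_hugepages yields

	count = 16 + (32 - 8) = 40	[target global count]

and a nodes_allowed mask containing only node 2, so the 8 additional
pages can be allocated only on node 2; the other nodes' counts are
unchanged.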

With this patch:

(me):ls /sys/devices/system/node/node0/hugepages/hugepages-2048kB
./  ../  free_hugepages  nr_hugepages  surplus_hugepages

Starting from:
Node 0 HugePages_Total:     0
Node 0 HugePages_Free:      0
Node 0 HugePages_Surp:      0
Node 1 HugePages_Total:     0
Node 1 HugePages_Free:      0
Node 1 HugePages_Surp:      0
Node 2 HugePages_Total:     0
Node 2 HugePages_Free:      0
Node 2 HugePages_Surp:      0
Node 3 HugePages_Total:     0
Node 3 HugePages_Free:      0
Node 3 HugePages_Surp:      0
vm.nr_hugepages = 0

Allocate 16 persistent huge pages on node 2:
(me):echo 16 >/sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages

[Note that this is equivalent to:
	numactl -m 2 hugeadm --pool-pages-min 2M:+16
]

Yields:
Node 0 HugePages_Total:     0
Node 0 HugePages_Free:      0
Node 0 HugePages_Surp:      0
Node 1 HugePages_Total:     0
Node 1 HugePages_Free:      0
Node 1 HugePages_Surp:      0
Node 2 HugePages_Total:    16
Node 2 HugePages_Free:     16
Node 2 HugePages_Surp:      0
Node 3 HugePages_Total:     0
Node 3 HugePages_Free:      0
Node 3 HugePages_Surp:      0
vm.nr_hugepages = 16

Global controls work as expected--reduce pool to 8 persistent huge pages:
(me):echo 8 >/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

Node 0 HugePages_Total:     0
Node 0 HugePages_Free:      0
Node 0 HugePages_Surp:      0
Node 1 HugePages_Total:     0
Node 1 HugePages_Free:      0
Node 1 HugePages_Surp:      0
Node 2 HugePages_Total:     8
Node 2 HugePages_Free:      8
Node 2 HugePages_Surp:      0
Node 3 HugePages_Total:     0
Node 3 HugePages_Free:      0
Node 3 HugePages_Surp:      0

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>

 drivers/base/node.c  |   39 +++++++
 include/linux/node.h |   11 ++
 mm/hugetlb.c         |  270 ++++++++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 295 insertions(+), 25 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/drivers/base/node.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/drivers/base/node.c	2009-10-05 13:33:00.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/drivers/base/node.c	2009-10-05 14:00:32.000000000 -0400
@@ -173,6 +173,43 @@ static ssize_t node_read_distance(struct
 }
 static SYSDEV_ATTR(distance, S_IRUGO, node_read_distance, NULL);
 
+#ifdef CONFIG_HUGETLBFS
+/*
+ * hugetlbfs per node attributes registration interface:
+ * When/if hugetlb[fs] subsystem initializes [sometime after this module],
+ * it will register its per node attributes for all nodes online at that
+ * time.  It will also call register_hugetlbfs_with_node(), below, to
+ * register its attribute registration functions with this node driver.
+ * Once these hooks have been initialized, the node driver will call into
+ * the hugetlb module to [un]register attributes for hot-plugged nodes.
+ */
+static node_registration_func_t __hugetlb_register_node;
+static node_registration_func_t __hugetlb_unregister_node;
+
+static inline void hugetlb_register_node(struct node *node)
+{
+	if (__hugetlb_register_node)
+		__hugetlb_register_node(node);
+}
+
+static inline void hugetlb_unregister_node(struct node *node)
+{
+	if (__hugetlb_unregister_node)
+		__hugetlb_unregister_node(node);
+}
+
+void register_hugetlbfs_with_node(node_registration_func_t doregister,
+				  node_registration_func_t unregister)
+{
+	__hugetlb_register_node   = doregister;
+	__hugetlb_unregister_node = unregister;
+}
+#else
+static inline void hugetlb_register_node(struct node *node) {}
+
+static inline void hugetlb_unregister_node(struct node *node) {}
+#endif
+
 
 /*
  * register_node - Setup a sysfs device for a node.
@@ -196,6 +233,7 @@ int register_node(struct node *node, int
 		sysdev_create_file(&node->sysdev, &attr_distance);
 
 		scan_unevictable_register_node(node);
+		hugetlb_register_node(node);
 	}
 	return error;
 }
@@ -216,6 +254,7 @@ void unregister_node(struct node *node)
 	sysdev_remove_file(&node->sysdev, &attr_distance);
 
 	scan_unevictable_unregister_node(node);
+	hugetlb_unregister_node(node);
 
 	sysdev_unregister(&node->sysdev);
 }
Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-10-05 13:33:21.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-05 14:00:57.000000000 -0400
@@ -24,6 +24,7 @@
 #include <asm/io.h>
 
 #include <linux/hugetlb.h>
+#include <linux/node.h>
 #include "internal.h"
 
 const unsigned long hugetlb_zero = 0, hugetlb_infinity = ~0UL;
@@ -1324,38 +1325,68 @@ out:
 static struct kobject *hugepages_kobj;
 static struct kobject *hstate_kobjs[HUGE_MAX_HSTATE];
 
-static struct hstate *kobj_to_hstate(struct kobject *kobj)
+static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp);
+
+static struct hstate *kobj_to_hstate(struct kobject *kobj, int *nidp)
 {
 	int i;
+
 	for (i = 0; i < HUGE_MAX_HSTATE; i++)
-		if (hstate_kobjs[i] == kobj)
+		if (hstate_kobjs[i] == kobj) {
+			if (nidp)
+				*nidp = NUMA_NO_NODE;
 			return &hstates[i];
-	BUG();
-	return NULL;
+		}
+
+	return kobj_to_node_hstate(kobj, nidp);
 }
 
 static ssize_t nr_hugepages_show_common(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
-	struct hstate *h = kobj_to_hstate(kobj);
-	return sprintf(buf, "%lu\n", h->nr_huge_pages);
+	struct hstate *h;
+	unsigned long nr_huge_pages;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	if (nid == NUMA_NO_NODE)
+		nr_huge_pages = h->nr_huge_pages;
+	else
+		nr_huge_pages = h->nr_huge_pages_node[nid];
+
+	return sprintf(buf, "%lu\n", nr_huge_pages);
 }
 static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
 			struct kobject *kobj, struct kobj_attribute *attr,
 			const char *buf, size_t len)
 {
 	int err;
+	int nid;
 	unsigned long count;
-	struct hstate *h = kobj_to_hstate(kobj);
+	struct hstate *h;
 	NODEMASK_ALLOC(nodemask_t, nodes_allowed);
 
 	err = strict_strtoul(buf, 10, &count);
 	if (err)
 		return 0;
 
-	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
-		NODEMASK_FREE(nodes_allowed);
-		nodes_allowed = &node_online_map;
+	h = kobj_to_hstate(kobj, &nid);
+	if (nid == NUMA_NO_NODE) {
+		/*
+		 * global hstate attribute
+		 */
+		if (!(obey_mempolicy &&
+				init_nodemask_of_mempolicy(nodes_allowed))) {
+			NODEMASK_FREE(nodes_allowed);
+			nodes_allowed = &node_states[N_HIGH_MEMORY];
+		}
+	} else {
+		/*
+		 * per node hstate attribute: adjust count to global,
+		 * but restrict alloc/free to the specified node.
+		 */
+		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+		init_nodemask_of_node(nodes_allowed, nid);
 	}
 	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
 
@@ -1402,7 +1433,7 @@ HSTATE_ATTR(nr_hugepages_mempolicy);
 static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
-	struct hstate *h = kobj_to_hstate(kobj);
+	struct hstate *h = kobj_to_hstate(kobj, NULL);
 	return sprintf(buf, "%lu\n", h->nr_overcommit_huge_pages);
 }
 static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
@@ -1410,7 +1441,7 @@ static ssize_t nr_overcommit_hugepages_s
 {
 	int err;
 	unsigned long input;
-	struct hstate *h = kobj_to_hstate(kobj);
+	struct hstate *h = kobj_to_hstate(kobj, NULL);
 
 	err = strict_strtoul(buf, 10, &input);
 	if (err)
@@ -1427,15 +1458,24 @@ HSTATE_ATTR(nr_overcommit_hugepages);
 static ssize_t free_hugepages_show(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
-	struct hstate *h = kobj_to_hstate(kobj);
-	return sprintf(buf, "%lu\n", h->free_huge_pages);
+	struct hstate *h;
+	unsigned long free_huge_pages;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	if (nid == NUMA_NO_NODE)
+		free_huge_pages = h->free_huge_pages;
+	else
+		free_huge_pages = h->free_huge_pages_node[nid];
+
+	return sprintf(buf, "%lu\n", free_huge_pages);
 }
 HSTATE_ATTR_RO(free_hugepages);
 
 static ssize_t resv_hugepages_show(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
-	struct hstate *h = kobj_to_hstate(kobj);
+	struct hstate *h = kobj_to_hstate(kobj, NULL);
 	return sprintf(buf, "%lu\n", h->resv_huge_pages);
 }
 HSTATE_ATTR_RO(resv_hugepages);
@@ -1443,8 +1483,17 @@ HSTATE_ATTR_RO(resv_hugepages);
 static ssize_t surplus_hugepages_show(struct kobject *kobj,
 					struct kobj_attribute *attr, char *buf)
 {
-	struct hstate *h = kobj_to_hstate(kobj);
-	return sprintf(buf, "%lu\n", h->surplus_huge_pages);
+	struct hstate *h;
+	unsigned long surplus_huge_pages;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	if (nid == NUMA_NO_NODE)
+		surplus_huge_pages = h->surplus_huge_pages;
+	else
+		surplus_huge_pages = h->surplus_huge_pages_node[nid];
+
+	return sprintf(buf, "%lu\n", surplus_huge_pages);
 }
 HSTATE_ATTR_RO(surplus_hugepages);
 
@@ -1464,19 +1513,21 @@ static struct attribute_group hstate_att
 	.attrs = hstate_attrs,
 };
 
-static int __init hugetlb_sysfs_add_hstate(struct hstate *h)
+static int __init hugetlb_sysfs_add_hstate(struct hstate *h,
+				struct kobject *parent,
+				struct kobject **hstate_kobjs,
+				struct attribute_group *hstate_attr_group)
 {
 	int retval;
+	int hi = h - hstates;
 
-	hstate_kobjs[h - hstates] = kobject_create_and_add(h->name,
-							hugepages_kobj);
-	if (!hstate_kobjs[h - hstates])
+	hstate_kobjs[hi] = kobject_create_and_add(h->name, parent);
+	if (!hstate_kobjs[hi])
 		return -ENOMEM;
 
-	retval = sysfs_create_group(hstate_kobjs[h - hstates],
-							&hstate_attr_group);
+	retval = sysfs_create_group(hstate_kobjs[hi], hstate_attr_group);
 	if (retval)
-		kobject_put(hstate_kobjs[h - hstates]);
+		kobject_put(hstate_kobjs[hi]);
 
 	return retval;
 }
@@ -1491,17 +1542,184 @@ static void __init hugetlb_sysfs_init(vo
 		return;
 
 	for_each_hstate(h) {
-		err = hugetlb_sysfs_add_hstate(h);
+		err = hugetlb_sysfs_add_hstate(h, hugepages_kobj,
+					 hstate_kobjs, &hstate_attr_group);
 		if (err)
 			printk(KERN_ERR "Hugetlb: Unable to add hstate %s",
 								h->name);
 	}
 }
 
+#ifdef CONFIG_NUMA
+
+/*
+ * node_hstate/s - associate per node hstate attributes, via their kobjects,
+ * with node sysdevs in node_devices[] using a parallel array.  The array
+ * index of a node sysdev or _hstate == node id.
+ * This is here to avoid any static dependency of the node sysdev driver, in
+ * the base kernel, on the hugetlb module.
+ */
+struct node_hstate {
+	struct kobject		*hugepages_kobj;
+	struct kobject		*hstate_kobjs[HUGE_MAX_HSTATE];
+};
+struct node_hstate node_hstates[MAX_NUMNODES];
+
+/*
+ * A subset of global hstate attributes for node sysdevs
+ */
+static struct attribute *per_node_hstate_attrs[] = {
+	&nr_hugepages_attr.attr,
+	&free_hugepages_attr.attr,
+	&surplus_hugepages_attr.attr,
+	NULL,
+};
+
+static struct attribute_group per_node_hstate_attr_group = {
+	.attrs = per_node_hstate_attrs,
+};
+
+/*
+ * kobj_to_node_hstate - lookup global hstate for node sysdev hstate attr kobj.
+ * Returns node id via non-NULL nidp.
+ */
+static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp)
+{
+	int nid;
+
+	for (nid = 0; nid < nr_node_ids; nid++) {
+		struct node_hstate *nhs = &node_hstates[nid];
+		int i;
+		for (i = 0; i < HUGE_MAX_HSTATE; i++)
+			if (nhs->hstate_kobjs[i] == kobj) {
+				if (nidp)
+					*nidp = nid;
+				return &hstates[i];
+			}
+	}
+
+	BUG();
+	return NULL;
+}
+
+/*
+ * Unregister hstate attributes from a single node sysdev.
+ * No-op if no hstate attributes attached.
+ */
+void hugetlb_unregister_node(struct node *node)
+{
+	struct hstate *h;
+	struct node_hstate *nhs = &node_hstates[node->sysdev.id];
+
+	if (!nhs->hugepages_kobj)
+		return;
+
+	for_each_hstate(h)
+		if (nhs->hstate_kobjs[h - hstates]) {
+			kobject_put(nhs->hstate_kobjs[h - hstates]);
+			nhs->hstate_kobjs[h - hstates] = NULL;
+		}
+
+	kobject_put(nhs->hugepages_kobj);
+	nhs->hugepages_kobj = NULL;
+}
+
+/*
+ * hugetlb module exit:  unregister hstate attributes from node sysdevs
+ * that have them.
+ */
+static void hugetlb_unregister_all_nodes(void)
+{
+	int nid;
+
+	/*
+	 * disable node sysdev registrations.
+	 */
+	register_hugetlbfs_with_node(NULL, NULL);
+
+	/*
+	 * remove hstate attributes from any nodes that have them.
+	 */
+	for (nid = 0; nid < nr_node_ids; nid++)
+		hugetlb_unregister_node(&node_devices[nid]);
+}
+
+/*
+ * Register hstate attributes for a single node sysdev.
+ * No-op if attributes already registered.
+ */
+void hugetlb_register_node(struct node *node)
+{
+	struct hstate *h;
+	struct node_hstate *nhs = &node_hstates[node->sysdev.id];
+	int err;
+
+	if (nhs->hugepages_kobj)
+		return;		/* already allocated */
+
+	nhs->hugepages_kobj = kobject_create_and_add("hugepages",
+							&node->sysdev.kobj);
+	if (!nhs->hugepages_kobj)
+		return;
+
+	for_each_hstate(h) {
+		err = hugetlb_sysfs_add_hstate(h, nhs->hugepages_kobj,
+						nhs->hstate_kobjs,
+						&per_node_hstate_attr_group);
+		if (err) {
+			printk(KERN_ERR "Hugetlb: Unable to add hstate %s"
+					" for node %d\n",
+						h->name, node->sysdev.id);
+			hugetlb_unregister_node(node);
+			break;
+		}
+	}
+}
+
+/*
+ * hugetlb init time:  register hstate attributes for all registered
+ * node sysdevs.  All on-line nodes should have registered their
+ * associated sysdev by the time the hugetlb module initializes.
+ */
+static void hugetlb_register_all_nodes(void)
+{
+	int nid;
+
+	for (nid = 0; nid < nr_node_ids; nid++) {
+		struct node *node = &node_devices[nid];
+		if (node->sysdev.id == nid)
+			hugetlb_register_node(node);
+	}
+
+	/*
+	 * Let the node sysdev driver know we're here so it can
+	 * [un]register hstate attributes on node hotplug.
+	 */
+	register_hugetlbfs_with_node(hugetlb_register_node,
+				     hugetlb_unregister_node);
+}
+#else	/* !CONFIG_NUMA */
+
+static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp)
+{
+	BUG();
+	if (nidp)
+		*nidp = -1;
+	return NULL;
+}
+
+static void hugetlb_unregister_all_nodes(void) { }
+
+static void hugetlb_register_all_nodes(void) { }
+
+#endif
+
 static void __exit hugetlb_exit(void)
 {
 	struct hstate *h;
 
+	hugetlb_unregister_all_nodes();
+
 	for_each_hstate(h) {
 		kobject_put(hstate_kobjs[h - hstates]);
 	}
@@ -1536,6 +1754,8 @@ static int __init hugetlb_init(void)
 
 	hugetlb_sysfs_init();
 
+	hugetlb_register_all_nodes();
+
 	return 0;
 }
 module_init(hugetlb_init);
Index: linux-2.6.31-mmotm-090925-1435/include/linux/node.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/node.h	2009-10-05 13:33:00.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/node.h	2009-10-05 14:00:32.000000000 -0400
@@ -28,6 +28,7 @@ struct node {
 
 struct memory_block;
 extern struct node node_devices[];
+typedef  void (*node_registration_func_t)(struct node *);
 
 extern int register_node(struct node *, int, struct node *);
 extern void unregister_node(struct node *node);
@@ -39,6 +40,11 @@ extern int unregister_cpu_under_node(uns
 extern int register_mem_sect_under_node(struct memory_block *mem_blk,
 						int nid);
 extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk);
+
+#ifdef CONFIG_HUGETLBFS
+extern void register_hugetlbfs_with_node(node_registration_func_t doregister,
+					 node_registration_func_t unregister);
+#endif
 #else
 static inline int register_one_node(int nid)
 {
@@ -65,6 +71,11 @@ static inline int unregister_mem_sect_un
 {
 	return 0;
 }
+
+static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
+						node_registration_func_t unreg)
+{
+}
 #endif
 
 #define to_node(sys_device) container_of(sys_device, struct node, sysdev)

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 8/11] hugetlb:  update hugetlb documentation for NUMA controls.
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (6 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 7/11] hugetlb: add per node hstate attributes Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-06  3:18 ` [PATCH 9/11] hugetlb: use only nodes with memory for huge pages Lee Schermerhorn
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 8/11] hugetlb:  update hugetlb documentation for NUMA controls

Against:  2.6.31-mmotm-090925-1435

V2:  Add brief description of per node attributes.

V6:  address review comments

V8: + folded in changes for new nr_hugepages_mempolicy sysctl and
       sysfs attribute

V9: + address Randy Dunlap's comments.

This patch updates the kernel hugetlb documentation to describe the
NUMA memory-policy-based huge page management.  Additionally, the patch
includes a fair amount of rework to improve consistency, eliminate
duplication and set the context for documenting the memory policy
interaction.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>

 Documentation/vm/hugetlbpage.txt |  267 ++++++++++++++++++++++++++-------------
 1 file changed, 179 insertions(+), 88 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/Documentation/vm/hugetlbpage.txt	2009-10-05 14:00:32.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt	2009-10-05 14:01:00.000000000 -0400
@@ -11,23 +11,21 @@ This optimization is more critical now a
 (several GBs) are more readily available.
 
 Users can use the huge page support in Linux kernel by either using the mmap
-system call or standard SYSv shared memory system calls (shmget, shmat).
+system call or standard SYSV shared memory system calls (shmget, shmat).
 
 First the Linux kernel needs to be built with the CONFIG_HUGETLBFS
 (present under "File systems") and CONFIG_HUGETLB_PAGE (selected
 automatically when CONFIG_HUGETLBFS is selected) configuration
 options.
 
-The kernel built with huge page support should show the number of configured
-huge pages in the system by running the "cat /proc/meminfo" command.
+The /proc/meminfo file provides information about the total number of
+persistent hugetlb pages in the kernel's huge page pool.  It also displays
+information about the number of free, reserved and surplus huge pages and the
+default huge page size.  The huge page size is needed for generating the
+proper alignment and size of the arguments to system calls that map huge page
+regions.
 
-/proc/meminfo also provides information about the total number of hugetlb
-pages configured in the kernel.  It also displays information about the
-number of free hugetlb pages at any time.  It also displays information about
-the configured huge page size - this is needed for generating the proper
-alignment and size of the arguments to the above system calls.
-
-The output of "cat /proc/meminfo" will have lines like:
+The output of "cat /proc/meminfo" will include lines like:
 
 .....
 HugePages_Total: vvv
@@ -53,59 +51,63 @@ HugePages_Surp  is short for "surplus,"
 /proc/filesystems should also show a filesystem of type "hugetlbfs" configured
 in the kernel.
 
-/proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb
-pages in the kernel.  Super user can dynamically request more (or free some
-pre-configured) huge pages.
-The allocation (or deallocation) of hugetlb pages is possible only if there are
-enough physically contiguous free pages in system (freeing of huge pages is
-possible only if there are enough hugetlb pages free that can be transferred
-back to regular memory pool).
-
-Pages that are used as hugetlb pages are reserved inside the kernel and cannot
-be used for other purposes.
-
-Once the kernel with Hugetlb page support is built and running, a user can
-use either the mmap system call or shared memory system calls to start using
-the huge pages.  It is required that the system administrator preallocate
-enough memory for huge page purposes.
-
-The administrator can preallocate huge pages on the kernel boot command line by
-specifying the "hugepages=N" parameter, where 'N' = the number of huge pages
-requested.  This is the most reliable method for preallocating huge pages as
-memory has not yet become fragmented.
+/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
+pages in the kernel's huge page pool.  "Persistent" huge pages will be
+returned to the huge page pool when freed by a task.  A user with root
+privileges can dynamically allocate more or free some persistent huge pages
+by increasing or decreasing the value of 'nr_hugepages'.
+
+Pages that are used as huge pages are reserved inside the kernel and cannot
+be used for other purposes.  Huge pages cannot be swapped out under
+memory pressure.
+
+Once a number of huge pages have been pre-allocated to the kernel huge page
+pool, a user with appropriate privilege can use either the mmap system call
+or shared memory system calls to use the huge pages.  See the discussion of
+Using Huge Pages, below.
+
+The administrator can allocate persistent huge pages on the kernel boot
+command line by specifying the "hugepages=N" parameter, where 'N' = the
+number of huge pages requested.  This is the most reliable method of
+allocating huge pages as memory has not yet become fragmented.
 
-Some platforms support multiple huge page sizes.  To preallocate huge pages
+Some platforms support multiple huge page sizes.  To allocate huge pages
 of a specific size, one must preceed the huge pages boot command parameters
 with a huge page size selection parameter "hugepagesz=<size>".  <size> must
 be specified in bytes with optional scale suffix [kKmMgG].  The default huge
 page size may be selected with the "default_hugepagesz=<size>" boot parameter.
 
-/proc/sys/vm/nr_hugepages indicates the current number of configured [default
-size] hugetlb pages in the kernel.  Super user can dynamically request more
-(or free some pre-configured) huge pages.
-
-Use the following command to dynamically allocate/deallocate default sized
-huge pages:
+When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
+indicates the current number of pre-allocated huge pages of the default size.
+Thus, one can use the following command to dynamically allocate/deallocate
+default sized persistent huge pages:
 
 	echo 20 > /proc/sys/vm/nr_hugepages
 
-This command will try to configure 20 default sized huge pages in the system.
+This command will try to adjust the number of default sized huge pages in the
+huge page pool to 20, allocating or freeing huge pages, as required.
+
 On a NUMA platform, the kernel will attempt to distribute the huge page pool
-over the all on-line nodes.  These huge pages, allocated when nr_hugepages
-is increased, are called "persistent huge pages".
+over the set of allowed nodes specified by the NUMA memory policy of the
+task that modifies nr_hugepages.  The default for the allowed nodes--when the
+task has default memory policy--is all on-line nodes.  Allowed nodes with
+insufficient available, contiguous memory for a huge page will be silently
+skipped when allocating persistent huge pages.  See the discussion below of
+the interaction of task memory policy, cpusets and per node attributes with
+the allocation and freeing of persistent huge pages.
 
 The success or failure of huge page allocation depends on the amount of
-physically contiguous memory that is preset in system at the time of the
+physically contiguous memory that is present in the system at the time of the
 allocation attempt.  If the kernel is unable to allocate huge pages from
 some nodes in a NUMA system, it will attempt to make up the difference by
 allocating extra pages on other nodes with sufficient available contiguous
 memory, if any.
 
-System administrators may want to put this command in one of the local rc init
-files.  This will enable the kernel to request huge pages early in the boot
-process when the possibility of getting physical contiguous pages is still
-very high.  Administrators can verify the number of huge pages actually
-allocated by checking the sysctl or meminfo.  To check the per node
+System administrators may want to put this command in one of the local rc
+init files.  This will enable the kernel to allocate huge pages early in
+the boot process when the possibility of getting physical contiguous pages
+is still very high.  Administrators can verify the number of huge pages
+actually allocated by checking the sysctl or meminfo.  To check the per node
 distribution of huge pages in a NUMA system, use:
 
 	cat /sys/devices/system/node/node*/meminfo | fgrep Huge
@@ -113,45 +115,47 @@ distribution of huge pages in a NUMA sys
 /proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of
 huge pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
 requested by applications.  Writing any non-zero value into this file
-indicates that the hugetlb subsystem is allowed to try to obtain "surplus"
-huge pages from the buddy allocator, when the normal pool is exhausted. As
-these surplus huge pages go out of use, they are freed back to the buddy
-allocator.
+indicates that the hugetlb subsystem is allowed to try to obtain that
+number of "surplus" huge pages from the kernel's normal page pool, when the
+persistent huge page pool is exhausted. As these surplus huge pages become
+unused, they are freed back to the kernel's normal page pool.
 
-When increasing the huge page pool size via nr_hugepages, any surplus
+When increasing the huge page pool size via nr_hugepages, any existing surplus
 pages will first be promoted to persistent huge pages.  Then, additional
 huge pages will be allocated, if necessary and if possible, to fulfill
-the new huge page pool size.
+the new persistent huge page pool size.
 
-The administrator may shrink the pool of preallocated huge pages for
+The administrator may shrink the pool of persistent huge pages for
 the default huge page size by setting the nr_hugepages sysctl to a
 smaller value.  The kernel will attempt to balance the freeing of huge pages
-across all on-line nodes.  Any free huge pages on the selected nodes will
-be freed back to the buddy allocator.
-
-Caveat: Shrinking the pool via nr_hugepages such that it becomes less
-than the number of huge pages in use will convert the balance to surplus
-huge pages even if it would exceed the overcommit value.  As long as
-this condition holds, however, no more surplus huge pages will be
-allowed on the system until one of the two sysctls are increased
-sufficiently, or the surplus huge pages go out of use and are freed.
+across all nodes in the memory policy of the task modifying nr_hugepages.
+Any free huge pages on the selected nodes will be freed back to the kernel's
+normal page pool.
+
+Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
+it becomes less than the number of huge pages in use will convert the balance
+of the in-use huge pages to surplus huge pages.  This will occur even if
+the number of surplus pages would exceed the overcommit value.  As long as
+this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
+increased sufficiently, or the surplus huge pages go out of use and are freed--
+no more surplus huge pages will be allowed to be allocated.
 
 With support for multiple huge page pools at run-time available, much of
-the huge page userspace interface has been duplicated in sysfs. The above
-information applies to the default huge page size which will be
-controlled by the /proc interfaces for backwards compatibility. The root
-huge page control directory in sysfs is:
+the huge page userspace interface in /proc/sys/vm has been duplicated in sysfs.
+The /proc interfaces discussed above have been retained for backwards
+compatibility. The root huge page control directory in sysfs is:
 
 	/sys/kernel/mm/hugepages
 
 For each huge page size supported by the running kernel, a subdirectory
-will exist, of the form
+will exist, of the form:
 
 	hugepages-${size}kB
 
 Inside each of these directories, the same set of files will exist:
 
 	nr_hugepages
+	nr_hugepages_mempolicy
 	nr_overcommit_hugepages
 	free_hugepages
 	resv_hugepages
@@ -159,6 +163,101 @@ Inside each of these directories, the sa
 
 which function as described above for the default huge page-sized case.
 
+
+Interaction of Task Memory Policy with Huge Page Allocation/Freeing
+
+Whether huge pages are allocated and freed via the /proc interface or
+the /sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA
+nodes from which huge pages are allocated or freed are controlled by the
+NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
+sysctl or attribute.  When the nr_hugepages attribute is used, mempolicy
+is ignored.
+
+The recommended method to allocate or free huge pages to/from the kernel
+huge page pool, using the nr_hugepages example above, is:
+
+    numactl --interleave <node-list> echo 20 \
+				>/proc/sys/vm/nr_hugepages_mempolicy
+
+or, more succinctly:
+
+    numactl -m <node-list> echo 20 >/proc/sys/vm/nr_hugepages_mempolicy
+
+This will allocate or free abs(20 - nr_hugepages) to or from the nodes
+specified in <node-list>, depending on whether number of persistent huge pages
+is initially less than or greater than 20, respectively.  No huge pages will be
+allocated or freed on any node not included in the specified <node-list>.
+
+When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
+memory policy mode--bind, preferred, local or interleave--may be used.  The
+resulting effect on persistent huge page allocation is as follows:
+
+1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
+   persistent huge pages will be distributed across the node or nodes
+   specified in the mempolicy as if "interleave" had been specified.
+   However, if a node in the policy does not contain sufficient contiguous
+   memory for a huge page, the allocation will not "fallback" to the nearest
+   neighbor node with sufficient contiguous memory.  To do this would cause
+   undesirable imbalance in the distribution of the huge page pool, or
+   possibly, allocation of persistent huge pages on nodes not allowed by
+   the task's memory policy.
+
+2) One or more nodes may be specified with the bind or interleave policy.
+   If more than one node is specified with the preferred policy, only the
+   lowest numeric id will be used.  Local policy will select the node where
+   the task is running at the time the nodes_allowed mask is constructed.
+   For local policy to be deterministic, the task must be bound to a cpu or
+   cpus in a single node.  Otherwise, the task could be migrated to some
+   other node at any time after launch and the resulting node will be
+   indeterminate.  Thus, local policy is not very useful for this purpose.
+   Any of the other mempolicy modes may be used to specify a single node.
+
+3) The nodes allowed mask will be derived from any non-default task mempolicy,
+   whether this policy was set explicitly by the task itself or one of its
+   ancestors, such as numactl.  This means that if the task is invoked from a
+   shell with non-default policy, that policy will be used.  One can specify a
+   node list of "all" with numactl --interleave or --membind [-m] to achieve
+   interleaving over all nodes in the system or cpuset.
+
+4) Any task mempolicy specified--e.g., using numactl--will be constrained by
+   the resource limits of any cpuset in which the task runs.  Thus, there will
+   be no way for a task with non-default policy running in a cpuset with a
+   subset of the system nodes to allocate huge pages outside the cpuset
+   without first moving to a cpuset that contains all of the desired nodes.
+
+5) Boot-time huge page allocation attempts to distribute the requested number
+   of huge pages over all on-line nodes.
+
+Per Node Hugepages Attributes
+
+A subset of the contents of the root huge page control directory in sysfs,
+described above, has been replicated under each "node" system device in:
+
+	/sys/devices/system/node/node[0-9]*/hugepages/
+
+Under this directory, the subdirectory for each supported huge page size
+contains the following attribute files:
+
+	nr_hugepages
+	free_hugepages
+	surplus_hugepages
+
+The free_hugepages and surplus_hugepages attribute files are read-only.
+They return the number of free and surplus [overcommitted] huge pages,
+respectively, on the parent node.
+
+The nr_hugepages attribute returns the total number of huge pages on the
+specified node.  When this attribute is written, the number of persistent huge
+pages on the parent node will be adjusted to the specified value, if sufficient
+resources exist, regardless of the task's mempolicy or cpuset constraints.
+
+Note that the numbers of overcommit and reserve pages remain global quantities,
+as we don't know until fault time, when the faulting task's mempolicy is
+applied, from which node the huge page allocation will be attempted.
+
+
+Using Huge Pages
+
 If the user applications are going to request huge pages using mmap system
 call, then it is required that system administrator mount a file system of
 type hugetlbfs:
@@ -206,9 +305,11 @@ map_hugetlb.c.
  * requesting huge pages.
  *
  * For the ia64 architecture, the Linux kernel reserves Region number 4 for
- * huge pages.  That means the addresses starting with 0x800000... will need
- * to be specified.  Specifying a fixed address is not required on ppc64,
- * i386 or x86_64.
+ * huge pages.  That means that if one requires a fixed address, a huge page
+ * aligned address starting with 0x800000... will be required.  If a fixed
+ * address is not required, the kernel will select an address in the proper
+ * range.
+ * Other architectures, such as ppc64, i386 or x86_64 are not so constrained.
  *
  * Note: The default shared memory limit is quite low on many kernels,
  * you may need to increase it via:
@@ -237,14 +338,8 @@ map_hugetlb.c.
 
 #define dprintf(x)  printf(x)
 
-/* Only ia64 requires this */
-#ifdef __ia64__
-#define ADDR (void *)(0x8000000000000000UL)
-#define SHMAT_FLAGS (SHM_RND)
-#else
-#define ADDR (void *)(0x0UL)
+#define ADDR (void *)(0x0UL)	/* let kernel choose address */
 #define SHMAT_FLAGS (0)
-#endif
 
 int main(void)
 {
@@ -302,10 +397,12 @@ int main(void)
  * example, the app is requesting memory of size 256MB that is backed by
  * huge pages.
  *
- * For ia64 architecture, Linux kernel reserves Region number 4 for huge pages.
- * That means the addresses starting with 0x800000... will need to be
- * specified.  Specifying a fixed address is not required on ppc64, i386
- * or x86_64.
+ * For the ia64 architecture, the Linux kernel reserves Region number 4 for
+ * huge pages.  That means that if one requires a fixed address, a huge page
+ * aligned address starting with 0x800000... will be required.  If a fixed
+ * address is not required, the kernel will select an address in the proper
+ * range.
+ * Other architectures, such as ppc64, i386 or x86_64 are not so constrained.
  */
 #include <stdlib.h>
 #include <stdio.h>
@@ -317,14 +414,8 @@ int main(void)
 #define LENGTH (256UL*1024*1024)
 #define PROTECTION (PROT_READ | PROT_WRITE)
 
-/* Only ia64 requires this */
-#ifdef __ia64__
-#define ADDR (void *)(0x8000000000000000UL)
-#define FLAGS (MAP_SHARED | MAP_FIXED)
-#else
-#define ADDR (void *)(0x0UL)
+#define ADDR (void *)(0x0UL)	/* let kernel choose address */
 #define FLAGS (MAP_SHARED)
-#endif
 
 void check_bytes(char *addr)
 {

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 9/11] hugetlb:  use only nodes with memory for huge pages
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (7 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 8/11] hugetlb: update hugetlb documentation for NUMA controls Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-06  3:18 ` [PATCH 10/11] hugetlb: handle memory hot-plug events Lee Schermerhorn
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 9/11] hugetlb:  use only nodes with memory

Against:  2.6.31-mmotm-090925-1435


V9 + fix botched merge.
     s/node_online_map/node_states[N_HIGH_MEMORY]/ in
     nr_hugepages_store_common

Register per node hstate sysfs attributes only for nodes with
memory.  Global replacement of "all online nodes" with "all nodes
with memory" in mm/hugetlb.c.  Suggested by David Rientjes.

A subsequent patch will handle adding/removing of per node hstate
sysfs attributes when nodes transition to/from memoryless state
via memory hotplug.

NOTE:  this patch has not been tested with memoryless nodes.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

 Documentation/vm/hugetlbpage.txt |   12 ++++++------
 mm/hugetlb.c                     |   35 ++++++++++++++++++-----------------
 2 files changed, 24 insertions(+), 23 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-10-05 13:33:24.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-05 13:34:32.000000000 -0400
@@ -942,14 +942,14 @@ static void return_unused_surplus_pages(
 
 	/*
 	 * We want to release as many surplus pages as possible, spread
-	 * evenly across all nodes. Iterate across all nodes until we
-	 * can no longer free unreserved surplus pages. This occurs when
-	 * the nodes with surplus pages have no free pages.
-	 * free_pool_huge_page() will balance the the frees across the
-	 * on-line nodes for us and will handle the hstate accounting.
+	 * evenly across all nodes with memory. Iterate across these nodes
+	 * until we can no longer free unreserved surplus pages. This occurs
+	 * when the nodes with surplus pages have no free pages.
+	 * free_pool_huge_page() will balance the freed pages across the
+	 * on-line nodes with memory and will handle the hstate accounting.
 	 */
 	while (nr_pages--) {
-		if (!free_pool_huge_page(h, &node_online_map, 1))
+		if (!free_pool_huge_page(h, &node_states[N_HIGH_MEMORY], 1))
 			break;
 	}
 }
@@ -1053,7 +1053,7 @@ static struct page *alloc_huge_page(stru
 int __weak alloc_bootmem_huge_page(struct hstate *h)
 {
 	struct huge_bootmem_page *m;
-	int nr_nodes = nodes_weight(node_online_map);
+	int nr_nodes = nodes_weight(node_states[N_HIGH_MEMORY]);
 
 	while (nr_nodes) {
 		void *addr;
@@ -1114,7 +1114,8 @@ static void __init hugetlb_hstate_alloc_
 		if (h->order >= MAX_ORDER) {
 			if (!alloc_bootmem_huge_page(h))
 				break;
-		} else if (!alloc_fresh_huge_page(h, &node_online_map))
+		} else if (!alloc_fresh_huge_page(h,
+					 &node_states[N_HIGH_MEMORY]))
 			break;
 	}
 	h->max_huge_pages = i;
@@ -1165,7 +1166,7 @@ static void try_to_free_low(struct hstat
 		return;
 
 	if (!nodes_allowed)
-		nodes_allowed = &node_online_map;
+		nodes_allowed = &node_states[N_HIGH_MEMORY];
 
 	for (i = 0; i < MAX_NUMNODES; ++i) {
 		struct page *page, *next;
@@ -1390,7 +1391,7 @@ static ssize_t nr_hugepages_store_common
 	}
 	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
 
-	if (nodes_allowed != &node_online_map)
+	if (nodes_allowed != &node_states[N_HIGH_MEMORY])
 		NODEMASK_FREE(nodes_allowed);
 
 	return len;
@@ -1612,7 +1613,7 @@ void hugetlb_unregister_node(struct node
 	struct node_hstate *nhs = &node_hstates[node->sysdev.id];
 
 	if (!nhs->hugepages_kobj)
-		return;
+		return;		/* no hstate attributes */
 
 	for_each_hstate(h)
 		if (nhs->hstate_kobjs[h - hstates]) {
@@ -1677,15 +1678,15 @@ void hugetlb_register_node(struct node *
 }
 
 /*
- * hugetlb init time:  register hstate attributes for all registered
- * node sysdevs.  All on-line nodes should have registered their
- * associated sysdev by the time the hugetlb module initializes.
+ * hugetlb init time:  register hstate attributes for all registered node
+ * sysdevs of nodes that have memory.  All on-line nodes should have
+ * registered their associated sysdev by this time.
  */
 static void hugetlb_register_all_nodes(void)
 {
 	int nid;
 
-	for (nid = 0; nid < nr_node_ids; nid++) {
+	for_each_node_state(nid, N_HIGH_MEMORY) {
 		struct node *node = &node_devices[nid];
 		if (node->sysdev.id == nid)
 			hugetlb_register_node(node);
@@ -1779,8 +1780,8 @@ void __init hugetlb_add_hstate(unsigned
 	h->free_huge_pages = 0;
 	for (i = 0; i < MAX_NUMNODES; ++i)
 		INIT_LIST_HEAD(&h->hugepage_freelists[i]);
-	h->next_nid_to_alloc = first_node(node_online_map);
-	h->next_nid_to_free = first_node(node_online_map);
+	h->next_nid_to_alloc = first_node(node_states[N_HIGH_MEMORY]);
+	h->next_nid_to_free = first_node(node_states[N_HIGH_MEMORY]);
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
Index: linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/Documentation/vm/hugetlbpage.txt	2009-10-05 13:33:26.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt	2009-10-05 13:33:27.000000000 -0400
@@ -90,11 +90,11 @@ huge page pool to 20, allocating or free
 On a NUMA platform, the kernel will attempt to distribute the huge page pool
 over the set of allowed nodes specified by the NUMA memory policy of the
 task that modifies nr_hugepages.  The default for the allowed nodes--when the
-task has default memory policy--is all on-line nodes.  Allowed nodes with
-insufficient available, contiguous memory for a huge page will be silently
-skipped when allocating persistent huge pages.  See the discussion below of
-the interaction of task memory policy, cpusets and per node attributes with
-the allocation and freeing of persistent huge pages.
+task has default memory policy--is all on-line nodes with memory.  Allowed
+nodes with insufficient available, contiguous memory for a huge page will be
+silently skipped when allocating persistent huge pages.  See the discussion
+below of the interaction of task memory policy, cpusets and per node attributes
+with the allocation and freeing of persistent huge pages.
 
 The success or failure of huge page allocation depends on the amount of
 physically contiguous memory that is present in the system at the time of the
@@ -226,7 +226,7 @@ resulting effect on persistent huge page
    without first moving to a cpuset that contains all of the desired nodes.
 
 5) Boot-time huge page allocation attempts to distribute the requested number
-   of huge pages over all on-line nodes.
+   of huge pages over all on-line nodes with memory.
 
 Per Node Hugepages Attributes
 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 10/11] hugetlb:  handle memory hot-plug events
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (8 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 9/11] hugetlb: use only nodes with memory for huge pages Lee Schermerhorn
@ 2009-10-06  3:18 ` Lee Schermerhorn
  2009-10-07  4:12   ` David Rientjes
  2009-10-06  3:19 ` [PATCH 11/11] hugetlb: offload per node attribute registrations Lee Schermerhorn
  2009-10-06 16:02 ` [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Andi Kleen
  11 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:18 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 10/11] hugetlb:  per node attributes -- handle memory hot plug

Against:  2.6.31-mmotm-090925-1435

Register per node hstate attributes only for nodes with memory.

With Memory Hotplug, memory can be added to a memoryless node and
a node with memory can become memoryless.  Therefore, add a memory
on/off-line notifier callback to [un]register a node's attributes
on transition to/from memoryless state.
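
Reduced to its essentials, the callback looks roughly like the sketch
below (names are those used in the diff; status_change_nid is expected
to be NUMA_NO_NODE unless the node as a whole gained or lost memory):

	static int node_memory_callback(struct notifier_block *self,
					unsigned long action, void *arg)
	{
		struct memory_notify *mnb = arg;
		int nid = mnb->status_change_nid;

		if (nid == NUMA_NO_NODE)
			return NOTIFY_OK;	/* not a memoryless transition */

		if (action == MEM_ONLINE)
			hugetlb_register_node(&node_devices[nid]);
		else if (action == MEM_OFFLINE)
			hugetlb_unregister_node(&node_devices[nid]);

		return NOTIFY_OK;
	}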

N.B.,  Only tested build, boot, libhugetlbfs regression.
       i.e., no memory hotplug testing.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

 Documentation/vm/hugetlbpage.txt |    3 +-
 drivers/base/node.c              |   53 +++++++++++++++++++++++++++++++++++----
 2 files changed, 50 insertions(+), 6 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/drivers/base/node.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/drivers/base/node.c	2009-09-30 15:05:20.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/drivers/base/node.c	2009-09-30 15:05:57.000000000 -0400
@@ -177,8 +177,8 @@ static SYSDEV_ATTR(distance, S_IRUGO, no
 /*
  * hugetlbfs per node attributes registration interface:
  * When/if hugetlb[fs] subsystem initializes [sometime after this module],
- * it will register its per node attributes for all nodes online at that
- * time.  It will also call register_hugetlbfs_with_node(), below, to
+ * it will register its per node attributes for all online nodes with
+ * memory.  It will also call register_hugetlbfs_with_node(), below, to
  * register its attribute registration functions with this node driver.
  * Once these hooks have been initialized, the node driver will call into
  * the hugetlb module to [un]register attributes for hot-plugged nodes.
@@ -188,7 +188,8 @@ static node_registration_func_t __hugetl
 
 static inline void hugetlb_register_node(struct node *node)
 {
-	if (__hugetlb_register_node)
+	if (__hugetlb_register_node &&
+			node_state(node->sysdev.id, N_HIGH_MEMORY))
 		__hugetlb_register_node(node);
 }
 
@@ -233,6 +234,7 @@ int register_node(struct node *node, int
 		sysdev_create_file(&node->sysdev, &attr_distance);
 
 		scan_unevictable_register_node(node);
+
 		hugetlb_register_node(node);
 	}
 	return error;
@@ -254,7 +256,7 @@ void unregister_node(struct node *node)
 	sysdev_remove_file(&node->sysdev, &attr_distance);
 
 	scan_unevictable_unregister_node(node);
-	hugetlb_unregister_node(node);
+	hugetlb_unregister_node(node);		/* no-op, if memoryless node */
 
 	sysdev_unregister(&node->sysdev);
 }
@@ -384,8 +386,45 @@ static int link_mem_sections(int nid)
 	}
 	return err;
 }
+
+/*
+ * Handle per node hstate attribute [un]registration on transitions
+ * to/from memoryless state.
+ */
+
+static int node_memory_callback(struct notifier_block *self,
+				unsigned long action, void *arg)
+{
+	struct memory_notify *mnb = arg;
+	int nid = mnb->status_change_nid;
+
+	switch (action) {
+	case MEM_ONLINE:    /* memory successfully brought online */
+		if (nid != NUMA_NO_NODE)
+			hugetlb_register_node(&node_devices[nid]);
+		break;
+	case MEM_OFFLINE:   /* or offline */
+		if (nid != NUMA_NO_NODE)
+			hugetlb_unregister_node(&node_devices[nid]);
+		break;
+	case MEM_GOING_ONLINE:
+	case MEM_GOING_OFFLINE:
+	case MEM_CANCEL_ONLINE:
+	case MEM_CANCEL_OFFLINE:
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
 #else
 static int link_mem_sections(int nid) { return 0; }
+
+static inline int node_memory_callback(struct notifier_block *self,
+				unsigned long action, void *arg)
+{
+	return NOTIFY_OK;
+}
 #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
 
 int register_one_node(int nid)
@@ -499,13 +538,17 @@ static int node_states_init(void)
 	return err;
 }
 
+#define NODE_CALLBACK_PRI	2	/* lower than SLAB */
 static int __init register_node_type(void)
 {
 	int ret;
 
 	ret = sysdev_class_register(&node_class);
-	if (!ret)
+	if (!ret) {
 		ret = node_states_init();
+		hotplug_memory_notifier(node_memory_callback,
+					NODE_CALLBACK_PRI);
+	}
 
 	/*
 	 * Note:  we're not going to unregister the node class if we fail
Index: linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/Documentation/vm/hugetlbpage.txt	2009-09-30 15:05:31.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/Documentation/vm/hugetlbpage.txt	2009-09-30 15:05:57.000000000 -0400
@@ -231,7 +231,8 @@ resulting effect on persistent huge page
 Per Node Hugepages Attributes
 
 A subset of the contents of the root huge page control directory in sysfs,
-described above, has been replicated under each "node" system device in:
+described above, will be replicated under the system device of each
+NUMA node with memory in:
 
 	/sys/devices/system/node/node[0-9]*/hugepages/
 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH 11/11] hugetlb:  offload per node attribute registrations
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (9 preceding siblings ...)
  2009-10-06  3:18 ` [PATCH 10/11] hugetlb: handle memory hot-plug events Lee Schermerhorn
@ 2009-10-06  3:19 ` Lee Schermerhorn
  2009-10-06 16:01   ` Andi Kleen
  2009-10-07  8:24   ` [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined David Rientjes
  2009-10-06 16:02 ` [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Andi Kleen
  11 siblings, 2 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06  3:19 UTC (permalink / raw)
  To: linux-mm, linux-numa
  Cc: akpm, Mel Gorman, Randy Dunlap, Nishanth Aravamudan,
	David Rientjes, Adam Litke, Andy Whitcroft, eric.whitney

[PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread

Against:  2.6.31-mmotm-090925-1435

New in V6

V7:  + remove redundant check for memory{ful|less} node from 
       node_hugetlb_work().  Rely on [added] return from
       hugetlb_register_node() to differentiate between transitions
       to/from memoryless state.

This patch offloads the registration and unregistration of per node
hstate sysfs attributes to a worker thread rather than attempting the
allocation/attachment or detachment/freeing of the attributes in
the context of the memory hotplug handler.
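
Schematically (names as in the diff below), the notifier path shrinks to
a non-sleeping schedule_work() call, and the sysfs allocation/attachment
or detachment/freeing happens later in process context:

	static void node_hugetlb_work(struct work_struct *work)
	{
		struct node *node = container_of(work, struct node, node_work);

		/* register if the node now has memory; otherwise unregister */
		if (!hugetlb_register_node(node))
			hugetlb_unregister_node(node);
	}

	/* from the memory hotplug callback, on MEM_ONLINE or MEM_OFFLINE: */
	schedule_work(&node_devices[nid].node_work);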

N.B.,  Only tested build, boot, libhugetlbfs regression.
       i.e., no memory hotplug testing.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

 drivers/base/node.c  |   51 ++++++++++++++++++++++++++++++++++++++++++---------
 include/linux/node.h |    5 +++++
 2 files changed, 47 insertions(+), 9 deletions(-)

Index: linux-2.6.31-mmotm-090925-1435/include/linux/node.h
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/include/linux/node.h	2009-09-30 15:05:20.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/include/linux/node.h	2009-09-30 15:05:58.000000000 -0400
@@ -21,9 +21,14 @@
 
 #include <linux/sysdev.h>
 #include <linux/cpumask.h>
+#include <linux/workqueue.h>
 
 struct node {
 	struct sys_device	sysdev;
+
+#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HUGETLBFS)
+	struct work_struct	node_work;
+#endif
 };
 
 struct memory_block;
Index: linux-2.6.31-mmotm-090925-1435/drivers/base/node.c
===================================================================
--- linux-2.6.31-mmotm-090925-1435.orig/drivers/base/node.c	2009-09-30 15:05:57.000000000 -0400
+++ linux-2.6.31-mmotm-090925-1435/drivers/base/node.c	2009-09-30 15:05:58.000000000 -0400
@@ -186,11 +186,14 @@ static SYSDEV_ATTR(distance, S_IRUGO, no
 static node_registration_func_t __hugetlb_register_node;
 static node_registration_func_t __hugetlb_unregister_node;
 
-static inline void hugetlb_register_node(struct node *node)
+static inline bool hugetlb_register_node(struct node *node)
 {
 	if (__hugetlb_register_node &&
-			node_state(node->sysdev.id, N_HIGH_MEMORY))
+			node_state(node->sysdev.id, N_HIGH_MEMORY)) {
 		__hugetlb_register_node(node);
+		return true;
+	}
+	return false;
 }
 
 static inline void hugetlb_unregister_node(struct node *node)
@@ -387,10 +390,31 @@ static int link_mem_sections(int nid)
 	return err;
 }
 
+#ifdef CONFIG_HUGETLBFS
 /*
  * Handle per node hstate attribute [un]registration on transitions
  * to/from memoryless state.
  */
+static void node_hugetlb_work(struct work_struct *work)
+{
+	struct node *node = container_of(work, struct node, node_work);
+
+	/*
+	 * We only get here when a node transitions to/from memoryless state.
+	 * We can detect which transition occurred by examining whether the
+	 * node has memory now.  hugetlb_register_node() already checks this,
+	 * so we try to register the attributes.  If that fails, then the
+	 * node has transitioned to memoryless, try to unregister the
+	 * attributes.
+	 */
+	if (!hugetlb_register_node(node))
+		hugetlb_unregister_node(node);
+}
+
+static void init_node_hugetlb_work(int nid)
+{
+	INIT_WORK(&node_devices[nid].node_work, node_hugetlb_work);
+}
 
 static int node_memory_callback(struct notifier_block *self,
 				unsigned long action, void *arg)
@@ -399,14 +423,16 @@ static int node_memory_callback(struct n
 	int nid = mnb->status_change_nid;
 
 	switch (action) {
-	case MEM_ONLINE:    /* memory successfully brought online */
+	case MEM_ONLINE:
+	case MEM_OFFLINE:
+		/*
+		 * offload per node hstate [un]registration to a work thread
+		 * when transitioning to/from memoryless state.
+		 */
 		if (nid != NUMA_NO_NODE)
-			hugetlb_register_node(&node_devices[nid]);
-		break;
-	case MEM_OFFLINE:   /* or offline */
-		if (nid != NUMA_NO_NODE)
-			hugetlb_unregister_node(&node_devices[nid]);
+			schedule_work(&node_devices[nid].node_work);
 		break;
+
 	case MEM_GOING_ONLINE:
 	case MEM_GOING_OFFLINE:
 	case MEM_CANCEL_ONLINE:
@@ -417,7 +443,8 @@ static int node_memory_callback(struct n
 
 	return NOTIFY_OK;
 }
-#else
+#endif	/* CONFIG_HUGETLBFS */
+#else	/* !CONFIG_MEMORY_HOTPLUG_SPARSE */
 static int link_mem_sections(int nid) { return 0; }
 
 static inline int node_memory_callback(struct notifier_block *self,
@@ -425,6 +452,9 @@ static inline int node_memory_callback(s
 {
 	return NOTIFY_OK;
 }
+
+static void init_node_hugetlb_work(int nid) { }
+
 #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
 
 int register_one_node(int nid)
@@ -449,6 +479,9 @@ int register_one_node(int nid)
 
 		/* link memory sections under this node */
 		error = link_mem_sections(nid);
+
+		/* initialize work queue for memory hot plug */
+		init_node_hugetlb_work(nid);
 	}
 
 	return error;

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/11] hugetlb:  add nodemask arg to huge page alloc, free and surplus adjust fcns
  2009-10-06  3:17 ` [PATCH 2/11] hugetlb: add nodemask arg to huge page alloc, free and surplus adjust fcns Lee Schermerhorn
@ 2009-10-06  9:09   ` David Rientjes
  2009-10-07  3:26   ` David Rientjes
  1 sibling, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-06  9:09 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-28 10:12:20.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-09-30 11:41:36.000000000 -0400
> @@ -622,48 +622,56 @@ static struct page *alloc_fresh_huge_pag
>  }
>  
>  /*
> - * common helper function for hstate_next_node_to_{alloc|free}.
> - * return next node in node_online_map, wrapping at end.
> + * common helper functions for hstate_next_node_to_{alloc|free}.
> + * We may have allocated or freed a huge page based on a different
> + * nodes_allowed previously, so h->next_node_to_{alloc|free} might
> + * be outside of *nodes_allowed.  Ensure that we use an allowed
> + * node for alloc or free.
>   */
> -static int next_node_allowed(int nid)
> +static int next_node_allowed(int nid, nodemask_t *nodes_allowed)
>  {
> -	nid = next_node(nid, node_online_map);
> +	nid = next_node(nid, *nodes_allowed);
>  	if (nid == MAX_NUMNODES)
> -		nid = first_node(node_online_map);
> +		nid = first_node(*nodes_allowed);
>  	VM_BUG_ON(nid >= MAX_NUMNODES);
>  
>  	return nid;
>  }
>  
> +static int get_valid_node_allowed(int nid, nodemask_t *nodes_allowed)
> +{
> +	if (!node_isset(nid, *nodes_allowed))
> +		nid = next_node_allowed(nid, nodes_allowed);
> +	return nid;
> +}
> +
>  /*
> - * Use a helper variable to find the next node and then
> - * copy it back to next_nid_to_alloc afterwards:
> - * otherwise there's a window in which a racer might
> - * pass invalid nid MAX_NUMNODES to alloc_pages_exact_node.
> - * But we don't need to use a spin_lock here: it really
> - * doesn't matter if occasionally a racer chooses the
> - * same nid as we do.  Move nid forward in the mask even
> - * if we just successfully allocated a hugepage so that
> - * the next caller gets hugepages on the next node.
> + * returns the previously saved node ["this node"] from which to
> + * allocate a persistent huge page for the pool and advance the
> + * next node from which to allocate, handling wrap at end of node
> + * mask.
>   */
> -static int hstate_next_node_to_alloc(struct hstate *h)
> +static int hstate_next_node_to_alloc(struct hstate *h,
> +					nodemask_t *nodes_allowed)
>  {
> -	int nid, next_nid;
> +	int nid;
> +
> +	VM_BUG_ON(!nodes_allowed);
> +
> +	nid = get_valid_node_allowed(h->next_nid_to_alloc, nodes_allowed);
> +	h->next_nid_to_alloc = next_node_allowed(nid, nodes_allowed);
>  
> -	nid = h->next_nid_to_alloc;
> -	next_nid = next_node_allowed(nid);
> -	h->next_nid_to_alloc = next_nid;
>  	return nid;
>  }
>  
> -static int alloc_fresh_huge_page(struct hstate *h)
> +static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>  {
>  	struct page *page;
>  	int start_nid;
>  	int next_nid;
>  	int ret = 0;
>  
> -	start_nid = hstate_next_node_to_alloc(h);
> +	start_nid = hstate_next_node_to_alloc(h, nodes_allowed);
>  	next_nid = start_nid;
>  
>  	do {
> @@ -672,7 +680,7 @@ static int alloc_fresh_huge_page(struct
>  			ret = 1;
>  			break;
>  		}
> -		next_nid = hstate_next_node_to_alloc(h);
> +		next_nid = hstate_next_node_to_alloc(h, nodes_allowed);
>  	} while (next_nid != start_nid);
>  
>  	if (ret)
> @@ -684,18 +692,20 @@ static int alloc_fresh_huge_page(struct
>  }
>  
>  /*
> - * helper for free_pool_huge_page() - return the next node
> - * from which to free a huge page.  Advance the next node id
> - * whether or not we find a free huge page to free so that the
> - * next attempt to free addresses the next node.
> + * helper for free_pool_huge_page() - return the previously saved
> + * node ["this node"] from which to free a huge page.  Advance the
> + * next node id whether or not we find a free huge page to free so
> + * that the next attempt to free addresses the next node.
>   */
> -static int hstate_next_node_to_free(struct hstate *h)
> +static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
>  {
> -	int nid, next_nid;
> +	int nid;
> +
> +	VM_BUG_ON(!nodes_allowed);
> +
> +	nid = get_valid_node_allowed(h->next_nid_to_free, nodes_allowed);
> +	h->next_nid_to_free = next_node_allowed(nid, nodes_allowed);
>  
> -	nid = h->next_nid_to_free;
> -	next_nid = next_node_allowed(nid);
> -	h->next_nid_to_free = next_nid;
>  	return nid;
>  }
>  
> @@ -705,13 +715,14 @@ static int hstate_next_node_to_free(stru
>   * balanced over allowed nodes.
>   * Called with hugetlb_lock locked.
>   */
> -static int free_pool_huge_page(struct hstate *h, bool acct_surplus)
> +static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> +							 bool acct_surplus)
>  {
>  	int start_nid;
>  	int next_nid;
>  	int ret = 0;
>  
> -	start_nid = hstate_next_node_to_free(h);
> +	start_nid = hstate_next_node_to_free(h, nodes_allowed);
>  	next_nid = start_nid;
>  
>  	do {
> @@ -735,7 +746,7 @@ static int free_pool_huge_page(struct hs
>  			ret = 1;
>  			break;
>  		}
> -		next_nid = hstate_next_node_to_free(h);
> +		next_nid = hstate_next_node_to_free(h, nodes_allowed);
>  	} while (next_nid != start_nid);
>  
>  	return ret;
> @@ -937,7 +948,7 @@ static void return_unused_surplus_pages(
>  	 * on-line nodes for us and will handle the hstate accounting.
>  	 */
>  	while (nr_pages--) {
> -		if (!free_pool_huge_page(h, 1))
> +		if (!free_pool_huge_page(h, &node_online_map, 1))
>  			break;
>  	}
>  }
> @@ -1047,7 +1058,7 @@ int __weak alloc_bootmem_huge_page(struc
>  		void *addr;
>  
>  		addr = __alloc_bootmem_node_nopanic(
> -				NODE_DATA(hstate_next_node_to_alloc(h)),
> +				NODE_DATA(hstate_next_node_to_alloc(h, NULL)),
>  				huge_page_size(h), huge_page_size(h), 0);
>  
>  		if (addr) {

This will trigger the VM_BUG_ON() in hstate_next_node_to_alloc(), so it 
needs to be changed to &node_states[N_HIGH_MEMORY].  I'm wondering why it 
didn't show up in testing; CONFIG_DEBUG_VM isn't enabled by default, so 
perhaps this hasn't had any hugepages= coverage for hugepages greater 
than MAX_ORDER?

 [ The VM_BUG_ON()'s in both hstate_next_node_to_alloc() and 
   hstate_next_node_to_free() are actually unnecessary, since both will 
   quickly oops on a NULL pointer dereference in node_isset() on the 
   subsequent call to get_valid_node_allowed() if nodes_allowed really 
   is NULL. ]
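
I.e., the bootmem allocation above would become something like this
(untested):

	addr = __alloc_bootmem_node_nopanic(
			NODE_DATA(hstate_next_node_to_alloc(h,
					&node_states[N_HIGH_MEMORY])),
			huge_page_size(h), huge_page_size(h), 0);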


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 6/11] hugetlb:  add generic definition of NUMA_NO_NODE
  2009-10-06  3:18 ` [PATCH 6/11] hugetlb: add generic definition of NUMA_NO_NODE Lee Schermerhorn
@ 2009-10-06  9:28   ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-06  9:28 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, Andrew Morton, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> Index: linux-2.6.31-mmotm-090925-1435/arch/ia64/include/asm/numa.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/arch/ia64/include/asm/numa.h	2009-09-30 15:04:40.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/arch/ia64/include/asm/numa.h	2009-09-30 15:05:19.000000000 -0400
> @@ -22,8 +22,6 @@
>  
>  #include <asm/mmzone.h>
>  
> -#define NUMA_NO_NODE	-1
> -
>  extern u16 cpu_to_node_map[NR_CPUS] __cacheline_aligned;
>  extern cpumask_t node_to_cpu_mask[MAX_NUMNODES] __cacheline_aligned;
>  extern pg_data_t *pgdat_list[MAX_NUMNODES];
> Index: linux-2.6.31-mmotm-090925-1435/arch/x86/include/asm/topology.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/arch/x86/include/asm/topology.h	2009-09-30 15:04:40.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/arch/x86/include/asm/topology.h	2009-09-30 15:05:19.000000000 -0400
> @@ -35,11 +35,10 @@
>  # endif
>  #endif
>  
> -/* Node not present */
> -#define NUMA_NO_NODE	(-1)
> -
>  #ifdef CONFIG_NUMA
>  #include <linux/cpumask.h>
> +#include <linux/numa.h>
> +
>  #include <asm/mpspec.h>
>  
>  #ifdef CONFIG_X86_32

This could get nasty later because NUMA_NO_NODE is now only defined 
for CONFIG_NUMA, yet it's used in the generic hugetlb code you add in 
patch 7, which isn't dependent on that configuration.

It doesn't cause a compile error at this time, probably because some other 
header included by mm/hugetlb.c pulls in linux/numa.h indirectly.  I'd err 
on the side of caution, however, and move the #include here out from under 
#ifdef CONFIG_NUMA to avoid that hidden header file dependency.
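
I.e., something like this at the top of asm/topology.h (untested):

	#include <linux/numa.h>		/* NUMA_NO_NODE, unconditionally */

	#ifdef CONFIG_NUMA
	#include <linux/cpumask.h>
	#include <asm/mpspec.h>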


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 11/11] hugetlb:  offload per node attribute registrations
  2009-10-06  3:19 ` [PATCH 11/11] hugetlb: offload per node attribute registrations Lee Schermerhorn
@ 2009-10-06 16:01   ` Andi Kleen
  2009-10-06 16:28     ` Lee Schermerhorn
  2009-10-07  8:24   ` [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined David Rientjes
  1 sibling, 1 reply; 31+ messages in thread
From: Andi Kleen @ 2009-10-06 16:01 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, David Rientjes, Adam Litke, Andy Whitcroft,
	eric.whitney

On Mon, Oct 05, 2009 at 11:19:24PM -0400, Lee Schermerhorn wrote:
> [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> 
> Against:  2.6.31-mmotm-090925-1435
> 
> New in V6
> 
> V7:  + remove redundant check for memory{ful|less} node from 
>        node_hugetlb_work().  Rely on [added] return from
>        hugetlb_register_node() to differentiate between transitions
>        to/from memoryless state.
> 
> This patch offloads the registration and unregistration of per node
> hstate sysfs attributes to a worker thread rather than attempt the
> allocation/attachment or detachment/freeing of the attributes in 
> the context of the memory hotplug handler.

Why this change? The hotplug handler should be allowed to sleep, shouldn't it?

> N.B.,  Only tested build, boot, libhugetlbfs regression.
>        i.e., no memory hotplug testing.

Yes, you have to because I know for a fact it's broken (outside your code) :)

-Andi


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free
  2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
                   ` (10 preceding siblings ...)
  2009-10-06  3:19 ` [PATCH 11/11] hugetlb: offload per node attribute registrations Lee Schermerhorn
@ 2009-10-06 16:02 ` Andi Kleen
  11 siblings, 0 replies; 31+ messages in thread
From: Andi Kleen @ 2009-10-06 16:02 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, David Rientjes, Adam Litke, Andy Whitcroft,
	eric.whitney

On Mon, Oct 05, 2009 at 11:17:39PM -0400, Lee Schermerhorn wrote:
> PATCH 0/11 hugetlb: numa control of persistent huge pages alloc/free
> 
> Against:  2.6.31-mmotm-090925-1435 plus David Rientjes'
> "nodemask: make NODEMASK_ALLOC more general" patch applied
> 
> This is V9 of a series of patches to provide control over the location
> of the allocation and freeing of persistent huge pages on a NUMA
> platform.   Please consider for merging into mmotm.

FWIW I reviewed the series briefly and it seems good to me.

-Andi


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 11/11] hugetlb:  offload per node attribute registrations
  2009-10-06 16:01   ` Andi Kleen
@ 2009-10-06 16:28     ` Lee Schermerhorn
  2009-10-06 16:46       ` Andi Kleen
  0 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06 16:28 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, David Rientjes, Adam Litke, Andy Whitcroft,
	eric.whitney

On Tue, 2009-10-06 at 18:01 +0200, Andi Kleen wrote:
> On Mon, Oct 05, 2009 at 11:19:24PM -0400, Lee Schermerhorn wrote:
> > [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> > 
> > Against:  2.6.31-mmotm-090925-1435
> > 
> > New in V6
> > 
> > V7:  + remove redundant check for memory{ful|less} node from 
> >        node_hugetlb_work().  Rely on [added] return from
> >        hugetlb_register_node() to differentiate between transitions
> >        to/from memoryless state.
> > 
> > This patch offloads the registration and unregistration of per node
> > hstate sysfs attributes to a worker thread rather than attempt the
> > allocation/attachment or detachment/freeing of the attributes in 
> > the context of the memory hotplug handler.
> 
> Why this change? The hotplug handler should be allowed to sleep, shouldn't it?

Andi:  perhaps it can.  I'm not familiar with hotplug, so I followed a
pattern found elsewhere.  I created a separate patch in case someone
familiar with this area says I don't need it.


> 
> > N.B.,  Only tested build, boot, libhugetlbfs regression.
> >        i.e., no memory hotplug testing.
> 
> Yes, you have to because I know for a fact it's broken (outside your code) :)

We need to be able to remove all memory from a node without that node
disappearing [as I think it does on x86_64] to even exercise this code.
I think some ia64 platforms can do that, perhaps others.

Lee


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 11/11] hugetlb:  offload per node attribute registrations
  2009-10-06 16:28     ` Lee Schermerhorn
@ 2009-10-06 16:46       ` Andi Kleen
  2009-10-06 17:57         ` Lee Schermerhorn
  0 siblings, 1 reply; 31+ messages in thread
From: Andi Kleen @ 2009-10-06 16:46 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: Andi Kleen, linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, David Rientjes, Adam Litke, Andy Whitcroft,
	eric.whitney

On Tue, Oct 06, 2009 at 12:28:49PM -0400, Lee Schermerhorn wrote:
> On Tue, 2009-10-06 at 18:01 +0200, Andi Kleen wrote:
> > On Mon, Oct 05, 2009 at 11:19:24PM -0400, Lee Schermerhorn wrote:
> > > [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> > > 
> > > Against:  2.6.31-mmotm-090925-1435
> > > 
> > > New in V6
> > > 
> > > V7:  + remove redundant check for memory{ful|less} node from 
> > >        node_hugetlb_work().  Rely on [added] return from
> > >        hugetlb_register_node() to differentiate between transitions
> > >        to/from memoryless state.
> > > 
> > > This patch offloads the registration and unregistration of per node
> > > hstate sysfs attributes to a worker thread rather than attempt the
> > > allocation/attachment or detachment/freeing of the attributes in 
> > > the context of the memory hotplug handler.
> > 
> > Why this change? The hotplug handler should be allowed to sleep, shouldn't it?
> 
> > Andi:  perhaps it can.  I'm not familiar with hotplug, so I followed a
> pattern found elsewhere.  I created a separate patch in case someone
> familiar with this area says I don't need it.

At least ACPI already puts it on a work queue.

> > 
> > > N.B.,  Only tested build, boot, libhugetlbfs regression.
> > >        i.e., no memory hotplug testing.
> > 
> > Yes, you have to because I know for a fact it's broken (outside your code) :)
> 
> We need to be able to remove all memory from a node without that node
> disappearing [as I think it does on x86_64] to even exercise this code.

Are you sure? x86-64 doesn't support full node hotplug afaik.

> I think some ia64 platforms can do that, perhaps others.

I've been thinking about adding a hotadd regression test that holds back
some nodes' memory at boot and only adds it later, after boot.  That would
at least test hotadd (and hot-removal is dubious anyway).

That wouldn't be real node hotadd, but at least hotadd of all of a node's
memory (which doesn't work currently).

-Andi

P.S.: You can add Reviewed-by for me to the other patches if you want.

-- 
ak@linux.intel.com -- Speaking for myself only.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 11/11] hugetlb:  offload per node attribute registrations
  2009-10-06 16:46       ` Andi Kleen
@ 2009-10-06 17:57         ` Lee Schermerhorn
  0 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-06 17:57 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, David Rientjes, Adam Litke, Andy Whitcroft,
	eric.whitney

On Tue, 2009-10-06 at 18:46 +0200, Andi Kleen wrote:
> On Tue, Oct 06, 2009 at 12:28:49PM -0400, Lee Schermerhorn wrote:
> > On Tue, 2009-10-06 at 18:01 +0200, Andi Kleen wrote:
> > > On Mon, Oct 05, 2009 at 11:19:24PM -0400, Lee Schermerhorn wrote:
> > > > [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> > > > 
> > > > Against:  2.6.31-mmotm-090925-1435
> > > > 
> > > > New in V6
> > > > 
> > > > V7:  + remove redundant check for memory{ful|less} node from 
> > > >        node_hugetlb_work().  Rely on [added] return from
> > > >        hugetlb_register_node() to differentiate between transitions
> > > >        to/from memoryless state.
> > > > 
> > > > This patch offloads the registration and unregistration of per node
> > > > hstate sysfs attributes to a worker thread rather than attempt the
> > > > allocation/attachment or detachment/freeing of the attributes in 
> > > > the context of the memory hotplug handler.
> > > 
> > > Why this change? The hotplug handler should be allowed to sleep, shouldn't it?
> > 
> > Andi:  perhaps it can.  I'm not familiar with hotplug, so I followed a
> > pattern found elsewhere.  I created a separate patch in case someone
> > familiar with this area says I don't need it.
> 
> At least ACPI already puts it on a work queue.

Well, maybe we don't need it then.

> 
> > > 
> > > > N.B.,  Only tested build, boot, libhugetlbfs regression.
> > > >        i.e., no memory hotplug testing.
> > > 
> > > Yes, you have to because I know for a fact it's broken (outside your code) :)
> > 
> > We need to be able to remove all memory from a node without that node
> > disappearing [as I think it does on x86_64] to even exercise this code.
> 
> Are you sure? x86-64 doesn't support full node hotplug afaik.

I'll have to look.  At boot time on x86, we hide any memoryless nodes by
assigning their cpus to other nodes [currently just "round robin", but I
think this needs to change to distance-based].  I recall seeing a memory
hotplug handler that moves the cpus when a node becomes memoryless.  If
it then unregisters the node [again, I'll have to look; no time now :(],
the earlier patches handle [un]registration of the per node attributes.
The subject code only gets triggered if a node becomes memoryless as a
result of hot-remove but remains registered, or starts memoryless and
has memory hot-added.


> 
> > I think some ia64 platforms can do that, perhaps others.
> 
> I've been thinking about adding a hotadd regression test that holds back
> some nodes' memory at boot and only adds it later, after boot.  That would
> at least test hotadd (and hot-removal is dubious anyway).
> 
> That wouldn't be real node hotadd, but at least hotadd of all of a node's
> memory (which doesn't work currently).

That would be useful for testing the last 3 patches in the series, if
memoryless nodes can exist.  Otherwise, the per node attributes will
just come and go with the node itself.

> 
> -Andi
> 
> P.S.: You can add Reviewed-by for me to the other patches if you want.
> 

Thanks.  will do!


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 3/11] hugetlb:  factor init_nodemask_of_node
  2009-10-06  3:17 ` [PATCH 3/11] hugetlb: factor init_nodemask_of_node Lee Schermerhorn
@ 2009-10-07  3:21   ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07  3:21 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> [PATCH 3/11] - hugetlb:  factor init_nodemask_of_node()
> 
> Against:  2.6.31-mmotm-090925-1435
> 
> New in V5 of series
> 
> V6: + rename 'init_nodemask_of_nodes()' to 'init_nodemask_of_node()'
>     + redefine init_nodemask_of_node() as static inline fcn
>     + move this patch back 1 in series
> 
> V8: + factor 'init_nodemask_of_node()' from nodemask_of_node()
>     + drop alloc_nodemask_of_node() -- not used any more
> 
> V9: + remove extra parens around arguments now that init_nodemask_of_node
>       is no longer a macro.
> 

I don't see this change in v9; the extra parentheses are still there in 
the inline function.
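
For reference, the helper without the redundant parentheses would read:

	static inline void init_nodemask_of_node(nodemask_t *mask, int node)
	{
		nodes_clear(*mask);
		node_set(node, *mask);
	}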

> Factor init_nodemask_of_node() out of the nodemask_of_node()
> macro.
> 
> This will be used to populate the huge pages "nodes_allowed"
> nodemask for a single node when basing nodes_allowed on a
> preferred/local mempolicy or when a persistent huge page
> pool page count is modified via a per node sysfs attribute.
> 
> Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
> Acked-by: Mel Gorman <mel@csn.ul.ie>
> 
>  include/linux/nodemask.h |    9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6.31-mmotm-090925-1435/include/linux/nodemask.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/include/linux/nodemask.h	2009-09-30 11:19:52.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/include/linux/nodemask.h	2009-09-30 11:22:01.000000000 -0400
> @@ -245,14 +245,19 @@ static inline int __next_node(int n, con
>  	return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1));
>  }
>  
> +static inline void init_nodemask_of_node(nodemask_t *mask, int node)
> +{
> +	nodes_clear(*(mask));
> +	node_set((node), *(mask));
> +}
> +
>  #define nodemask_of_node(node)						\
>  ({									\
>  	typeof(_unused_nodemask_arg_) m;				\
>  	if (sizeof(m) == sizeof(unsigned long)) {			\
>  		m.bits[0] = 1UL<<(node);				\
>  	} else {							\
> -		nodes_clear(m);						\
> -		node_set((node), m);					\
> +		init_nodemask_of_node(&m, (node));			\
>  	}								\
>  	m;								\
>  })
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 4/11] hugetlb:  derive huge pages nodes allowed from task mempolicy
  2009-10-06  3:18 ` [PATCH 4/11] hugetlb: derive huge pages nodes allowed from task mempolicy Lee Schermerhorn
@ 2009-10-07  3:26   ` David Rientjes
  2009-10-07 16:30     ` Lee Schermerhorn
  0 siblings, 1 reply; 31+ messages in thread
From: David Rientjes @ 2009-10-07  3:26 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> Index: linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/mm/mempolicy.c	2009-09-30 12:48:45.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c	2009-09-30 12:48:46.000000000 -0400
> @@ -1564,6 +1564,53 @@ struct zonelist *huge_zonelist(struct vm
>  	}
>  	return zl;
>  }
> +
> +/*
> + * init_nodemask_of_mempolicy
> + *
> + * If the current task's mempolicy is "default" [NULL], return 'false'
> + * to indicate * default policy.  Otherwise, extract the policy nodemask
> + * for 'bind' * or 'interleave' policy into the argument nodemask, or
> + * initialize the argument nodemask to contain the single node for
> + * 'preferred' or * 'local' policy and return 'true' to indicate presence
> + * of non-default mempolicy.
> + *

Looks like some mangling of the comment; there are spurious '*'s throughout.

> + * We don't bother with reference counting the mempolicy [mpol_get/put]
> + * because the current task is examining its own mempolicy and a task's
> + * mempolicy is only ever changed by the task itself.
> + *
> + * N.B., it is the caller's responsibility to free a returned nodemask.
> + */
> +bool init_nodemask_of_mempolicy(nodemask_t *mask)
> +{
> +	struct mempolicy *mempolicy;
> +	int nid;
> +
> +	if (!current->mempolicy)
> +		return false;
> +
> +	mempolicy = current->mempolicy;
> +	switch (mempolicy->mode) {
> +	case MPOL_PREFERRED:
> +		if (mempolicy->flags & MPOL_F_LOCAL)
> +			nid = numa_node_id();
> +		else
> +			nid = mempolicy->v.preferred_node;
> +		init_nodemask_of_node(mask, nid);
> +		break;
> +
> +	case MPOL_BIND:
> +		/* Fall through */
> +	case MPOL_INTERLEAVE:
> +		*mask =  mempolicy->v.nodes;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	return true;
> +}
>  #endif
>  
>  /* Allocate a page in interleaved policy.
> Index: linux-2.6.31-mmotm-090925-1435/include/linux/mempolicy.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/include/linux/mempolicy.h	2009-09-30 12:48:45.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/include/linux/mempolicy.h	2009-09-30 12:48:46.000000000 -0400
> @@ -201,6 +201,7 @@ extern void mpol_fix_fork_child_flag(str
>  extern struct zonelist *huge_zonelist(struct vm_area_struct *vma,
>  				unsigned long addr, gfp_t gfp_flags,
>  				struct mempolicy **mpol, nodemask_t **nodemask);
> +extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
>  extern unsigned slab_node(struct mempolicy *policy);
>  
>  extern enum zone_type policy_zone;
> @@ -328,6 +329,8 @@ static inline struct zonelist *huge_zone
>  	return node_zonelist(0, gfp_flags);
>  }
>  
> +static inline bool init_nodemask_of_mempolicy(nodemask_t *m) { return false; }
> +
>  static inline int do_migrate_pages(struct mm_struct *mm,
>  			const nodemask_t *from_nodes,
>  			const nodemask_t *to_nodes, int flags)
> Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-30 12:48:45.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-02 21:22:04.000000000 -0400
> @@ -1334,29 +1334,71 @@ static struct hstate *kobj_to_hstate(str
>  	return NULL;
>  }
>  
> -static ssize_t nr_hugepages_show(struct kobject *kobj,
> +static ssize_t nr_hugepages_show_common(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
>  	struct hstate *h = kobj_to_hstate(kobj);
>  	return sprintf(buf, "%lu\n", h->nr_huge_pages);
>  }
> -static ssize_t nr_hugepages_store(struct kobject *kobj,
> -		struct kobj_attribute *attr, const char *buf, size_t count)
> +static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> +			struct kobject *kobj, struct kobj_attribute *attr,
> +			const char *buf, size_t len)
>  {
>  	int err;
> -	unsigned long input;
> +	unsigned long count;
>  	struct hstate *h = kobj_to_hstate(kobj);
> +	NODEMASK_ALLOC(nodemask, nodes_allowed);
>  

In the two places you do NODEMASK_ALLOC(), here and 
hugetlb_sysctl_handler(), you'll need to check that nodes_allowed is 
non-NULL since it's possible that kmalloc() will return NULL for 
CONFIG_NODES_SHIFT > 8.

In such a case, it's probably sufficient to simply set nodes_allowed to 
node_states[N_HIGH_MEMORY] so that we can still free hugepages when we're 
oom, a common memory freeing tactic.

You could do that by simply returning false from 
init_nodemask_of_mempolicy() if !nodes_allowed since NODEMASK_FREE() can 
take a NULL pointer, but it may be easier to factor that logic into your 
conditional below:

> -	err = strict_strtoul(buf, 10, &input);
> +	err = strict_strtoul(buf, 10, &count);
>  	if (err)
>  		return 0;
>  
> -	h->max_huge_pages = set_max_huge_pages(h, input, &node_online_map);
> +	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
> +		NODEMASK_FREE(nodes_allowed);
> +		nodes_allowed = &node_online_map;
> +	}
> +	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
>

You can get away with just testing !nodes_allowed here: the stack 
allocation variant of NODEMASK_ALLOC() always leaves nodes_allowed an 
initialized pointer to _nodes_allowed, so you won't get an 
uninitialized-variable warning.

Once that's done, you can get rid of the check for a NULL nodes_allowed in 
try_to_free_low() from patch 2, since it will always be valid in 
set_max_huge_pages().
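
I.e., roughly (untested):

	if (!nodes_allowed ||
	    !(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
		NODEMASK_FREE(nodes_allowed);	/* handles NULL */
		nodes_allowed = &node_states[N_HIGH_MEMORY];
	}
	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);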

> -	return count;
> +	if (nodes_allowed != &node_online_map)
> +		NODEMASK_FREE(nodes_allowed);
> +
> +	return len;
> +}
> +
> +static ssize_t nr_hugepages_show(struct kobject *kobj,
> +				       struct kobj_attribute *attr, char *buf)
> +{
> +	return nr_hugepages_show_common(kobj, attr, buf);
> +}
> +
> +static ssize_t nr_hugepages_store(struct kobject *kobj,
> +	       struct kobj_attribute *attr, const char *buf, size_t len)
> +{
> +	return nr_hugepages_store_common(false, kobj, attr, buf, len);
>  }
>  HSTATE_ATTR(nr_hugepages);
>  
> +#ifdef CONFIG_NUMA
> +
> +/*
> + * hstate attribute for optionally mempolicy-based constraint on persistent
> + * huge page alloc/free.
> + */
> +static ssize_t nr_hugepages_mempolicy_show(struct kobject *kobj,
> +				       struct kobj_attribute *attr, char *buf)
> +{
> +	return nr_hugepages_show_common(kobj, attr, buf);
> +}
> +
> +static ssize_t nr_hugepages_mempolicy_store(struct kobject *kobj,
> +	       struct kobj_attribute *attr, const char *buf, size_t len)
> +{
> +	return nr_hugepages_store_common(true, kobj, attr, buf, len);
> +}
> +HSTATE_ATTR(nr_hugepages_mempolicy);
> +#endif
> +
> +
>  static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> @@ -1412,6 +1454,9 @@ static struct attribute *hstate_attrs[]
>  	&free_hugepages_attr.attr,
>  	&resv_hugepages_attr.attr,
>  	&surplus_hugepages_attr.attr,
> +#ifdef CONFIG_NUMA
> +	&nr_hugepages_mempolicy_attr.attr,
> +#endif
>  	NULL,
>  };
>  
> @@ -1578,9 +1623,9 @@ static unsigned int cpuset_mems_nr(unsig
>  }
>  
>  #ifdef CONFIG_SYSCTL
> -int hugetlb_sysctl_handler(struct ctl_table *table, int write,
> -			   void __user *buffer,
> -			   size_t *length, loff_t *ppos)
> +static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
> +			 struct ctl_table *table, int write,
> +			 void __user *buffer, size_t *length, loff_t *ppos)
>  {
>  	struct hstate *h = &default_hstate;
>  	unsigned long tmp;
> @@ -1592,13 +1637,39 @@ int hugetlb_sysctl_handler(struct ctl_ta
>  	table->maxlen = sizeof(unsigned long);
>  	proc_doulongvec_minmax(table, write, buffer, length, ppos);
>  
> -	if (write)
> -		h->max_huge_pages = set_max_huge_pages(h, tmp,
> -							&node_online_map);
> +	if (write) {
> +		NODEMASK_ALLOC(nodemask, nodes_allowed);
> +		if (!(obey_mempolicy &&
> +			       init_nodemask_of_mempolicy(nodes_allowed))) {
> +			NODEMASK_FREE(nodes_allowed);
> +			nodes_allowed = &node_states[N_HIGH_MEMORY];
> +		}
> +		h->max_huge_pages = set_max_huge_pages(h, tmp, nodes_allowed);
> +
> +		if (nodes_allowed != &node_states[N_HIGH_MEMORY])
> +			NODEMASK_FREE(nodes_allowed);
> +	}
>  
>  	return 0;
>  }
>  
> +int hugetlb_sysctl_handler(struct ctl_table *table, int write,
> +			  void __user *buffer, size_t *length, loff_t *ppos)
> +{
> +
> +	return hugetlb_sysctl_handler_common(false, table, write,
> +							buffer, length, ppos);
> +}
> +
> +#ifdef CONFIG_NUMA
> +int hugetlb_mempolicy_sysctl_handler(struct ctl_table *table, int write,
> +			  void __user *buffer, size_t *length, loff_t *ppos)
> +{
> +	return hugetlb_sysctl_handler_common(true, table, write,
> +							buffer, length, ppos);
> +}
> +#endif /* CONFIG_NUMA */
> +
>  int hugetlb_treat_movable_handler(struct ctl_table *table, int write,
>  			void __user *buffer,
>  			size_t *length, loff_t *ppos)
> Index: linux-2.6.31-mmotm-090925-1435/include/linux/hugetlb.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/include/linux/hugetlb.h	2009-09-30 12:48:45.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/include/linux/hugetlb.h	2009-09-30 12:48:46.000000000 -0400
> @@ -23,6 +23,12 @@ void reset_vma_resv_huge_pages(struct vm
>  int hugetlb_sysctl_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
>  int hugetlb_overcommit_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
>  int hugetlb_treat_movable_handler(struct ctl_table *, int, void __user *, size_t *, loff_t *);
> +
> +#ifdef CONFIG_NUMA
> +int hugetlb_mempolicy_sysctl_handler(struct ctl_table *, int,
> +					void __user *, size_t *, loff_t *);
> +#endif
> +
>  int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *, struct vm_area_struct *);
>  int follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
>  			struct page **, struct vm_area_struct **,
> Index: linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/kernel/sysctl.c	2009-09-30 12:48:45.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c	2009-09-30 12:48:46.000000000 -0400
> @@ -1164,7 +1164,7 @@ static struct ctl_table vm_table[] = {
>  		.extra2		= &one_hundred,
>  	},
>  #ifdef CONFIG_HUGETLB_PAGE
> -	 {
> +	{
>  		.procname	= "nr_hugepages",
>  		.data		= NULL,
>  		.maxlen		= sizeof(unsigned long),
> @@ -1172,7 +1172,19 @@ static struct ctl_table vm_table[] = {
>  		.proc_handler	= &hugetlb_sysctl_handler,
>  		.extra1		= (void *)&hugetlb_zero,
>  		.extra2		= (void *)&hugetlb_infinity,
> -	 },
> +	},
> +#ifdef CONFIG_NUMA
> +	{
> +	       .ctl_name       = CTL_UNNUMBERED,
> +	       .procname       = "nr_hugepages_mempolicy",
> +	       .data           = NULL,
> +	       .maxlen         = sizeof(unsigned long),
> +	       .mode           = 0644,
> +	       .proc_handler   = &hugetlb_mempolicy_sysctl_handler,
> +	       .extra1	 = (void *)&hugetlb_zero,
> +	       .extra2	 = (void *)&hugetlb_infinity,
> +	},
> +#endif
>  	 {
>  		.ctl_name	= VM_HUGETLB_GROUP,
>  		.procname	= "hugetlb_shm_group",
> 

There's some whitespace damage in the nr_hugepages_mempolicy hunk; it 
needs tabs instead of spaces for alignment.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 2/11] hugetlb:  add nodemask arg to huge page alloc, free and surplus adjust fcns
  2009-10-06  3:17 ` [PATCH 2/11] hugetlb: add nodemask arg to huge page alloc, free and surplus adjust fcns Lee Schermerhorn
  2009-10-06  9:09   ` David Rientjes
@ 2009-10-07  3:26   ` David Rientjes
  2009-10-07 14:13     ` Lee Schermerhorn
  1 sibling, 1 reply; 31+ messages in thread
From: David Rientjes @ 2009-10-07  3:26 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-28 10:12:20.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-09-30 11:41:36.000000000 -0400
> @@ -622,48 +622,56 @@ static struct page *alloc_fresh_huge_pag
>  }
>  
>  /*
> - * common helper function for hstate_next_node_to_{alloc|free}.
> - * return next node in node_online_map, wrapping at end.
> + * common helper functions for hstate_next_node_to_{alloc|free}.
> + * We may have allocated or freed a huge page based on a different
> + * nodes_allowed previously, so h->next_node_to_{alloc|free} might
> + * be outside of *nodes_allowed.  Ensure that we use an allowed
> + * node for alloc or free.
>   */
> -static int next_node_allowed(int nid)
> +static int next_node_allowed(int nid, nodemask_t *nodes_allowed)
>  {
> -	nid = next_node(nid, node_online_map);
> +	nid = next_node(nid, *nodes_allowed);
>  	if (nid == MAX_NUMNODES)
> -		nid = first_node(node_online_map);
> +		nid = first_node(*nodes_allowed);
>  	VM_BUG_ON(nid >= MAX_NUMNODES);
>  
>  	return nid;
>  }
>  
> +static int get_valid_node_allowed(int nid, nodemask_t *nodes_allowed)
> +{
> +	if (!node_isset(nid, *nodes_allowed))
> +		nid = next_node_allowed(nid, nodes_allowed);
> +	return nid;
> +}
> +
>  /*
> - * Use a helper variable to find the next node and then
> - * copy it back to next_nid_to_alloc afterwards:
> - * otherwise there's a window in which a racer might
> - * pass invalid nid MAX_NUMNODES to alloc_pages_exact_node.
> - * But we don't need to use a spin_lock here: it really
> - * doesn't matter if occasionally a racer chooses the
> - * same nid as we do.  Move nid forward in the mask even
> - * if we just successfully allocated a hugepage so that
> - * the next caller gets hugepages on the next node.
> + * returns the previously saved node ["this node"] from which to
> + * allocate a persistent huge page for the pool and advance the
> + * next node from which to allocate, handling wrap at end of node
> + * mask.
>   */
> -static int hstate_next_node_to_alloc(struct hstate *h)
> +static int hstate_next_node_to_alloc(struct hstate *h,
> +					nodemask_t *nodes_allowed)
>  {
> -	int nid, next_nid;
> +	int nid;
> +
> +	VM_BUG_ON(!nodes_allowed);
> +
> +	nid = get_valid_node_allowed(h->next_nid_to_alloc, nodes_allowed);
> +	h->next_nid_to_alloc = next_node_allowed(nid, nodes_allowed);
>  
> -	nid = h->next_nid_to_alloc;
> -	next_nid = next_node_allowed(nid);
> -	h->next_nid_to_alloc = next_nid;
>  	return nid;
>  }
>  
> -static int alloc_fresh_huge_page(struct hstate *h)
> +static int alloc_fresh_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>  {
>  	struct page *page;
>  	int start_nid;
>  	int next_nid;
>  	int ret = 0;
>  
> -	start_nid = hstate_next_node_to_alloc(h);
> +	start_nid = hstate_next_node_to_alloc(h, nodes_allowed);
>  	next_nid = start_nid;
>  
>  	do {
> @@ -672,7 +680,7 @@ static int alloc_fresh_huge_page(struct
>  			ret = 1;
>  			break;
>  		}
> -		next_nid = hstate_next_node_to_alloc(h);
> +		next_nid = hstate_next_node_to_alloc(h, nodes_allowed);
>  	} while (next_nid != start_nid);
>  
>  	if (ret)
> @@ -684,18 +692,20 @@ static int alloc_fresh_huge_page(struct
>  }
>  
>  /*
> - * helper for free_pool_huge_page() - return the next node
> - * from which to free a huge page.  Advance the next node id
> - * whether or not we find a free huge page to free so that the
> - * next attempt to free addresses the next node.
> + * helper for free_pool_huge_page() - return the previously saved
> + * node ["this node"] from which to free a huge page.  Advance the
> + * next node id whether or not we find a free huge page to free so
> + * that the next attempt to free addresses the next node.
>   */
> -static int hstate_next_node_to_free(struct hstate *h)
> +static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
>  {
> -	int nid, next_nid;
> +	int nid;
> +
> +	VM_BUG_ON(!nodes_allowed);
> +
> +	nid = get_valid_node_allowed(h->next_nid_to_free, nodes_allowed);
> +	h->next_nid_to_free = next_node_allowed(nid, nodes_allowed);
>  
> -	nid = h->next_nid_to_free;
> -	next_nid = next_node_allowed(nid);
> -	h->next_nid_to_free = next_nid;
>  	return nid;
>  }
>  
> @@ -705,13 +715,14 @@ static int hstate_next_node_to_free(stru
>   * balanced over allowed nodes.
>   * Called with hugetlb_lock locked.
>   */
> -static int free_pool_huge_page(struct hstate *h, bool acct_surplus)
> +static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> +							 bool acct_surplus)
>  {
>  	int start_nid;
>  	int next_nid;
>  	int ret = 0;
>  
> -	start_nid = hstate_next_node_to_free(h);
> +	start_nid = hstate_next_node_to_free(h, nodes_allowed);
>  	next_nid = start_nid;
>  
>  	do {
> @@ -735,7 +746,7 @@ static int free_pool_huge_page(struct hs
>  			ret = 1;
>  			break;
>  		}
> -		next_nid = hstate_next_node_to_free(h);
> +		next_nid = hstate_next_node_to_free(h, nodes_allowed);
>  	} while (next_nid != start_nid);
>  
>  	return ret;
> @@ -937,7 +948,7 @@ static void return_unused_surplus_pages(
>  	 * on-line nodes for us and will handle the hstate accounting.
>  	 */
>  	while (nr_pages--) {
> -		if (!free_pool_huge_page(h, 1))
> +		if (!free_pool_huge_page(h, &node_online_map, 1))
>  			break;
>  	}
>  }
> @@ -1047,7 +1058,7 @@ int __weak alloc_bootmem_huge_page(struc
>  		void *addr;
>  
>  		addr = __alloc_bootmem_node_nopanic(
> -				NODE_DATA(hstate_next_node_to_alloc(h)),
> +				NODE_DATA(hstate_next_node_to_alloc(h, NULL)),
>  				huge_page_size(h), huge_page_size(h), 0);
>  
>  		if (addr) {
> @@ -1102,7 +1113,7 @@ static void __init hugetlb_hstate_alloc_
>  		if (h->order >= MAX_ORDER) {
>  			if (!alloc_bootmem_huge_page(h))
>  				break;
> -		} else if (!alloc_fresh_huge_page(h))
> +		} else if (!alloc_fresh_huge_page(h, &node_online_map))
>  			break;
>  	}
>  	h->max_huge_pages = i;
> @@ -1144,16 +1155,22 @@ static void __init report_hugepages(void
>  }
>  
>  #ifdef CONFIG_HIGHMEM
> -static void try_to_free_low(struct hstate *h, unsigned long count)
> +static void try_to_free_low(struct hstate *h, unsigned long count,
> +						nodemask_t *nodes_allowed)
>  {
>  	int i;
>  
>  	if (h->order >= MAX_ORDER)
>  		return;
>  
> +	if (!nodes_allowed)
> +		nodes_allowed = &node_online_map;
> +
>  	for (i = 0; i < MAX_NUMNODES; ++i) {
>  		struct page *page, *next;
>  		struct list_head *freel = &h->hugepage_freelists[i];
> +		if (!node_isset(i, *nodes_allowed))
> +			continue;
>  		list_for_each_entry_safe(page, next, freel, lru) {
>  			if (count >= h->nr_huge_pages)
>  				return;

Simply converting the iteration to use
for_each_node_mask(i, *nodes_allowed) would be cleaner.
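
I.e. (untested):

	for_each_node_mask(i, *nodes_allowed) {
		struct page *page, *next;
		struct list_head *freel = &h->hugepage_freelists[i];

		list_for_each_entry_safe(page, next, freel, lru) {
			...
		}
	}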


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 7/11] hugetlb:  add per node hstate attributes
  2009-10-06  3:18 ` [PATCH 7/11] hugetlb: add per node hstate attributes Lee Schermerhorn
@ 2009-10-07  4:04   ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07  4:04 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, Andrew Morton, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> Index: linux-2.6.31-mmotm-090925-1435/drivers/base/node.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/drivers/base/node.c	2009-10-05 13:33:00.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/drivers/base/node.c	2009-10-05 14:00:32.000000000 -0400
> @@ -173,6 +173,43 @@ static ssize_t node_read_distance(struct
>  }
>  static SYSDEV_ATTR(distance, S_IRUGO, node_read_distance, NULL);
>  
> +#ifdef CONFIG_HUGETLBFS
> +/*
> + * hugetlbfs per node attributes registration interface:
> + * When/if hugetlb[fs] subsystem initializes [sometime after this module],
> + * it will register its per node attributes for all nodes online at that
> + * time.  It will also call register_hugetlbfs_with_node(), below, to
> + * register its attribute registration functions with this node driver.
> + * Once these hooks have been initialized, the node driver will call into
> + * the hugetlb module to [un]register attributes for hot-plugged nodes.
> + */
> +static node_registration_func_t __hugetlb_register_node;
> +static node_registration_func_t __hugetlb_unregister_node;
> +
> +static inline void hugetlb_register_node(struct node *node)
> +{
> +	if (__hugetlb_register_node)
> +		__hugetlb_register_node(node);
> +}
> +
> +static inline void hugetlb_unregister_node(struct node *node)
> +{
> +	if (__hugetlb_unregister_node)
> +		__hugetlb_unregister_node(node);
> +}
> +
> +void register_hugetlbfs_with_node(node_registration_func_t doregister,
> +				  node_registration_func_t unregister)
> +{
> +	__hugetlb_register_node   = doregister;
> +	__hugetlb_unregister_node = unregister;
> +}
> +#else
> +static inline void hugetlb_register_node(struct node *node) {}
> +
> +static inline void hugetlb_unregister_node(struct node *node) {}
> +#endif
> +
>  
>  /*
>   * register_node - Setup a sysfs device for a node.
> @@ -196,6 +233,7 @@ int register_node(struct node *node, int
>  		sysdev_create_file(&node->sysdev, &attr_distance);
>  
>  		scan_unevictable_register_node(node);
> +		hugetlb_register_node(node);
>  	}
>  	return error;
>  }
> @@ -216,6 +254,7 @@ void unregister_node(struct node *node)
>  	sysdev_remove_file(&node->sysdev, &attr_distance);
>  
>  	scan_unevictable_unregister_node(node);
> +	hugetlb_unregister_node(node);
>  
>  	sysdev_unregister(&node->sysdev);
>  }
> Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-10-05 13:33:21.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-05 14:00:57.000000000 -0400
> @@ -24,6 +24,7 @@
>  #include <asm/io.h>
>  
>  #include <linux/hugetlb.h>
> +#include <linux/node.h>
>  #include "internal.h"
>  
>  const unsigned long hugetlb_zero = 0, hugetlb_infinity = ~0UL;
> @@ -1324,38 +1325,68 @@ out:
>  static struct kobject *hugepages_kobj;
>  static struct kobject *hstate_kobjs[HUGE_MAX_HSTATE];
>  
> -static struct hstate *kobj_to_hstate(struct kobject *kobj)
> +static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp);
> +
> +static struct hstate *kobj_to_hstate(struct kobject *kobj, int *nidp)
>  {
>  	int i;
> +
>  	for (i = 0; i < HUGE_MAX_HSTATE; i++)
> -		if (hstate_kobjs[i] == kobj)
> +		if (hstate_kobjs[i] == kobj) {
> +			if (nidp)
> +				*nidp = NUMA_NO_NODE;
>  			return &hstates[i];
> -	BUG();
> -	return NULL;
> +		}
> +
> +	return kobj_to_node_hstate(kobj, nidp);
>  }
>  
>  static ssize_t nr_hugepages_show_common(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> -	struct hstate *h = kobj_to_hstate(kobj);
> -	return sprintf(buf, "%lu\n", h->nr_huge_pages);
> +	struct hstate *h;
> +	unsigned long nr_huge_pages;
> +	int nid;
> +
> +	h = kobj_to_hstate(kobj, &nid);
> +	if (nid == NUMA_NO_NODE)
> +		nr_huge_pages = h->nr_huge_pages;
> +	else
> +		nr_huge_pages = h->nr_huge_pages_node[nid];
> +
> +	return sprintf(buf, "%lu\n", nr_huge_pages);
>  }
>  static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
>  			struct kobject *kobj, struct kobj_attribute *attr,
>  			const char *buf, size_t len)
>  {
>  	int err;
> +	int nid;
>  	unsigned long count;
> -	struct hstate *h = kobj_to_hstate(kobj);
> +	struct hstate *h;
>  	NODEMASK_ALLOC(nodemask_t, nodes_allowed);
>  
>  	err = strict_strtoul(buf, 10, &count);
>  	if (err)
>  		return 0;
>  
> -	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
> -		NODEMASK_FREE(nodes_allowed);
> -		nodes_allowed = &node_online_map;
> +	h = kobj_to_hstate(kobj, &nid);
> +	if (nid == NUMA_NO_NODE) {
> +		/*
> +		 * global hstate attribute
> +		 */
> +		if (!(obey_mempolicy &&
> +				init_nodemask_of_mempolicy(nodes_allowed))) {
> +			NODEMASK_FREE(nodes_allowed);
> +			nodes_allowed = &node_states[N_HIGH_MEMORY];
> +		}
> +	} else {
> +		/*
> +		 * per node hstate attribute: adjust count to global,
> +		 * but restrict alloc/free to the specified node.
> +		 */
> +		count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> +		init_nodemask_of_node(nodes_allowed, nid);
>  	}
>  	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
>  
> @@ -1402,7 +1433,7 @@ HSTATE_ATTR(nr_hugepages_mempolicy);
>  static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> -	struct hstate *h = kobj_to_hstate(kobj);
> +	struct hstate *h = kobj_to_hstate(kobj, NULL);
>  	return sprintf(buf, "%lu\n", h->nr_overcommit_huge_pages);
>  }
>  static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
> @@ -1410,7 +1441,7 @@ static ssize_t nr_overcommit_hugepages_s
>  {
>  	int err;
>  	unsigned long input;
> -	struct hstate *h = kobj_to_hstate(kobj);
> +	struct hstate *h = kobj_to_hstate(kobj, NULL);
>  
>  	err = strict_strtoul(buf, 10, &input);
>  	if (err)
> @@ -1427,15 +1458,24 @@ HSTATE_ATTR(nr_overcommit_hugepages);
>  static ssize_t free_hugepages_show(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> -	struct hstate *h = kobj_to_hstate(kobj);
> -	return sprintf(buf, "%lu\n", h->free_huge_pages);
> +	struct hstate *h;
> +	unsigned long free_huge_pages;
> +	int nid;
> +
> +	h = kobj_to_hstate(kobj, &nid);
> +	if (nid == NUMA_NO_NODE)
> +		free_huge_pages = h->free_huge_pages;
> +	else
> +		free_huge_pages = h->free_huge_pages_node[nid];
> +
> +	return sprintf(buf, "%lu\n", free_huge_pages);
>  }
>  HSTATE_ATTR_RO(free_hugepages);
>  
>  static ssize_t resv_hugepages_show(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> -	struct hstate *h = kobj_to_hstate(kobj);
> +	struct hstate *h = kobj_to_hstate(kobj, NULL);
>  	return sprintf(buf, "%lu\n", h->resv_huge_pages);
>  }
>  HSTATE_ATTR_RO(resv_hugepages);
> @@ -1443,8 +1483,17 @@ HSTATE_ATTR_RO(resv_hugepages);
>  static ssize_t surplus_hugepages_show(struct kobject *kobj,
>  					struct kobj_attribute *attr, char *buf)
>  {
> -	struct hstate *h = kobj_to_hstate(kobj);
> -	return sprintf(buf, "%lu\n", h->surplus_huge_pages);
> +	struct hstate *h;
> +	unsigned long surplus_huge_pages;
> +	int nid;
> +
> +	h = kobj_to_hstate(kobj, &nid);
> +	if (nid == NUMA_NO_NODE)
> +		surplus_huge_pages = h->surplus_huge_pages;
> +	else
> +		surplus_huge_pages = h->surplus_huge_pages_node[nid];
> +
> +	return sprintf(buf, "%lu\n", surplus_huge_pages);
>  }
>  HSTATE_ATTR_RO(surplus_hugepages);
>  
> @@ -1464,19 +1513,21 @@ static struct attribute_group hstate_att
>  	.attrs = hstate_attrs,
>  };
>  
> -static int __init hugetlb_sysfs_add_hstate(struct hstate *h)
> +static int __init hugetlb_sysfs_add_hstate(struct hstate *h,
> +				struct kobject *parent,
> +				struct kobject **hstate_kobjs,
> +				struct attribute_group *hstate_attr_group)
>  {
>  	int retval;
> +	int hi = h - hstates;
>  
> -	hstate_kobjs[h - hstates] = kobject_create_and_add(h->name,
> -							hugepages_kobj);
> -	if (!hstate_kobjs[h - hstates])
> +	hstate_kobjs[hi] = kobject_create_and_add(h->name, parent);
> +	if (!hstate_kobjs[hi])
>  		return -ENOMEM;
>  
> -	retval = sysfs_create_group(hstate_kobjs[h - hstates],
> -							&hstate_attr_group);
> +	retval = sysfs_create_group(hstate_kobjs[hi], hstate_attr_group);
>  	if (retval)
> -		kobject_put(hstate_kobjs[h - hstates]);
> +		kobject_put(hstate_kobjs[hi]);
>  
>  	return retval;
>  }
> @@ -1491,17 +1542,184 @@ static void __init hugetlb_sysfs_init(vo
>  		return;
>  
>  	for_each_hstate(h) {
> -		err = hugetlb_sysfs_add_hstate(h);
> +		err = hugetlb_sysfs_add_hstate(h, hugepages_kobj,
> +					 hstate_kobjs, &hstate_attr_group);
>  		if (err)
>  			printk(KERN_ERR "Hugetlb: Unable to add hstate %s",
>  								h->name);
>  	}
>  }
>  
> +#ifdef CONFIG_NUMA
> +
> +/*
> + * node_hstate/s - associate per node hstate attributes, via their kobjects,
> + * with node sysdevs in node_devices[] using a parallel array.  The array
> + * index of a node sysdev or _hstate == node id.
> + * This is here to avoid any static dependency of the node sysdev driver, in
> + * the base kernel, on the hugetlb module.
> + */
> +struct node_hstate {
> +	struct kobject		*hugepages_kobj;
> +	struct kobject		*hstate_kobjs[HUGE_MAX_HSTATE];
> +};
> +struct node_hstate node_hstates[MAX_NUMNODES];
> +
> +/*
> + * A subset of global hstate attributes for node sysdevs
> + */
> +static struct attribute *per_node_hstate_attrs[] = {
> +	&nr_hugepages_attr.attr,
> +	&free_hugepages_attr.attr,
> +	&surplus_hugepages_attr.attr,
> +	NULL,
> +};
> +
> +static struct attribute_group per_node_hstate_attr_group = {
> +	.attrs = per_node_hstate_attrs,
> +};
> +
> +/*
> + * kobj_to_node_hstate - lookup global hstate for node sysdev hstate attr kobj.
> + * Returns node id via non-NULL nidp.
> + */
> +static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp)
> +{
> +	int nid;
> +
> +	for (nid = 0; nid < nr_node_ids; nid++) {

This should be for_each_node_mask() over node_states[N_HIGH_MEMORY], 
right?
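
I.e. (untested):

	for_each_node_mask(nid, node_states[N_HIGH_MEMORY]) {
		...
	}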

> +		struct node_hstate *nhs = &node_hstates[nid];
> +		int i;
> +		for (i = 0; i < HUGE_MAX_HSTATE; i++)
> +			if (nhs->hstate_kobjs[i] == kobj) {
> +				if (nidp)
> +					*nidp = nid;
> +				return &hstates[i];
> +			}
> +	}
> +
> +	BUG();
> +	return NULL;
> +}
> +
> +/*
> + * Unregister hstate attributes from a single node sysdev.
> + * No-op if no hstate attributes attached.
> + */
> +void hugetlb_unregister_node(struct node *node)
> +{
> +	struct hstate *h;
> +	struct node_hstate *nhs = &node_hstates[node->sysdev.id];
> +
> +	if (!nhs->hugepages_kobj)
> +		return;
> +
> +	for_each_hstate(h)
> +		if (nhs->hstate_kobjs[h - hstates]) {
> +			kobject_put(nhs->hstate_kobjs[h - hstates]);
> +			nhs->hstate_kobjs[h - hstates] = NULL;
> +		}
> +
> +	kobject_put(nhs->hugepages_kobj);
> +	nhs->hugepages_kobj = NULL;
> +}
> +
> +/*
> + * hugetlb module exit:  unregister hstate attributes from node sysdevs
> + * that have them.
> + */
> +static void hugetlb_unregister_all_nodes(void)
> +{
> +	int nid;
> +
> +	/*
> +	 * disable node sysdev registrations.
> +	 */
> +	register_hugetlbfs_with_node(NULL, NULL);
> +
> +	/*
> +	 * remove hstate attributes from any nodes that have them.
> +	 */
> +	for (nid = 0; nid < nr_node_ids; nid++)
> +		hugetlb_unregister_node(&node_devices[nid]);
> +}
> +
> +/*
> + * Register hstate attributes for a single node sysdev.
> + * No-op if attributes already registered.
> + */
> +void hugetlb_register_node(struct node *node)
> +{
> +	struct hstate *h;
> +	struct node_hstate *nhs = &node_hstates[node->sysdev.id];
> +	int err;
> +
> +	if (nhs->hugepages_kobj)
> +		return;		/* already allocated */
> +
> +	nhs->hugepages_kobj = kobject_create_and_add("hugepages",
> +							&node->sysdev.kobj);
> +	if (!nhs->hugepages_kobj)
> +		return;
> +
> +	for_each_hstate(h) {
> +		err = hugetlb_sysfs_add_hstate(h, nhs->hugepages_kobj,
> +						nhs->hstate_kobjs,
> +						&per_node_hstate_attr_group);
> +		if (err) {
> +			printk(KERN_ERR "Hugetlb: Unable to add hstate %s"
> +					" for node %d\n",
> +						h->name, node->sysdev.id);
> +			hugetlb_unregister_node(node);
> +			break;
> +		}
> +	}
> +}
> +
> +/*
> + * hugetlb init time:  register hstate attributes for all registered
> + * node sysdevs.  All on-line nodes should have registered their
> + * associated sysdev by the time the hugetlb module initializes.
> + */
> +static void hugetlb_register_all_nodes(void)
> +{
> +	int nid;
> +
> +	for (nid = 0; nid < nr_node_ids; nid++) {
> +		struct node *node = &node_devices[nid];
> +		if (node->sysdev.id == nid)
> +			hugetlb_register_node(node);
> +	}

Do you need this check for node->sysdev.id == nid?  This only iterates 
over N_HIGH_MEMORY in a later patch, so I don't understand why this is 
just a silent error if it's false.

> +
> +	/*
> +	 * Let the node sysdev driver know we're here so it can
> +	 * [un]register hstate attributes on node hotplug.
> +	 */
> +	register_hugetlbfs_with_node(hugetlb_register_node,
> +				     hugetlb_unregister_node);
> +}
> +#else	/* !CONFIG_NUMA */
> +
> +static struct hstate *kobj_to_node_hstate(struct kobject *kobj, int *nidp)
> +{
> +	BUG();
> +	if (nidp)
> +		*nidp = -1;
> +	return NULL;
> +}
> +
> +static void hugetlb_unregister_all_nodes(void) { }
> +
> +static void hugetlb_register_all_nodes(void) { }
> +
> +#endif
> +
>  static void __exit hugetlb_exit(void)
>  {
>  	struct hstate *h;
>  
> +	hugetlb_unregister_all_nodes();
> +
>  	for_each_hstate(h) {
>  		kobject_put(hstate_kobjs[h - hstates]);
>  	}
> @@ -1536,6 +1754,8 @@ static int __init hugetlb_init(void)
>  
>  	hugetlb_sysfs_init();
>  
> +	hugetlb_register_all_nodes();
> +
>  	return 0;
>  }
>  module_init(hugetlb_init);
> Index: linux-2.6.31-mmotm-090925-1435/include/linux/node.h
> ===================================================================
> --- linux-2.6.31-mmotm-090925-1435.orig/include/linux/node.h	2009-10-05 13:33:00.000000000 -0400
> +++ linux-2.6.31-mmotm-090925-1435/include/linux/node.h	2009-10-05 14:00:32.000000000 -0400
> @@ -28,6 +28,7 @@ struct node {
>  
>  struct memory_block;
>  extern struct node node_devices[];
> +typedef  void (*node_registration_func_t)(struct node *);
>  
>  extern int register_node(struct node *, int, struct node *);
>  extern void unregister_node(struct node *node);

This could probably be extracted to be more general than just for 
hugetlbfs; the scan_unevictable registration could probably be made to 
use the new interface.  That's beyond the scope of this patchset and 
could be a future addition, but it's the only reason I can think of for 
defining this as a typedef.  Until it's more generic, I think the typedef 
should be removed and everything should use void (*reg)(struct node *).
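
I.e., just (untested):

	extern void register_hugetlbfs_with_node(void (*doregister)(struct node *),
						 void (*unregister)(struct node *));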


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH 10/11] hugetlb:  handle memory hot-plug events
  2009-10-06  3:18 ` [PATCH 10/11] hugetlb: handle memory hot-plug events Lee Schermerhorn
@ 2009-10-07  4:12   ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07  4:12 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, Andrew Morton, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> [PATCH 10/11] hugetlb:  per node attributes -- handle memory hot plug
> 
> Against:  2.6.31-mmotm-090925-1435
> 
> Register per node hstate attributes only for nodes with memory.
> 
> With Memory Hotplug, memory can be added to a memoryless node and
> a node with memory can become memoryless.  Therefore, add a memory
> on/off-line notifier callback to [un]register a node's attributes
> on transition to/from memoryless state.
> 
> N.B.,  Only tested build, boot, libhugetlbfs regression.
>        i.e., no memory hotplug testing.
> 
> Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>

Acked-by: David Rientjes <rientjes@google.com>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined
  2009-10-06  3:19 ` [PATCH 11/11] hugetlb: offload per node attribute registrations Lee Schermerhorn
  2009-10-06 16:01   ` Andi Kleen
@ 2009-10-07  8:24   ` David Rientjes
  2009-10-07 14:25     ` Christoph Lameter
  2009-10-07 16:48     ` Lee Schermerhorn
  1 sibling, 2 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07  8:24 UTC (permalink / raw)
  To: Lee Schermerhorn, Andrew Morton
  Cc: linux-mm, linux-numa, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft,
	Christoph Lameter, eric.whitney, Yasunori Goto

On Mon, 5 Oct 2009, Lee Schermerhorn wrote:

> [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> 
> Against:  2.6.31-mmotm-090925-1435
> 
> New in V6
> 
> V7:  + remove redundant check for memory{ful|less} node from 
>        node_hugetlb_work().  Rely on [added] return from
>        hugetlb_register_node() to differentiate between transitions
>        to/from memoryless state.
> 

That doesn't work because the memory hotplug code doesn't clear the 
N_HIGH_MEMORY bit for status_change_nid on MEM_OFFLINE, so 
hugetlb_register_node() will always return true under such conditions.

The following should fix it.  Christoph?



mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined

When memory is hot-removed, its node must be cleared in N_HIGH_MEMORY if
there are no present pages left.

In such a situation, kswapd must also be stopped since it has nothing
left to do.

Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: David Rientjes <rientjes@google.com>
---
 include/linux/swap.h |    1 +
 mm/memory_hotplug.c  |    4 ++++
 mm/vmscan.c          |   28 ++++++++++++++++++++++------
 3 files changed, 27 insertions(+), 6 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -273,6 +273,7 @@ extern int scan_unevictable_register_node(struct node *node);
 extern void scan_unevictable_unregister_node(struct node *node);
 
 extern int kswapd_run(int nid);
+extern void kswapd_stop(int nid);
 
 #ifdef CONFIG_MMU
 /* linux/mm/shmem.c */
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -838,6 +838,10 @@ repeat:
 
 	setup_per_zone_wmarks();
 	calculate_zone_inactive_ratio(zone);
+	if (!node_present_pages(node)) {
+		node_clear_state(node, N_HIGH_MEMORY);
+		kswapd_stop(node);
+	}
 
 	vm_total_pages = nr_free_pagecache_pages();
 	writeback_set_ratelimit();
diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2163,6 +2163,7 @@ static int kswapd(void *p)
 	order = 0;
 	for ( ; ; ) {
 		unsigned long new_order;
+		int ret;
 
 		prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
 		new_order = pgdat->kswapd_max_order;
@@ -2174,19 +2175,23 @@ static int kswapd(void *p)
 			 */
 			order = new_order;
 		} else {
-			if (!freezing(current))
+			if (!freezing(current) && !kthread_should_stop())
 				schedule();
 
 			order = pgdat->kswapd_max_order;
 		}
 		finish_wait(&pgdat->kswapd_wait, &wait);
 
-		if (!try_to_freeze()) {
-			/* We can speed up thawing tasks if we don't call
-			 * balance_pgdat after returning from the refrigerator
-			 */
+		ret = try_to_freeze();
+		if (kthread_should_stop())
+			break;
+
+		/*
+		 * We can speed up thawing tasks if we don't call balance_pgdat
+		 * after returning from the refrigerator
+		 */
+		if (!ret)
 			balance_pgdat(pgdat, order);
-		}
 	}
 	return 0;
 }
@@ -2441,6 +2446,17 @@ int kswapd_run(int nid)
 	return ret;
 }
 
+/*
+ * Called by memory hotplug when all memory in a node is offlined.
+ */
+void kswapd_stop(int nid)
+{
+	struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
+
+	if (kswapd)
+		kthread_stop(kswapd);
+}
+
 static int __init kswapd_init(void)
 {
 	int nid;


* Re: [PATCH 2/11] hugetlb:  add nodemask arg to huge page alloc, free and surplus adjust fcns
  2009-10-07  3:26   ` David Rientjes
@ 2009-10-07 14:13     ` Lee Schermerhorn
  0 siblings, 0 replies; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-07 14:13 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Tue, 2009-10-06 at 20:26 -0700, David Rientjes wrote:
> On Mon, 5 Oct 2009, Lee Schermerhorn wrote:
> 
> > Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> > ===================================================================
> > --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-28 10:12:20.000000000 -0400
> > +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-09-30 11:41:36.000000000 -0400

> >  #ifdef CONFIG_HIGHMEM
> > -static void try_to_free_low(struct hstate *h, unsigned long count)
> > +static void try_to_free_low(struct hstate *h, unsigned long count,
> > +						nodemask_t *nodes_allowed)
> >  {
> >  	int i;
> >  
> >  	if (h->order >= MAX_ORDER)
> >  		return;
> >  
> > +	if (!nodes_allowed)
> > +		nodes_allowed = &node_online_map;
> > +
> >  	for (i = 0; i < MAX_NUMNODES; ++i) {
> >  		struct page *page, *next;
> >  		struct list_head *freel = &h->hugepage_freelists[i];
> > +		if (!node_isset(i, *nodes_allowed))
> > +			continue;
> >  		list_for_each_entry_safe(page, next, freel, lru) {
> >  			if (count >= h->nr_huge_pages)
> >  				return;
> 
> Simply converting the iteration to use
> for_each_node_mask(i, *nodes_allowed) would be cleaner.

OK.  That's equivalent.  Anyway, MAX_NUMNODES should probably have been
'nr_node_ids' or some such.  And, now, nodes_allowed can't [shouldn't!] be
NULL here, as we now apply the default way up in the sysctl/sysfs
handlers.  I'll fix this.
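
i.e., something like this (untested sketch; rest of the loop body as in
the patch):

	for_each_node_mask(i, *nodes_allowed) {
		struct page *page, *next;
		struct list_head *freel = &h->hugepage_freelists[i];

		list_for_each_entry_safe(page, next, freel, lru) {
			if (count >= h->nr_huge_pages)
				return;
			/* ... remainder unchanged ... */
		}
	}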


* Re: [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined
  2009-10-07  8:24   ` [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined David Rientjes
@ 2009-10-07 14:25     ` Christoph Lameter
  2009-10-07 16:48     ` Lee Schermerhorn
  1 sibling, 0 replies; 31+ messages in thread
From: Christoph Lameter @ 2009-10-07 14:25 UTC (permalink / raw)
  To: David Rientjes
  Cc: Lee Schermerhorn, Andrew Morton, linux-mm, linux-numa,
	Mel Gorman, Randy Dunlap, Nishanth Aravamudan, Adam Litke,
	Andy Whitcroft, eric.whitney, Yasunori Goto

On Wed, 7 Oct 2009, David Rientjes wrote:

> The following should fix it.  Christoph?

As far as I can see it looks good. Someone verify the kswapd details
please.


* Re: [PATCH 4/11] hugetlb:  derive huge pages nodes allowed from task mempolicy
  2009-10-07  3:26   ` David Rientjes
@ 2009-10-07 16:30     ` Lee Schermerhorn
  2009-10-07 20:09       ` David Rientjes
  0 siblings, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-07 16:30 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Tue, 2009-10-06 at 20:26 -0700, David Rientjes wrote:
> On Mon, 5 Oct 2009, Lee Schermerhorn wrote:
> 
> > Index: linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c
> > ===================================================================
> > --- linux-2.6.31-mmotm-090925-1435.orig/mm/mempolicy.c	2009-09-30 12:48:45.000000000 -0400
> > +++ linux-2.6.31-mmotm-090925-1435/mm/mempolicy.c	2009-09-30 12:48:46.000000000 -0400
> > @@ -1564,6 +1564,53 @@ struct zonelist *huge_zonelist(struct vm
> >  	}
> >  	return zl;
> >  }
> > +
> > +/*
> > + * init_nodemask_of_mempolicy
> > + *
> > + * If the current task's mempolicy is "default" [NULL], return 'false'
> > + * to indicate * default policy.  Otherwise, extract the policy nodemask
> > + * for 'bind' * or 'interleave' policy into the argument nodemask, or
> > + * initialize the argument nodemask to contain the single node for
> > + * 'preferred' or * 'local' policy and return 'true' to indicate presence
> > + * of non-default mempolicy.
> > + *
> 
> Looks like some mangling of the comment; there are spurious '*'s throughout.

Fixed.  
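
i.e., with the stray '*'s dropped, the comment now reads:

	/*
	 * init_nodemask_of_mempolicy
	 *
	 * If the current task's mempolicy is "default" [NULL], return 'false'
	 * to indicate default policy.  Otherwise, extract the policy nodemask
	 * for 'bind' or 'interleave' policy into the argument nodemask, or
	 * initialize the argument nodemask to contain the single node for
	 * 'preferred' or 'local' policy and return 'true' to indicate presence
	 * of non-default mempolicy.
	 */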

<snip>
> > Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> > ===================================================================
> > --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-30 12:48:45.000000000 -0400
> > +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-02 21:22:04.000000000 -0400
> > @@ -1334,29 +1334,71 @@ static struct hstate *kobj_to_hstate(str
> >  	return NULL;
> >  }
> >  
> > -static ssize_t nr_hugepages_show(struct kobject *kobj,
> > +static ssize_t nr_hugepages_show_common(struct kobject *kobj,
> >  					struct kobj_attribute *attr, char *buf)
> >  {
> >  	struct hstate *h = kobj_to_hstate(kobj);
> >  	return sprintf(buf, "%lu\n", h->nr_huge_pages);
> >  }
> > -static ssize_t nr_hugepages_store(struct kobject *kobj,
> > -		struct kobj_attribute *attr, const char *buf, size_t count)
> > +static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> > +			struct kobject *kobj, struct kobj_attribute *attr,
> > +			const char *buf, size_t len)
> >  {
> >  	int err;
> > -	unsigned long input;
> > +	unsigned long count;
> >  	struct hstate *h = kobj_to_hstate(kobj);
> > +	NODEMASK_ALLOC(nodemask, nodes_allowed);
> >  
> 
> In the two places you do NODEMASK_ALLOC(), here and 
> hugetlb_sysctl_handler(), you'll need to check that nodes_allowed is 
> non-NULL since it's possible that kmalloc() will return NULL for 
> CONFIG_NODES_SHIFT > 8.
> 
> In such a case, it's probably sufficient to simply set nodes_allowed to 
> node_states[N_HIGH_MEMORY] so that we can still free hugepages when we're 
> oom, a common memory freeing tactic.
> 
> You could do that by simply returning false from 
> init_nodemask_of_mempolicy() if !nodes_allowed since NODEMASK_FREE() can 
> take a NULL pointer, but it may be easier to factor that logic into your 
> conditional below:
> 
> > -	err = strict_strtoul(buf, 10, &input);
> > +	err = strict_strtoul(buf, 10, &count);
> >  	if (err)
> >  		return 0;
> >  
> > -	h->max_huge_pages = set_max_huge_pages(h, input, &node_online_map);
> > +	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
> > +		NODEMASK_FREE(nodes_allowed);
> > +		nodes_allowed = &node_online_map;
> > +	}
> > +	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
> >
> 
> You can get away with just testing !nodes_allowed here since the stack 
> allocation variation of NODEMASK_ALLOC() is such that nodes_allowed will 
> always be an initialized pointer pointing to _nodes_allowed so you won't 
> have an uninitialized warning.
> 
> Once that's done, you can get rid of the check for a NULL nodes_allowed in 
> try_to_free_low() from patch 2 since it will always be valid in 
> set_max_huge_pages().


OK.  I've already removed the NULL check from try_to_free_low(), and I
made the change to init_nodemask_of_mempolicy() to return false on a
NULL mask.
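
i.e., at the top of init_nodemask_of_mempolicy(), something like
(untested sketch; the mempolicy test just folds in the existing
"default policy" case):

	if (!nodes_allowed || !current->mempolicy)
		return false;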

I'm not completely happy with dropping back to the default behavior
[node_online_map here, replaced with node_states[N_HIGH_MEMORY] in a
subsequent patch] on failure to allocate nodes_allowed.  We only do the
NODEMASK_ALLOC() when we've come in from either nr_hugepages_mempolicy or
a per node attribute [subsequent patch], so I'm not sure that ignoring
the mempolicy, if any, or the specified node id is a good thing here.
Not silently, at least.  I haven't addressed this yet.  We can submit
an incremental patch.  Thoughts?

Note that these chunks will get reworked in the subsequent patch that
adds the per node attributes.  I'll need to handle this there, as well.

> 
> > -	return count;
> > +	if (nodes_allowed != &node_online_map)
> > +		NODEMASK_FREE(nodes_allowed);
> > +
> > +	return len;
> > +}
> > +
<snip>
> > Index: linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c
> > ===================================================================
> > --- linux-2.6.31-mmotm-090925-1435.orig/kernel/sysctl.c	2009-09-30 12:48:45.000000000 -0400
> > +++ linux-2.6.31-mmotm-090925-1435/kernel/sysctl.c	2009-09-30 12:48:46.000000000 -0400
> > @@ -1164,7 +1164,7 @@ static struct ctl_table vm_table[] = {
> >  		.extra2		= &one_hundred,
> >  	},
> >  #ifdef CONFIG_HUGETLB_PAGE
> > -	 {
> > +	{
> >  		.procname	= "nr_hugepages",
> >  		.data		= NULL,
> >  		.maxlen		= sizeof(unsigned long),
> > @@ -1172,7 +1172,19 @@ static struct ctl_table vm_table[] = {
> >  		.proc_handler	= &hugetlb_sysctl_handler,
> >  		.extra1		= (void *)&hugetlb_zero,
> >  		.extra2		= (void *)&hugetlb_infinity,
> > -	 },
> > +	},
> > +#ifdef CONFIG_NUMA
> > +	{
> > +	       .ctl_name       = CTL_UNNUMBERED,
> > +	       .procname       = "nr_hugepages_mempolicy",
> > +	       .data           = NULL,
> > +	       .maxlen         = sizeof(unsigned long),
> > +	       .mode           = 0644,
> > +	       .proc_handler   = &hugetlb_mempolicy_sysctl_handler,
> > +	       .extra1	 = (void *)&hugetlb_zero,
> > +	       .extra2	 = (void *)&hugetlb_infinity,
> > +	},
> > +#endif
> >  	 {
> >  		.ctl_name	= VM_HUGETLB_GROUP,
> >  		.procname	= "hugetlb_shm_group",
> > 
> 
> There's some whitespace damage in the nr_hugepages_mempolicy hunk, it 
> needs tabs instead of spaces for alignment.

Fixed.


* Re: [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined
  2009-10-07  8:24   ` [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined David Rientjes
  2009-10-07 14:25     ` Christoph Lameter
@ 2009-10-07 16:48     ` Lee Schermerhorn
  2009-10-07 19:53       ` David Rientjes
  1 sibling, 1 reply; 31+ messages in thread
From: Lee Schermerhorn @ 2009-10-07 16:48 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andrew Morton, linux-mm, linux-numa, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft,
	Christoph Lameter, eric.whitney, Yasunori Goto

On Wed, 2009-10-07 at 01:24 -0700, David Rientjes wrote:
> On Mon, 5 Oct 2009, Lee Schermerhorn wrote:
> 
> > [PATCH 11/11] hugetlb:  offload [un]registration of sysfs attr to worker thread
> > 
> > Against:  2.6.31-mmotm-090925-1435
> > 
> > New in V6
> > 
> > V7:  + remove redundant check for memory{ful|less} node from 
> >        node_hugetlb_work().  Rely on [added] return from
> >        hugetlb_register_node() to differentiate between transitions
> >        to/from memoryless state.
> > 
> 
> That doesn't work because the memory hotplug code doesn't clear the 
> N_HIGH_MEMORY bit for status_change_nid on MEM_OFFLINE, so 
> hugetlb_register_node() will always return true under such conditions.
> 
> The following should fix it.  Christoph?
> 
> 

Almost missed this one because of the subject.  

What shall we do with this for the huge pages controls series?  

Options:

1) leave series as is, and note that it depends on this patch?

2) Include this patch [or the subset that clears the N_HIGH_MEMORY node
state--maybe leave the kswapd handling separate?] in the series?


Lee

> 
> mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined
> 
> When memory is hot-removed, its node must be cleared in N_HIGH_MEMORY if
> there are no present pages left.
> 
> In such a situation, kswapd must also be stopped since it has nothing
> left to do.
> 
> Cc: Christoph Lameter <cl@linux-foundation.org>
> Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
> Cc: Mel Gorman <mel@csn.ul.ie>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  include/linux/swap.h |    1 +
>  mm/memory_hotplug.c  |    4 ++++
>  mm/vmscan.c          |   28 ++++++++++++++++++++++------
>  3 files changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -273,6 +273,7 @@ extern int scan_unevictable_register_node(struct node *node);
>  extern void scan_unevictable_unregister_node(struct node *node);
>  
>  extern int kswapd_run(int nid);
> +extern void kswapd_stop(int nid);
>  
>  #ifdef CONFIG_MMU
>  /* linux/mm/shmem.c */
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -838,6 +838,10 @@ repeat:
>  
>  	setup_per_zone_wmarks();
>  	calculate_zone_inactive_ratio(zone);
> +	if (!node_present_pages(node)) {
> +		node_clear_state(node, N_HIGH_MEMORY);
> +		kswapd_stop(node);
> +	}
>  
>  	vm_total_pages = nr_free_pagecache_pages();
>  	writeback_set_ratelimit();
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2163,6 +2163,7 @@ static int kswapd(void *p)
>  	order = 0;
>  	for ( ; ; ) {
>  		unsigned long new_order;
> +		int ret;
>  
>  		prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
>  		new_order = pgdat->kswapd_max_order;
> @@ -2174,19 +2175,23 @@ static int kswapd(void *p)
>  			 */
>  			order = new_order;
>  		} else {
> -			if (!freezing(current))
> +			if (!freezing(current) && !kthread_should_stop())
>  				schedule();
>  
>  			order = pgdat->kswapd_max_order;
>  		}
>  		finish_wait(&pgdat->kswapd_wait, &wait);
>  
> -		if (!try_to_freeze()) {
> -			/* We can speed up thawing tasks if we don't call
> -			 * balance_pgdat after returning from the refrigerator
> -			 */
> +		ret = try_to_freeze();
> +		if (kthread_should_stop())
> +			break;
> +
> +		/*
> +		 * We can speed up thawing tasks if we don't call balance_pgdat
> +		 * after returning from the refrigerator
> +		 */
> +		if (!ret)
>  			balance_pgdat(pgdat, order);
> -		}
>  	}
>  	return 0;
>  }
> @@ -2441,6 +2446,17 @@ int kswapd_run(int nid)
>  	return ret;
>  }
>  
> +/*
> + * Called by memory hotplug when all memory in a node is offlined.
> + */
> +void kswapd_stop(int nid)
> +{
> +	struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
> +
> +	if (kswapd)
> +		kthread_stop(kswapd);
> +}
> +
>  static int __init kswapd_init(void)
>  {
>  	int nid;


* Re: [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined
  2009-10-07 16:48     ` Lee Schermerhorn
@ 2009-10-07 19:53       ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07 19:53 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: Andrew Morton, linux-mm, linux-numa, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft,
	Christoph Lameter, eric.whitney, Yasunori Goto

On Wed, 7 Oct 2009, Lee Schermerhorn wrote:

> What shall we do with this for the huge pages controls series?  
> 
> Options:
> 
> 1) leave series as is, and note that it depends on this patch?
> 
> 2) Include this patch [or the subset that clears the N_HIGH_MEMORY node
> state--maybe leave the kswapd handling separate?] in the series?
> 

Probably do the same thing as my "nodemask: make NODEMASK_ALLOC more 
general" patch: add it to your series as a predecessor to v9's patch 11 in 
v10 with

	From: David Rientjes <rientjes@google.com>

as the very first line and pick up my sign-off.  Please cc the same people 
that I did on this patch and add a couple more for the kswapd review that 
Christoph requested:

	Cc: Rafael J. Wysocki <rjw@sisk.pl>
	Cc: Rik van Riel <riel@redhat.com>


* Re: [PATCH 4/11] hugetlb:  derive huge pages nodes allowed from task mempolicy
  2009-10-07 16:30     ` Lee Schermerhorn
@ 2009-10-07 20:09       ` David Rientjes
  0 siblings, 0 replies; 31+ messages in thread
From: David Rientjes @ 2009-10-07 20:09 UTC (permalink / raw)
  To: Lee Schermerhorn
  Cc: linux-mm, linux-numa, akpm, Mel Gorman, Randy Dunlap,
	Nishanth Aravamudan, Adam Litke, Andy Whitcroft, eric.whitney

On Wed, 7 Oct 2009, Lee Schermerhorn wrote:

> > > Index: linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c
> > > ===================================================================
> > > --- linux-2.6.31-mmotm-090925-1435.orig/mm/hugetlb.c	2009-09-30 12:48:45.000000000 -0400
> > > +++ linux-2.6.31-mmotm-090925-1435/mm/hugetlb.c	2009-10-02 21:22:04.000000000 -0400
> > > @@ -1334,29 +1334,71 @@ static struct hstate *kobj_to_hstate(str
> > >  	return NULL;
> > >  }
> > >  
> > > -static ssize_t nr_hugepages_show(struct kobject *kobj,
> > > +static ssize_t nr_hugepages_show_common(struct kobject *kobj,
> > >  					struct kobj_attribute *attr, char *buf)
> > >  {
> > >  	struct hstate *h = kobj_to_hstate(kobj);
> > >  	return sprintf(buf, "%lu\n", h->nr_huge_pages);
> > >  }
> > > -static ssize_t nr_hugepages_store(struct kobject *kobj,
> > > -		struct kobj_attribute *attr, const char *buf, size_t count)
> > > +static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> > > +			struct kobject *kobj, struct kobj_attribute *attr,
> > > +			const char *buf, size_t len)
> > >  {
> > >  	int err;
> > > -	unsigned long input;
> > > +	unsigned long count;
> > >  	struct hstate *h = kobj_to_hstate(kobj);
> > > +	NODEMASK_ALLOC(nodemask, nodes_allowed);
> > >  
> > 
> > In the two places you do NODEMASK_ALLOC(), here and 
> > hugetlb_sysctl_handler(), you'll need to check that nodes_allowed is 
> > non-NULL since it's possible that kmalloc() will return NULL for 
> > CONFIG_NODES_SHIFT > 8.
> > 
> > In such a case, it's probably sufficient to simply set nodes_allowed to 
> > node_states[N_HIGH_MEMORY] so that we can still free hugepages when we're 
> > oom, a common memory freeing tactic.
> > 
> > You could do that by simply returning false from 
> > init_nodemask_of_mempolicy() if !nodes_allowed since NODEMASK_FREE() can 
> > take a NULL pointer, but it may be easier to factor that logic into your 
> > conditional below:
> > 
> > > -	err = strict_strtoul(buf, 10, &input);
> > > +	err = strict_strtoul(buf, 10, &count);
> > >  	if (err)
> > >  		return 0;
> > >  
> > > -	h->max_huge_pages = set_max_huge_pages(h, input, &node_online_map);
> > > +	if (!(obey_mempolicy && init_nodemask_of_mempolicy(nodes_allowed))) {
> > > +		NODEMASK_FREE(nodes_allowed);
> > > +		nodes_allowed = &node_online_map;
> > > +	}
> > > +	h->max_huge_pages = set_max_huge_pages(h, count, nodes_allowed);
> > >
> > 
> > You can get away with just testing !nodes_allowed here since the stack 
> > allocation variation of NODEMASK_ALLOC() is such that nodes_allowed will 
> > always be an initialized pointer pointing to _nodes_allowed so you won't 
> > have an uninitialized warning.
> > 
> > Once that's done, you can get rid of the check for a NULL nodes_allowed in 
> > try_to_free_low() from patch 2 since it will always be valid in 
> > set_max_huge_pages().
> 
> 
> OK.  already removed the NULL check from try_to_free_low().  And I made
> the change to init_nodemask_of_mempolicy to return false on NULL mask.
> 
> I'm not completely happy with dropping back to default behavior
> [node_online_map here, replaced with node_states[N_HIGH_MEMORY] in
> subsequent patch] on failure to allocate nodes_allowed.  We only do the
> NODEMASK_ALLOC when we've come in from either nr_hugepages_mempolicy or
> a per node attribute [subsequent patch], so I'm not sure that ignoring
> the mempolicy, if any, or the specified node id, is a good thing here.
> Not silently, at least.  I haven't addressed this, yet.  We can submit
> an incremental patch.  Thoughts?
> 

Hmm, it's debatable since the NODEMASK_ALLOC() slab allocation is 
GFP_KERNEL, which would cause direct reclaim (and perhaps even the oom 
killer) to free memory.  If the oom killer were invoked, current would 
probably even be killed because of how the oom killer works for 
CONSTRAINT_MEMORY_POLICY.  So the end result is that the pages would 
eventually be freed, because current would get access to memory reserves 
via TIF_MEMDIE, but it would die immediately after returning.  It was 
nice of current to sacrifice itself like that.

Unfortunately, I think the long term solution is that NODEMASK_ALLOC() is 
going to require a gfp parameter to pass to kmalloc(), and in this case we 
should OR in __GFP_NORETRY.  Then, if nodes_allowed can't be allocated, I 
think it would be better to simply return -ENOMEM to userspace so it can 
either reduce the number of global hugepages or free memory in another 
way.  (There might be a caveat where the user's mempolicy already includes 
all online nodes and they use nr_hugepages_mempolicy: they couldn't free 
hugepages because of -ENOMEM but could via nr_hugepages.  I don't think 
you need to address that, though.)

The worst case allocation is probably 512 bytes for CONFIG_NODES_SHIFT of 
12, so I don't think using __GFP_NORETRY here is going to be that 
ridiculous.
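
Something like this, perhaps (sketch only, not compiled; shown for the
kmalloc() variant, the on-stack variant for small CONFIG_NODES_SHIFT
would just ignore the gfp argument):

	#define NODEMASK_ALLOC(type, name, gfp_flags)	\
			type *name = kmalloc(sizeof(*name), gfp_flags)

Callers could then do:

	NODEMASK_ALLOC(nodemask_t, nodes_allowed,
		       GFP_KERNEL | __GFP_NORETRY);
	if (!nodes_allowed)
		return -ENOMEM;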


end of thread

Thread overview: 31+ messages
2009-10-06  3:17 [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Lee Schermerhorn
2009-10-06  3:17 ` [PATCH 1/11] hugetlb: rework hstate_next_node_* functions Lee Schermerhorn
2009-10-06  3:17 ` [PATCH 2/11] hugetlb: add nodemask arg to huge page alloc, free and surplus adjust fcns Lee Schermerhorn
2009-10-06  9:09   ` David Rientjes
2009-10-07  3:26   ` David Rientjes
2009-10-07 14:13     ` Lee Schermerhorn
2009-10-06  3:17 ` [PATCH 3/11] hugetlb: factor init_nodemask_of_node Lee Schermerhorn
2009-10-07  3:21   ` David Rientjes
2009-10-06  3:18 ` [PATCH 4/11] hugetlb: derive huge pages nodes allowed from task mempolicy Lee Schermerhorn
2009-10-07  3:26   ` David Rientjes
2009-10-07 16:30     ` Lee Schermerhorn
2009-10-07 20:09       ` David Rientjes
2009-10-06  3:18 ` [PATCH 5/11] hugetlb: accomodate reworked NODEMASK_ALLOC Lee Schermerhorn
2009-10-06  3:18 ` [PATCH 6/11] hugetlb: add generic definition of NUMA_NO_NODE Lee Schermerhorn
2009-10-06  9:28   ` David Rientjes
2009-10-06  3:18 ` [PATCH 7/11] hugetlb: add per node hstate attributes Lee Schermerhorn
2009-10-07  4:04   ` David Rientjes
2009-10-06  3:18 ` [PATCH 8/11] hugetlb: update hugetlb documentation for NUMA controls Lee Schermerhorn
2009-10-06  3:18 ` [PATCH 9/11] hugetlb: use only nodes with memory for huge pages Lee Schermerhorn
2009-10-06  3:18 ` [PATCH 10/11] hugetlb: handle memory hot-plug events Lee Schermerhorn
2009-10-07  4:12   ` David Rientjes
2009-10-06  3:19 ` [PATCH 11/11] hugetlb: offload per node attribute registrations Lee Schermerhorn
2009-10-06 16:01   ` Andi Kleen
2009-10-06 16:28     ` Lee Schermerhorn
2009-10-06 16:46       ` Andi Kleen
2009-10-06 17:57         ` Lee Schermerhorn
2009-10-07  8:24   ` [patch] mm: clear node in N_HIGH_MEMORY and stop kswapd when all memory is offlined David Rientjes
2009-10-07 14:25     ` Christoph Lameter
2009-10-07 16:48     ` Lee Schermerhorn
2009-10-07 19:53       ` David Rientjes
2009-10-06 16:02 ` [PATCH 0/11] hugetlb: V9 numa control of persistent huge pages alloc/free Andi Kleen
