From: Tejun Heo <tj@kernel.org>
To: mingo@redhat.com, hpa@zytor.com, tglx@linutronix.de,
	benh@kernel.crashing.org, yinghai@kernel.org,
	davem@davemloft.net
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	x86@kernel.org, Tejun Heo <tj@kernel.org>
Subject: [PATCH 7/8] memblock: Separate out memblock_find_in_range_node()
Date: Tue, 12 Jul 2011 10:46:34 +0200	[thread overview]
Message-ID: <1310460395-30913-8-git-send-email-tj@kernel.org> (raw)
In-Reply-To: <1310460395-30913-1-git-send-email-tj@kernel.org>

Node affine memblock allocation logic is currently implemented across
memblock_alloc_nid() and memblock_alloc_nid_region().  This
reorganizes it so that it resembles the non-NUMA allocation API.

The area-finding logic is collected and moved into the new exported
function memblock_find_in_range_node(), which is symmetrical to its
non-NUMA counterpart - it handles @start/@end and understands ANYWHERE
and ACCESSIBLE.  memblock_alloc_nid() now simply calls
memblock_find_in_range_node() and reserves the returned area.

This makes memblock_alloc[_try]_nid() observe the ACCESSIBLE limit on
node affine allocations too (again, this makes no difference for the
current sole user - sparc64).
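
As a rough usage sketch (illustrative only - @size and @nid are assumed
to be supplied by the caller and the PAGE_SIZE alignment is arbitrary),
a node affine user can now find an area and reserve it separately,
falling back to any node via MAX_NUMNODES:

	phys_addr_t base;

	/* find @size bytes affine to @nid below the accessible limit */
	base = memblock_find_in_range_node(0, MEMBLOCK_ALLOC_ACCESSIBLE,
					   size, PAGE_SIZE, nid);
	if (!base)	/* no node affine area, fall back to any node */
		base = memblock_find_in_range_node(0,
						   MEMBLOCK_ALLOC_ACCESSIBLE,
						   size, PAGE_SIZE,
						   MAX_NUMNODES);
	if (base)
		memblock_reserve(base, size);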

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 include/linux/memblock.h |    4 +++
 mm/memblock.c            |   57 +++++++++++++++++++++++++--------------------
 2 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 329ffb2..7400d02 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -61,6 +61,10 @@ extern long memblock_reserve(phys_addr_t base, phys_addr_t size);
 /* The numa aware allocator is only available if
  * CONFIG_ARCH_POPULATES_NODE_MAP is set
  */
+extern phys_addr_t memblock_find_in_range_node(phys_addr_t start,
+					       phys_addr_t end,
+					       phys_addr_t size,
+					       phys_addr_t align, int nid);
 extern phys_addr_t memblock_alloc_nid(phys_addr_t size, phys_addr_t align,
 					int nid);
 extern phys_addr_t memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
diff --git a/mm/memblock.c b/mm/memblock.c
index 447cf64..a8edb42 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -521,49 +521,56 @@ static phys_addr_t __init memblock_nid_range_rev(phys_addr_t start,
 	return start;
 }
 
-static phys_addr_t __init memblock_alloc_nid_region(struct memblock_region *mp,
+phys_addr_t __init memblock_find_in_range_node(phys_addr_t start,
+					       phys_addr_t end,
 					       phys_addr_t size,
 					       phys_addr_t align, int nid)
 {
-	phys_addr_t start, end;
+	struct memblock_type *mem = &memblock.memory;
+	int i;
 
-	start = mp->base;
-	end = start + mp->size;
+	BUG_ON(0 == size);
 
-	while (start < end) {
-		phys_addr_t this_start;
-		int this_nid;
+	/* Pump up max_addr */
+	if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
+		end = memblock.current_limit;
 
-		this_start = memblock_nid_range_rev(start, end, &this_nid);
-		if (this_nid == nid) {
-			phys_addr_t ret = memblock_find_region(this_start, end, size, align);
-			if (ret &&
-			    !memblock_add_region(&memblock.reserved, ret, size))
-				return ret;
+	for (i = mem->cnt - 1; i >= 0; i--) {
+		struct memblock_region *r = &mem->regions[i];
+		phys_addr_t base = max(start, r->base);
+		phys_addr_t top = min(end, r->base + r->size);
+
+		while (base < top) {
+			phys_addr_t tbase, ret;
+			int tnid;
+
+			tbase = memblock_nid_range_rev(base, top, &tnid);
+			if (nid == MAX_NUMNODES || tnid == nid) {
+				ret = memblock_find_region(tbase, top, size, align);
+				if (ret)
+					return ret;
+			}
+			top = tbase;
 		}
-		end = this_start;
 	}
+
 	return 0;
 }
 
 phys_addr_t __init memblock_alloc_nid(phys_addr_t size, phys_addr_t align, int nid)
 {
-	struct memblock_type *mem = &memblock.memory;
-	int i;
-
-	BUG_ON(0 == size);
+	phys_addr_t found;
 
-	/* We align the size to limit fragmentation. Without this, a lot of
+	/*
+	 * We align the size to limit fragmentation. Without this, a lot of
 	 * small allocs quickly eat up the whole reserve array on sparc
 	 */
 	size = round_up(size, align);
 
-	for (i = mem->cnt - 1; i >= 0; i--) {
-		phys_addr_t ret = memblock_alloc_nid_region(&mem->regions[i],
-					       size, align, nid);
-		if (ret)
-			return ret;
-	}
+	found = memblock_find_in_range_node(0, MEMBLOCK_ALLOC_ACCESSIBLE,
+					    size, align, nid);
+	if (found && !memblock_add_region(&memblock.reserved, found, size))
+		return found;
 
 	return 0;
 }
-- 
1.7.6

Thread overview: 18+ messages
2011-07-12  8:46 [PATCHSET x86/mm] memblock, x86: Implement for_each_mem_pfn_range() and use it to improve memblock allocator Tejun Heo
2011-07-12  8:46 ` [PATCH 1/8] bootmem: Replace work_with_active_regions() with for_each_mem_pfn_range() Tejun Heo
2011-07-12  8:46   ` Tejun Heo
2011-07-14  2:11   ` H. Peter Anvin
2011-07-14  7:44     ` Tejun Heo
2011-07-14  7:46   ` [PATCH UPDATED " Tejun Heo
2011-07-12  8:46 ` [PATCH 2/8] bootmem: Reimplement __absent_pages_in_range() using for_each_mem_pfn_range() Tejun Heo
2011-07-12  8:46   ` Tejun Heo
2011-07-12  8:46 ` [PATCH 3/8] bootmem: Use for_each_mem_pfn_range() in page_alloc.c Tejun Heo
2011-07-12  8:46   ` Tejun Heo
2011-07-12  8:46 ` [PATCH 4/8] memblock: Improve generic memblock_nid_range() using for_each_mem_pfn_range() Tejun Heo
2011-07-12  8:46   ` Tejun Heo
2011-07-12  8:46 ` [PATCH 5/8] memblock: Don't allow archs to override memblock_nid_range() Tejun Heo
2011-07-12  8:46   ` Tejun Heo
2011-07-12  8:46 ` [PATCH 6/8] memblock: Make memblock_alloc_[try_]nid() top-down Tejun Heo
2011-07-12  8:46 ` Tejun Heo [this message]
2011-07-12  8:46 ` [PATCH 8/8] memblock, x86: Replace memblock_x86_find_in_range_node() with generic memblock calls Tejun Heo
2011-07-12  8:46   ` Tejun Heo
