* [PATCH] powerpc: Introduce address space "slices"
@ 2007-02-19  6:43 Benjamin Herrenschmidt
  2007-02-19 13:23 ` Jimi Xenidis
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-19  6:43 UTC (permalink / raw)
  To: linuxppc-dev list; +Cc: cbe-oss-dev

powerpc: Introduce address space "slices"

This patch provides some infrastructure that will allow proper creation
of special VMAs with different page sizes on powerpc.

The basic issue is to be able to do what hugetlbfs does, but with
different page sizes, for some other special filesystems; more
specifically, my needs are:

 - hugetlbfs should still work of course :-)

 - SPE local store mappings using 64K pages on a 4K base page size
kernel on Cell

 - Some special 4K segments in 64K-page kernels for mapping a dodgy
species of powerpc-specific InfiniBand hardware that requires 4K MMU
mappings for various reasons I won't explain here.

The main issues are:

 - To maintain/keep track of the page size per "segment" (we can only
have one page size per segment on powerpc; segments are 256MB
divisions of the address space).

 - To make sure special mappings stay within their allotted
"segments" (including MAP_FIXED crap)

 - To make sure everybody else doesn't mmap/brk/grow_stack into a
"segment" that is used for a special mapping

Some of the necessary mechanisms to handle that were present in the
hugetlbfs code, but mostly in ways not suitable for anything else.

The patch provides an infrastructure that addresses most of these in
various ways, described briefly below, by hijacking some of the
existing hugetlbfs callbacks. It does not implement any new feature
like SPE 64K mappings; that will be done in a separate patch. Only
hugetlbfs is actually hooked onto the slice mechanism by this patch.

The ideal solution requires some changes to the generic
get_unmapped_area(), among others, to get rid of the hugetlbfs hacks in
there, and instead, make sure that the fs and mm get_unmapped_area are
also called for MAP_FIXED. We might also need to add an mm callback to
validate a mapping.

I intend to do those changes separately and then adapt this work to use
them.

So what is a slice? Well, I re-used the mechanism formerly used by our
hugetlbfs implementation, which divides the address space into
"meta-segments" that I called "slices". The division is done using
256MB slices below 4G and 1T slices above. Thus the address space is
currently divided into 16 "low" slices and 16 "high" slices. (Special
case: high slice 0 is the area between 4G and 1T.)
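
For illustration, the address-to-slice mapping boils down to the
GET_{LOW,HIGH}_SLICE_INDEX macros added to page_64.h below; a rough
sketch (the helper name here is just for illustration):

	/* Sketch only: which slice an address falls in. 256MB slices
	 * below 4G, 1T slices above; high slice 0 covers the 4G..1T
	 * area (low addresses are handled by the low slices).
	 */
	static inline unsigned long addr_to_slice_index(unsigned long addr)
	{
		if (addr < 0x100000000ul)	/* SLICE_LOW_TOP */
			return addr >> 28;	/* SLICE_LOW_SHIFT, low slices 0..15 */
		return addr >> 40;		/* SLICE_HIGH_SHIFT, high slices 0..15 */
	}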

Doing so significantly simplifies the tracking of segments and avoids
having to keep track of every 256MB segment in the address space.

While I used the "concepts" of hugetlbfs, I mostly re-implemented
everything in a more generic way and "ported" hugetlbfs to it. 

Slices can have an associated page size, which is encoded in the mmu
context and used by the SLB miss handler to set the segment sizes. The
hash code currently doesn't care; it has a specific check for hugepages,
though I might add a mechanism to provide per-slice hash mapping
functions in the future.
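
The encoding itself is just 4 bits per slice packed into the two u64s
in the mm context (low_slices_psize / high_slices_psize), so looking up
a slice's page size is a shift and a mask; a minimal sketch of what
get_slice_psize() in the patch does:

	/* Sketch: 16 slices x 4 bits each fit in one u64 per half of
	 * the address space; the page size index of slice 'index' is:
	 */
	static inline unsigned int slice_psize_of(u64 slices_psize, int index)
	{
		return (slices_psize >> (index * 4)) & 0xf;
	}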

The slice code provides a pair of "generic" get_unmapped_area()
functions (bottom-up and top-down) that should work with any slice
size. There is some trickiness here, so I would appreciate it if people
had a look at the implementation of these and let me know if I got
something wrong.
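
For reference, a caller just passes the wanted page size and the
placement policy; this is exactly how hugetlbfs gets hooked up by the
hugetlbpage.c hunk below:

	return slice_get_unmapped_area(addr, len, flags, mmu_huge_psize,
				       1 /* topdown */,
				       0 /* don't use free_area_cache */);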

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

This version of the patch has been tested quite a bit; it passes
David's libhugetlbfs test suite, among others. I haven't done long-term
kernel stability or other torture tests though.

Index: linux-cell/arch/powerpc/Kconfig
===================================================================
--- linux-cell.orig/arch/powerpc/Kconfig	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/Kconfig	2007-02-19 17:25:29.000000000 +1100
@@ -318,6 +318,11 @@ config PPC_STD_MMU_32
 	def_bool y
 	depends on PPC_STD_MMU && PPC32
 
+config PPC_MM_SLICES
+	bool
+	default y if HUGETLB_PAGE
+	default n
+
 config VIRT_CPU_ACCOUNTING
 	bool "Deterministic task and CPU time accounting"
 	depends on PPC64
Index: linux-cell/include/asm-powerpc/mmu.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/mmu.h	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/include/asm-powerpc/mmu.h	2007-02-19 17:25:29.000000000 +1100
@@ -355,15 +355,17 @@ typedef unsigned long mm_context_id_t;
 
 typedef struct {
 	mm_context_id_t id;
-	u16 user_psize;			/* page size index */
-	u16 sllp;			/* SLB entry page size encoding */
-#ifdef CONFIG_HUGETLB_PAGE
-	u16 low_htlb_areas, high_htlb_areas;
+	u16 user_psize;		/* base page size index */
+
+#ifdef CONFIG_PPC_MM_SLICES
+	u64 low_slices_psize;	/* SLB page size encodings */
+	u64 high_slices_psize;  /* 4 bits per slice for now */
+#else
+	u16 sllp;		/* SLB page size encoding */
 #endif
 	unsigned long vdso_base;
 } mm_context_t;
 
-
 static inline unsigned long vsid_scramble(unsigned long protovsid)
 {
 #if 0
Index: linux-cell/include/asm-powerpc/paca.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/paca.h	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/include/asm-powerpc/paca.h	2007-02-19 17:25:29.000000000 +1100
@@ -82,8 +82,8 @@ struct paca_struct {
 
 	mm_context_t context;
 	u16 vmalloc_sllp;
-	u16 slb_cache[SLB_CACHE_ENTRIES];
 	u16 slb_cache_ptr;
+	u16 slb_cache[SLB_CACHE_ENTRIES];
 
 	/*
 	 * then miscellaneous read-write fields
Index: linux-cell/include/asm-powerpc/page_64.h
===================================================================
--- linux-cell.orig/include/asm-powerpc/page_64.h	2007-02-19 17:24:19.000000000 +1100
+++ linux-cell/include/asm-powerpc/page_64.h	2007-02-19 17:25:42.000000000 +1100
@@ -88,58 +88,57 @@ extern unsigned int HPAGE_SHIFT;
 
 #endif /* __ASSEMBLY__ */
 
-#ifdef CONFIG_HUGETLB_PAGE
+#ifdef CONFIG_PPC_MM_SLICES
+
+#define SLICE_LOW_SHIFT		28
+#define SLICE_HIGH_SHIFT	40
+
+#define SLICE_LOW_TOP		(0x100000000ul)
+#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
+#define SLICE_NUM_HIGH		(PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+
+#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
+#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
+
+#ifndef __ASSEMBLY__
 
-#define HTLB_AREA_SHIFT		40
-#define HTLB_AREA_SIZE		(1UL << HTLB_AREA_SHIFT)
-#define GET_HTLB_AREA(x)	((x) >> HTLB_AREA_SHIFT)
-
-#define LOW_ESID_MASK(addr, len)    \
-	(((1U << (GET_ESID(min((addr)+(len)-1, 0x100000000UL))+1)) \
-	  - (1U << GET_ESID(min((addr), 0x100000000UL)))) & 0xffff)
-#define HTLB_AREA_MASK(addr, len)   (((1U << (GET_HTLB_AREA(addr+len-1)+1)) \
-		                      - (1U << GET_HTLB_AREA(addr))) & 0xffff)
+struct slice_mask {
+	u16 low_slices;
+	u16 high_slices;
+};
+
+struct mm_struct;
+
+extern unsigned long slice_get_unmapped_area(unsigned long addr,
+					     unsigned long len,
+					     unsigned long flags,
+					     unsigned int psize,
+					     int topdown,
+					     int use_cache);
+
+extern unsigned int get_slice_psize(struct mm_struct *mm,
+				    unsigned long addr);
+
+extern void slice_init_context(struct mm_struct *mm, unsigned int psize);
+extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
 
 #define ARCH_HAS_HUGEPAGE_ONLY_RANGE
+extern int is_hugepage_only_range(struct mm_struct *m,
+				  unsigned long addr,
+				  unsigned long len);
+
+#endif /* __ASSEMBLY__ */
+#else
+#define slice_init()
+#endif /* CONFIG_PPC_MM_SLICES */
+
+#ifdef CONFIG_HUGETLB_PAGE
+
 #define ARCH_HAS_HUGETLB_FREE_PGD_RANGE
 #define ARCH_HAS_PREPARE_HUGEPAGE_RANGE
 #define ARCH_HAS_SETCLEAR_HUGE_PTE
-
-#define touches_hugepage_low_range(mm, addr, len) \
-	(((addr) < 0x100000000UL) \
-	 && (LOW_ESID_MASK((addr), (len)) & (mm)->context.low_htlb_areas))
-#define touches_hugepage_high_range(mm, addr, len) \
-	((((addr) + (len)) > 0x100000000UL) \
-	  && (HTLB_AREA_MASK((addr), (len)) & (mm)->context.high_htlb_areas))
-
-#define __within_hugepage_low_range(addr, len, segmask) \
-	( (((addr)+(len)) <= 0x100000000UL) \
-	  && ((LOW_ESID_MASK((addr), (len)) | (segmask)) == (segmask)))
-#define within_hugepage_low_range(addr, len) \
-	__within_hugepage_low_range((addr), (len), \
-				    current->mm->context.low_htlb_areas)
-#define __within_hugepage_high_range(addr, len, zonemask) \
-	( ((addr) >= 0x100000000UL) \
-	  && ((HTLB_AREA_MASK((addr), (len)) | (zonemask)) == (zonemask)))
-#define within_hugepage_high_range(addr, len) \
-	__within_hugepage_high_range((addr), (len), \
-				    current->mm->context.high_htlb_areas)
-
-#define is_hugepage_only_range(mm, addr, len) \
-	(touches_hugepage_high_range((mm), (addr), (len)) || \
-	  touches_hugepage_low_range((mm), (addr), (len)))
 #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
 
-#define in_hugepage_area(context, addr) \
-	(cpu_has_feature(CPU_FTR_16M_PAGE) && \
-	 ( ( (addr) >= 0x100000000UL) \
-	   ? ((1 << GET_HTLB_AREA(addr)) & (context).high_htlb_areas) \
-	   : ((1 << GET_ESID(addr)) & (context).low_htlb_areas) ) )
-
-#else /* !CONFIG_HUGETLB_PAGE */
-
-#define in_hugepage_area(mm, addr)	0
-
 #endif /* !CONFIG_HUGETLB_PAGE */
 
 #ifdef MODULE
Index: linux-cell/arch/powerpc/mm/Makefile
===================================================================
--- linux-cell.orig/arch/powerpc/mm/Makefile	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/Makefile	2007-02-19 17:25:29.000000000 +1100
@@ -18,4 +18,5 @@ obj-$(CONFIG_40x)		+= 4xx_mmu.o
 obj-$(CONFIG_44x)		+= 44x_mmu.o
 obj-$(CONFIG_FSL_BOOKE)		+= fsl_booke_mmu.o
 obj-$(CONFIG_NEED_MULTIPLE_NODES) += numa.o
+obj-$(CONFIG_PPC_MM_SLICES)	+= slice.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
Index: linux-cell/arch/powerpc/mm/slice.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-cell/arch/powerpc/mm/slice.c	2007-02-19 17:36:09.000000000 +1100
@@ -0,0 +1,625 @@
+/*
+ * address space "slices" (meta-segments) support
+ *
+ * Copyright (C) 2007 Benjamin Herrenschmidt, IBM Corporation.
+ *
+ * Based on hugetlb implementation
+ *
+ * Copyright (C) 2003 David Gibson, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#undef DEBUG
+
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/err.h>
+#include <linux/spinlock.h>
+#include <asm/mman.h>
+#include <asm/mmu.h>
+
+static spinlock_t slice_convert_lock = SPIN_LOCK_UNLOCKED;
+
+
+#ifdef DEBUG
+int _slice_debug = 1;
+
+static void slice_print_mask(const char *label, struct slice_mask mask)
+{
+	char	*p, buf[16 + 3 + 16 + 1];
+	int	i;
+
+	if (!_slice_debug)
+		return;
+	p = buf;
+	for (i = 0; i < SLICE_NUM_LOW; i++)
+		*(p++) = (mask.low_slices & (1 << i)) ? '1' : '0';
+	*(p++) = ' ';
+	*(p++) = '-';
+	*(p++) = ' ';
+	for (i = 0; i < SLICE_NUM_HIGH; i++)
+		*(p++) = (mask.high_slices & (1 << i)) ? '1' : '0';
+	*(p++) = 0;
+
+	printk(KERN_DEBUG "%s:%s\n", label, buf);
+}
+
+#define slice_dbg(fmt...) do { if (_slice_debug) pr_debug(fmt); } while(0)
+
+#else
+
+static void slice_print_mask(const char *label, struct slice_mask mask) {}
+#define slice_dbg(fmt...)
+
+#endif
+
+static struct slice_mask slice_range_to_mask(unsigned long start,
+					     unsigned long len)
+{
+	unsigned long end = start + len - 1;
+	struct slice_mask ret = { 0, 0 };
+
+	if (start < SLICE_LOW_TOP) {
+		unsigned long mend = min(end, SLICE_LOW_TOP);
+		unsigned long mstart = min(start, SLICE_LOW_TOP);
+
+		ret.low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
+			- (1u << GET_LOW_SLICE_INDEX(mstart));
+	}
+
+	if ((start + len) > SLICE_LOW_TOP)
+		ret.high_slices = (1u << (GET_HIGH_SLICE_INDEX(end) + 1))
+			- (1u << GET_HIGH_SLICE_INDEX(start));
+
+	return ret;
+}
+
+static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
+			      unsigned long len)
+{
+	struct vm_area_struct *vma;
+
+	if ((mm->task_size - len) < addr)
+		return 0;
+	vma = find_vma(mm, addr);
+	return (!vma || (addr + len) <= vma->vm_start);
+}
+
+static int slice_low_has_vma(struct mm_struct *mm, unsigned long slice)
+{
+	return !slice_area_is_free(mm, slice << SLICE_LOW_SHIFT,
+				   1ul << SLICE_LOW_SHIFT);
+}
+
+static int slice_high_has_vma(struct mm_struct *mm, unsigned long slice)
+{
+	unsigned long start = slice << SLICE_HIGH_SHIFT;
+	unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
+
+	/* Hack, so that each address is controlled by exactly one
+	 * of the high or low area bitmaps, the first high area starts
+	 * at 4GB, not 0 */
+	if (start == 0)
+		start = SLICE_LOW_TOP;
+
+	return !slice_area_is_free(mm, start, end - start);
+}
+
+static struct slice_mask slice_mask_for_free(struct mm_struct *mm)
+{
+	struct slice_mask ret = { 0, 0 };
+	unsigned long i;
+
+	for (i = 0; i < SLICE_NUM_LOW; i++)
+		if (!slice_low_has_vma(mm, i))
+			ret.low_slices |= 1u << i;
+
+	if (mm->task_size <= SLICE_LOW_TOP)
+		return ret;
+
+	for (i = 0; i < SLICE_NUM_HIGH; i++)
+		if (!slice_high_has_vma(mm, i))
+			ret.high_slices |= 1u << i;
+
+	return ret;
+}
+
+static struct slice_mask slice_mask_for_size(struct mm_struct *mm, int psize)
+{
+	struct slice_mask ret = { 0, 0 };
+	unsigned long i;
+	u64 psizes;
+
+	psizes = mm->context.low_slices_psize;
+	for (i = 0; i < SLICE_NUM_LOW; i++)
+		if (((psizes >> (i * 4)) & 0xf) == psize)
+			ret.low_slices |= 1u << i;
+
+	psizes = mm->context.high_slices_psize;
+	for (i = 0; i < SLICE_NUM_HIGH; i++)
+		if (((psizes >> (i * 4)) & 0xf) == psize)
+			ret.high_slices |= 1u << i;
+
+	return ret;
+}
+
+static int slice_check_fit(struct slice_mask mask, struct slice_mask available)
+{
+	return (mask.low_slices & available.low_slices) == mask.low_slices &&
+		(mask.high_slices & available.high_slices) == mask.high_slices;
+}
+
+static void slice_flush_segments(void *parm)
+{
+	struct mm_struct *mm = parm;
+	unsigned long flags;
+
+	if (mm != current->active_mm)
+		return;
+
+	/* update the paca copy of the context struct */
+	get_paca()->context = current->active_mm->context;
+
+	local_irq_save(flags);
+	slb_flush_and_rebolt();
+	local_irq_restore(flags);
+}
+
+static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psize)
+{
+	/* Write the new slice psize bits */
+	u64 lpsizes, hpsizes;
+	unsigned long i, flags;
+
+	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
+	slice_print_mask(" mask", mask);
+
+	/* We need to use a spinlock here to protect against
+	 * concurrent 64k -> 4k demotion ...
+	 */
+	spin_lock_irqsave(&slice_convert_lock, flags);
+
+	lpsizes = mm->context.low_slices_psize;
+	for (i = 0; i < SLICE_NUM_LOW; i++)
+		if (mask.low_slices & (1u << i))
+			lpsizes = (lpsizes & ~(0xful << (i * 4))) |
+				(((unsigned long)psize) << (i * 4));
+
+	hpsizes = mm->context.high_slices_psize;
+	for (i = 0; i < SLICE_NUM_HIGH; i++)
+		if (mask.high_slices & (1u << i))
+			hpsizes = (hpsizes & ~(0xful << (i * 4))) |
+				(((unsigned long)psize) << (i * 4));
+
+	mm->context.low_slices_psize = lpsizes;
+	mm->context.high_slices_psize = hpsizes;
+
+	slice_dbg(" lsps=%lx, hsps=%lx\n",
+		  mm->context.low_slices_psize,
+		  mm->context.high_slices_psize);
+
+	spin_unlock_irqrestore(&slice_convert_lock, flags);
+	mb();
+	/* XXX this is sub-optimal but will do for now */
+	on_each_cpu(slice_flush_segments, mm, 0, 1);
+}
+
+static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
+					      unsigned long len,
+					      struct slice_mask available,
+					      int psize, int use_cache)
+{
+	struct vm_area_struct *vma;
+	unsigned long start_addr, addr;
+	struct slice_mask mask;
+	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+
+	if (use_cache) {
+		if (len <= mm->cached_hole_size) {
+			start_addr = addr = TASK_UNMAPPED_BASE;
+			mm->cached_hole_size = 0;
+		} else
+			start_addr = addr = mm->free_area_cache;
+	} else
+		start_addr = addr = TASK_UNMAPPED_BASE;
+
+full_search:
+	for (;;) {
+		addr = _ALIGN_UP(addr, 1ul << pshift);
+		if ((TASK_SIZE - len) < addr)
+			break;
+		vma = find_vma(mm, addr);
+		BUG_ON(vma && (addr >= vma->vm_end));
+
+		mask = slice_range_to_mask(addr, len);
+		if (!slice_check_fit(mask, available)) {
+			if (addr < SLICE_LOW_TOP)
+				addr = _ALIGN_UP(addr + 1,  1ul << SLICE_LOW_SHIFT);
+			else
+				addr = _ALIGN_UP(addr + 1,  1ul << SLICE_HIGH_SHIFT);
+			continue;
+		}
+		if (!vma || addr + len <= vma->vm_start) {
+			/*
+			 * Remember the place where we stopped the search:
+			 */
+			if (use_cache)
+				mm->free_area_cache = addr + len;
+			return addr;
+		}
+		if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
+		        mm->cached_hole_size = vma->vm_start - addr;
+		addr = vma->vm_end;
+	}
+
+	/* Make sure we didn't miss any holes */
+	if (use_cache && start_addr != TASK_UNMAPPED_BASE) {
+		start_addr = addr = TASK_UNMAPPED_BASE;
+		mm->cached_hole_size = 0;
+		goto full_search;
+	}
+	return -ENOMEM;
+}
+
+static unsigned long slice_find_area_topdown(struct mm_struct *mm,
+					     unsigned long len,
+					     struct slice_mask available,
+					     int psize, int use_cache)
+{
+	struct vm_area_struct *vma;
+	unsigned long addr;
+	struct slice_mask mask;
+	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+
+	/* check if free_area_cache is useful for us */
+	if (use_cache) {
+		if (len <= mm->cached_hole_size) {
+			mm->cached_hole_size = 0;
+			mm->free_area_cache = mm->mmap_base;
+		}
+
+		/* either no address requested or can't fit in requested
+		 * address hole
+		 */
+		addr = mm->free_area_cache;
+
+		/* make sure it can fit in the remaining address space */
+		if (addr > len) {
+			addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+			mask = slice_range_to_mask(addr, len);
+			if (slice_check_fit(mask, available) &&
+			    slice_area_is_free(mm, addr, len))
+					/* remember the address as a hint for
+					 * next time
+					 */
+					return (mm->free_area_cache = addr);
+		}
+	}
+
+	addr = mm->mmap_base;
+	while (addr > len) {
+		/* Go down by chunk size */
+		addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
+
+		/* Check for hit with different page size */
+		mask = slice_range_to_mask(addr, len);
+		if (!slice_check_fit(mask, available)) {
+			if (addr < SLICE_LOW_TOP)
+				addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
+			else if (addr < (1ul << SLICE_HIGH_SHIFT))
+				addr = SLICE_LOW_TOP;
+			else
+				addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
+			continue;
+		}
+
+		/*
+		 * Lookup failure means no vma is above this address,
+		 * else if new region fits below vma->vm_start,
+		 * return with success:
+		 */
+		vma = find_vma(mm, addr);
+		if (!vma || (addr + len) <= vma->vm_start) {
+			/* remember the address as a hint for next time */
+			if (use_cache)
+				mm->free_area_cache = addr;
+			return addr;
+		}
+
+ 		/* remember the largest hole we saw so far */
+ 		if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
+ 		        mm->cached_hole_size = vma->vm_start - addr;
+
+		/* try just below the current vma->vm_start */
+		addr = vma->vm_start;
+	}
+
+	/*
+	 * A failed mmap() very likely causes application failure,
+	 * so fall back to the bottom-up function here. This scenario
+	 * can happen with large stack limits and large mmap()
+	 * allocations.
+	 */
+	addr = slice_find_area_bottomup(mm, len, available, psize, 0);
+
+	/*
+	 * Restore the topdown base:
+	 */
+	if (use_cache) {
+		mm->free_area_cache = mm->mmap_base;
+		mm->cached_hole_size = ~0UL;
+	}
+
+	return addr;
+}
+
+
+static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
+				     struct slice_mask mask, int psize,
+				     int topdown, int use_cache)
+{
+	if (topdown)
+		return slice_find_area_topdown(mm, len, mask, psize, use_cache);
+	else
+		return slice_find_area_bottomup(mm, len, mask, psize, use_cache);
+}
+
+unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+				      unsigned long flags, unsigned int psize,
+				      int topdown, int use_cache)
+{
+	struct slice_mask mask;
+	struct slice_mask good_mask;
+	struct slice_mask potential_mask = {0,0} /* silence stupid warning */;
+	int pmask_set = 0;
+	int fixed = (flags & MAP_FIXED);
+	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+	struct mm_struct *mm = current->mm;
+
+	/* Sanity checks */
+	BUG_ON(mm->task_size == 0);
+
+	slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
+	slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d, use_cache=%d\n",
+		  addr, len, flags, topdown, use_cache);
+
+	if (len > mm->task_size)
+		return -ENOMEM;
+	if (fixed && (addr & ((1ul << pshift) - 1)))
+		return -EINVAL;
+	if (fixed && addr > (mm->task_size - len))
+		return -EINVAL;
+
+	/* If hint, make sure it matches our alignment restrictions */
+	if (!fixed && addr) {
+		addr = _ALIGN_UP(addr, 1ul << pshift);
+		slice_dbg(" aligned addr=%lx\n", addr);
+	}
+
+	/* First make up a "good" mask of slices that have the right size
+	 * already
+	 */
+	good_mask = slice_mask_for_size(mm, psize);
+	slice_print_mask(" good_mask", good_mask);
+
+	/* First check hint if it's valid or if we have MAP_FIXED */
+	if ((addr != 0 || fixed) && (mm->task_size - len) >= addr) {
+
+		/* Don't bother with hint if it overlaps a VMA */
+		if (!fixed && !slice_area_is_free(mm, addr, len))
+			goto search;
+
+		/* Build a mask for the requested range */
+		mask = slice_range_to_mask(addr, len);
+		slice_print_mask(" mask", mask);
+
+		/* Check if we fit in the good mask. If we do, we just return,
+		 * nothing else to do
+		 */
+		if (slice_check_fit(mask, good_mask)) {
+			slice_dbg(" fits good !\n");
+			return addr;
+		}
+
+		/* We don't fit in the good mask, check what other slices are
+		 * empty and thus can be converted
+		 */
+		potential_mask = slice_mask_for_free(mm);
+		potential_mask.low_slices |= good_mask.low_slices;
+		potential_mask.high_slices |= good_mask.high_slices;
+		pmask_set = 1;
+		slice_print_mask(" potential", potential_mask);
+		if (slice_check_fit(mask, potential_mask)) {
+			slice_dbg(" fits potential !\n");
+			goto convert;
+		}
+	}
+
+	/* If we have MAP_FIXED and failed the above step, then error out */
+	if (fixed)
+		return -EBUSY;
+
+ search:
+	slice_dbg(" search...\n");
+
+	/* Now let's see if we can find something in the existing slices
+	 * for that size
+	 */
+	addr = slice_find_area(mm, len, good_mask, psize, topdown, use_cache);
+	if (addr != -ENOMEM) {
+		/* Found within the good mask, we don't have to setup,
+		 * we thus return directly
+		 */
+		slice_dbg(" found area at 0x%lx\n", addr);
+		return addr;
+	}
+
+	/* Won't fit, check what can be converted */
+	if (!pmask_set) {
+		potential_mask = slice_mask_for_free(mm);
+		potential_mask.low_slices |= good_mask.low_slices;
+		potential_mask.high_slices |= good_mask.high_slices;
+		pmask_set = 1;
+		slice_print_mask(" potential", potential_mask);
+	}
+
+	/* Now let's see if we can find something in the potential slices
+	 * (slices of the right size plus free ones we can convert)
+	 */
+	addr = slice_find_area(mm, len, potential_mask, psize, topdown,
+			       use_cache);
+	if (addr == -ENOMEM)
+		return -ENOMEM;
+
+	mask = slice_range_to_mask(addr, len);
+	slice_dbg(" found potential area at 0x%lx\n", addr);
+	slice_print_mask(" mask", mask);
+
+ convert:
+	slice_convert(mm, mask, psize);
+	return addr;
+
+}
+
+unsigned long arch_get_unmapped_area(struct file *filp,
+				     unsigned long addr,
+				     unsigned long len,
+				     unsigned long pgoff,
+				     unsigned long flags)
+{
+	return slice_get_unmapped_area(addr, len, flags,
+				       current->mm->context.user_psize,
+				       0, 1);
+}
+
+unsigned long arch_get_unmapped_area_topdown(struct file *filp,
+					     const unsigned long addr0,
+					     const unsigned long len,
+					     const unsigned long pgoff,
+					     const unsigned long flags)
+{
+	return slice_get_unmapped_area(addr0, len, flags,
+				       current->mm->context.user_psize,
+				       1, 1);
+}
+
+unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+{
+	u64 psizes;
+	int index;
+
+	if (addr < SLICE_LOW_TOP) {
+		psizes = mm->context.low_slices_psize;
+		index = GET_LOW_SLICE_INDEX(addr);
+	} else {
+		psizes = mm->context.high_slices_psize;
+		index = GET_HIGH_SLICE_INDEX(addr);
+	}
+
+	return (psizes >> (index * 4)) & 0xf;
+}
+
+/*
+ * This is called by hash_page when it needs to do a lazy conversion of
+ * an address space from real 64K pages to combo 4K pages (typically
+ * when hitting a non cacheable mapping on a processor or hypervisor
+ * that won't allow them for 64K pages).
+ *
+ * This is also called in init_new_context() to change back the user
+ * psize from whatever the parent context had it set to
+ *
+ * This function will only change the content of the {low,high}_slices_psize
+ * masks, it will not flush SLBs as this shall be handled lazily by the
+ * caller
+ */
+void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
+{
+	unsigned long flags, lpsizes, hpsizes;
+	unsigned int old_psize;
+	int i;
+
+	slice_dbg("slice_set_user_psize(mm=%p, psize=%d)\n", mm, psize);
+
+	spin_lock_irqsave(&slice_convert_lock, flags);
+
+	old_psize = mm->context.user_psize;
+	slice_dbg(" old_psize=%d\n", old_psize);
+	if (old_psize == psize)
+		goto bail;
+
+	mm->context.user_psize = psize;
+	wmb();
+
+	lpsizes = mm->context.low_slices_psize;
+	for (i = 0; i < SLICE_NUM_LOW; i++)
+		if (((lpsizes >> (i * 4)) & 0xf) == old_psize)
+			lpsizes = (lpsizes & ~(0xful << (i * 4))) |
+				(((unsigned long)psize) << (i * 4));
+
+	hpsizes = mm->context.high_slices_psize;
+	for (i = 0; i < SLICE_NUM_HIGH; i++)
+		if (((hpsizes >> (i * 4)) & 0xf) == old_psize)
+			hpsizes = (hpsizes & ~(0xful << (i * 4))) |
+				(((unsigned long)psize) << (i * 4));
+
+	mm->context.low_slices_psize = lpsizes;
+	mm->context.high_slices_psize = hpsizes;
+
+	slice_dbg(" lsps=%lx, hsps=%lx\n",
+		  mm->context.low_slices_psize,
+		  mm->context.high_slices_psize);
+
+ bail:
+	spin_unlock_irqrestore(&slice_convert_lock, flags);
+}
+
+/*
+ * is_hugepage_only_range() is used by generic code to verify whether
+ * a normal mmap mapping (non hugetlbfs) is valid on a given area.
+ *
+ * until the generic code provides a more generic hook and/or starts
+ * calling arch get_unmapped_area for MAP_FIXED (which our implementation
+ * here knows how to deal with), we hijack it to keep standard mappings
+ * away from us.
+ *
+ * because of that generic code limitation, MAP_FIXED mapping cannot
+ * "convert" back a slice with no VMAs to the standard page size, only
+ * get_unmapped_area() can. It would be possible to fix it here but I
+ * prefer working on fixing the generic code instead.
+ *
+ * WARNING: This will not work if hugetlbfs isn't enabled since the
+ * generic code will redefine that function as 0 in that case. This is OK
+ * for now as we only use slices with hugetlbfs enabled. This should
+ * be fixed as the generic code gets fixed.
+ */
+int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
+			   unsigned long len)
+{
+	struct slice_mask mask, available;
+
+	mask = slice_range_to_mask(addr, len);
+	available = slice_mask_for_size(mm, mm->context.user_psize);
+
+#if 0 /* too verbose */
+	slice_dbg("is_hugepage_only_range(mm=%p, addr=%lx, len=%lx)\n",
+		 mm, addr, len);
+	slice_print_mask(" mask", mask);
+	slice_print_mask(" available", available);
+#endif
+	return !slice_check_fit(mask, available);
+}
+
Index: linux-cell/arch/powerpc/kernel/asm-offsets.c
===================================================================
--- linux-cell.orig/arch/powerpc/kernel/asm-offsets.c	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/kernel/asm-offsets.c	2007-02-19 17:25:29.000000000 +1100
@@ -123,12 +123,18 @@ int main(void)
 	DEFINE(PACASLBCACHE, offsetof(struct paca_struct, slb_cache));
 	DEFINE(PACASLBCACHEPTR, offsetof(struct paca_struct, slb_cache_ptr));
 	DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
-	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
 	DEFINE(PACAVMALLOCSLLP, offsetof(struct paca_struct, vmalloc_sllp));
-#ifdef CONFIG_HUGETLB_PAGE
-	DEFINE(PACALOWHTLBAREAS, offsetof(struct paca_struct, context.low_htlb_areas));
-	DEFINE(PACAHIGHHTLBAREAS, offsetof(struct paca_struct, context.high_htlb_areas));
-#endif /* CONFIG_HUGETLB_PAGE */
+#ifdef CONFIG_PPC_MM_SLICES
+	DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
+					    context.low_slices_psize));
+	DEFINE(PACAHIGHSLICEPSIZE, offsetof(struct paca_struct,
+					    context.high_slices_psize));
+	DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
+	DEFINE(MMUPSIZESLLP, offsetof(struct mmu_psize_def, sllp));
+#else
+	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
+
+#endif /* CONFIG_PPC_MM_SLICES */
 	DEFINE(PACA_EXGEN, offsetof(struct paca_struct, exgen));
 	DEFINE(PACA_EXMC, offsetof(struct paca_struct, exmc));
 	DEFINE(PACA_EXSLB, offsetof(struct paca_struct, exslb));
Index: linux-cell/arch/powerpc/mm/hugetlbpage.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hugetlbpage.c	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hugetlbpage.c	2007-02-19 17:36:03.000000000 +1100
@@ -91,7 +91,7 @@ pte_t *huge_pte_offset(struct mm_struct 
 	pgd_t *pg;
 	pud_t *pu;
 
-	BUG_ON(! in_hugepage_area(mm->context, addr));
+	BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
 
 	addr &= HPAGE_MASK;
 
@@ -119,7 +119,7 @@ pte_t *huge_pte_alloc(struct mm_struct *
 	pud_t *pu;
 	hugepd_t *hpdp = NULL;
 
-	BUG_ON(! in_hugepage_area(mm->context, addr));
+	BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
 
 	addr &= HPAGE_MASK;
 
@@ -302,7 +302,7 @@ void hugetlb_free_pgd_range(struct mmu_g
 	start = addr;
 	pgd = pgd_offset((*tlb)->mm, addr);
 	do {
-		BUG_ON(! in_hugepage_area((*tlb)->mm->context, addr));
+		BUG_ON(get_slice_psize((*tlb)->mm, addr) != mmu_huge_psize);
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
@@ -342,186 +342,17 @@ struct slb_flush_info {
 	u16 newareas;
 };
 
-static void flush_low_segments(void *parm)
-{
-	struct slb_flush_info *fi = parm;
-	unsigned long i;
-
-	BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_LOW_AREAS);
-
-	if (current->active_mm != fi->mm)
-		return;
-
-	/* Only need to do anything if this CPU is working in the same
-	 * mm as the one which has changed */
-
-	/* update the paca copy of the context struct */
-	get_paca()->context = current->active_mm->context;
-
-	asm volatile("isync" : : : "memory");
-	for (i = 0; i < NUM_LOW_AREAS; i++) {
-		if (! (fi->newareas & (1U << i)))
-			continue;
-		asm volatile("slbie %0"
-			     : : "r" ((i << SID_SHIFT) | SLBIE_C));
-	}
-	asm volatile("isync" : : : "memory");
-}
-
-static void flush_high_segments(void *parm)
-{
-	struct slb_flush_info *fi = parm;
-	unsigned long i, j;
-
-
-	BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_HIGH_AREAS);
-
-	if (current->active_mm != fi->mm)
-		return;
-
-	/* Only need to do anything if this CPU is working in the same
-	 * mm as the one which has changed */
-
-	/* update the paca copy of the context struct */
-	get_paca()->context = current->active_mm->context;
-
-	asm volatile("isync" : : : "memory");
-	for (i = 0; i < NUM_HIGH_AREAS; i++) {
-		if (! (fi->newareas & (1U << i)))
-			continue;
-		for (j = 0; j < (1UL << (HTLB_AREA_SHIFT-SID_SHIFT)); j++)
-			asm volatile("slbie %0"
-				     :: "r" (((i << HTLB_AREA_SHIFT)
-					      + (j << SID_SHIFT)) | SLBIE_C));
-	}
-	asm volatile("isync" : : : "memory");
-}
-
-static int prepare_low_area_for_htlb(struct mm_struct *mm, unsigned long area)
-{
-	unsigned long start = area << SID_SHIFT;
-	unsigned long end = (area+1) << SID_SHIFT;
-	struct vm_area_struct *vma;
-
-	BUG_ON(area >= NUM_LOW_AREAS);
-
-	/* Check no VMAs are in the region */
-	vma = find_vma(mm, start);
-	if (vma && (vma->vm_start < end))
-		return -EBUSY;
-
-	return 0;
-}
-
-static int prepare_high_area_for_htlb(struct mm_struct *mm, unsigned long area)
-{
-	unsigned long start = area << HTLB_AREA_SHIFT;
-	unsigned long end = (area+1) << HTLB_AREA_SHIFT;
-	struct vm_area_struct *vma;
-
-	BUG_ON(area >= NUM_HIGH_AREAS);
-
-	/* Hack, so that each addresses is controlled by exactly one
-	 * of the high or low area bitmaps, the first high area starts
-	 * at 4GB, not 0 */
-	if (start == 0)
-		start = 0x100000000UL;
-
-	/* Check no VMAs are in the region */
-	vma = find_vma(mm, start);
-	if (vma && (vma->vm_start < end))
-		return -EBUSY;
-
-	return 0;
-}
-
-static int open_low_hpage_areas(struct mm_struct *mm, u16 newareas)
-{
-	unsigned long i;
-	struct slb_flush_info fi;
-
-	BUILD_BUG_ON((sizeof(newareas)*8) != NUM_LOW_AREAS);
-	BUILD_BUG_ON((sizeof(mm->context.low_htlb_areas)*8) != NUM_LOW_AREAS);
-
-	newareas &= ~(mm->context.low_htlb_areas);
-	if (! newareas)
-		return 0; /* The segments we want are already open */
-
-	for (i = 0; i < NUM_LOW_AREAS; i++)
-		if ((1 << i) & newareas)
-			if (prepare_low_area_for_htlb(mm, i) != 0)
-				return -EBUSY;
-
-	mm->context.low_htlb_areas |= newareas;
-
-	/* the context change must make it to memory before the flush,
-	 * so that further SLB misses do the right thing. */
-	mb();
-
-	fi.mm = mm;
-	fi.newareas = newareas;
-	on_each_cpu(flush_low_segments, &fi, 0, 1);
-
-	return 0;
-}
-
-static int open_high_hpage_areas(struct mm_struct *mm, u16 newareas)
-{
-	struct slb_flush_info fi;
-	unsigned long i;
-
-	BUILD_BUG_ON((sizeof(newareas)*8) != NUM_HIGH_AREAS);
-	BUILD_BUG_ON((sizeof(mm->context.high_htlb_areas)*8)
-		     != NUM_HIGH_AREAS);
-
-	newareas &= ~(mm->context.high_htlb_areas);
-	if (! newareas)
-		return 0; /* The areas we want are already open */
-
-	for (i = 0; i < NUM_HIGH_AREAS; i++)
-		if ((1 << i) & newareas)
-			if (prepare_high_area_for_htlb(mm, i) != 0)
-				return -EBUSY;
-
-	mm->context.high_htlb_areas |= newareas;
-
-	/* the context change must make it to memory before the flush,
-	 * so that further SLB misses do the right thing. */
-	mb();
-
-	fi.mm = mm;
-	fi.newareas = newareas;
-	on_each_cpu(flush_high_segments, &fi, 0, 1);
-
-	return 0;
-}
 
 int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
 {
-	int err = 0;
+	unsigned long gua_addr;
 
-	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
-		return -EINVAL;
-	if (len & ~HPAGE_MASK)
-		return -EINVAL;
-	if (addr & ~HPAGE_MASK)
-		return -EINVAL;
-
-	if (addr < 0x100000000UL)
-		err = open_low_hpage_areas(current->mm,
-					  LOW_ESID_MASK(addr, len));
-	if ((addr + len) > 0x100000000UL)
-		err = open_high_hpage_areas(current->mm,
-					    HTLB_AREA_MASK(addr, len));
-	if (err) {
-		printk(KERN_DEBUG "prepare_hugepage_range(%lx, %lx)"
-		       " failed (lowmask: 0x%04hx, highmask: 0x%04hx)\n",
-		       addr, len,
-		       LOW_ESID_MASK(addr, len), HTLB_AREA_MASK(addr, len));
-		return err;
-	}
+	printk("prepare_hugepage_range(addr=0x%lx, len=0x%lx\n", addr, len);
 
-	return 0;
+	/* This is only useful for MAP_FIXED so we turn it into that */
+	gua_addr = slice_get_unmapped_area(addr, len, MAP_FIXED,
+					   mmu_huge_psize, 1, 0);
+	return gua_addr == addr ? 0 : -EINVAL;
 }
 
 struct page *
@@ -530,7 +361,7 @@ follow_huge_addr(struct mm_struct *mm, u
 	pte_t *ptep;
 	struct page *page;
 
-	if (! in_hugepage_area(mm->context, address))
+	if (get_slice_psize(mm, address) != mmu_huge_psize)
 		return ERR_PTR(-EINVAL);
 
 	ptep = huge_pte_offset(mm, address);
@@ -554,338 +385,12 @@ follow_huge_pmd(struct mm_struct *mm, un
 	return NULL;
 }
 
-/* Because we have an exclusive hugepage region which lies within the
- * normal user address space, we have to take special measures to make
- * non-huge mmap()s evade the hugepage reserved regions. */
-unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
-				     unsigned long len, unsigned long pgoff,
-				     unsigned long flags)
-{
-	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
-	unsigned long start_addr;
-
-	if (len > TASK_SIZE)
-		return -ENOMEM;
-
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (((TASK_SIZE - len) >= addr)
-		    && (!vma || (addr+len) <= vma->vm_start)
-		    && !is_hugepage_only_range(mm, addr,len))
-			return addr;
-	}
-	if (len > mm->cached_hole_size) {
-	        start_addr = addr = mm->free_area_cache;
-	} else {
-	        start_addr = addr = TASK_UNMAPPED_BASE;
-	        mm->cached_hole_size = 0;
-	}
-
-full_search:
-	vma = find_vma(mm, addr);
-	while (TASK_SIZE - len >= addr) {
-		BUG_ON(vma && (addr >= vma->vm_end));
-
-		if (touches_hugepage_low_range(mm, addr, len)) {
-			addr = ALIGN(addr+1, 1<<SID_SHIFT);
-			vma = find_vma(mm, addr);
-			continue;
-		}
-		if (touches_hugepage_high_range(mm, addr, len)) {
-			addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
-			vma = find_vma(mm, addr);
-			continue;
-		}
-		if (!vma || addr + len <= vma->vm_start) {
-			/*
-			 * Remember the place where we stopped the search:
-			 */
-			mm->free_area_cache = addr + len;
-			return addr;
-		}
-		if (addr + mm->cached_hole_size < vma->vm_start)
-		        mm->cached_hole_size = vma->vm_start - addr;
-		addr = vma->vm_end;
-		vma = vma->vm_next;
-	}
-
-	/* Make sure we didn't miss any holes */
-	if (start_addr != TASK_UNMAPPED_BASE) {
-		start_addr = addr = TASK_UNMAPPED_BASE;
-		mm->cached_hole_size = 0;
-		goto full_search;
-	}
-	return -ENOMEM;
-}
-
-/*
- * This mmap-allocator allocates new areas top-down from below the
- * stack's low limit (the base):
- *
- * Because we have an exclusive hugepage region which lies within the
- * normal user address space, we have to take special measures to make
- * non-huge mmap()s evade the hugepage reserved regions.
- */
-unsigned long
-arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
-			  const unsigned long len, const unsigned long pgoff,
-			  const unsigned long flags)
-{
-	struct vm_area_struct *vma, *prev_vma;
-	struct mm_struct *mm = current->mm;
-	unsigned long base = mm->mmap_base, addr = addr0;
-	unsigned long largest_hole = mm->cached_hole_size;
-	int first_time = 1;
-
-	/* requested length too big for entire address space */
-	if (len > TASK_SIZE)
-		return -ENOMEM;
-
-	/* dont allow allocations above current base */
-	if (mm->free_area_cache > base)
-		mm->free_area_cache = base;
-
-	/* requesting a specific address */
-	if (addr) {
-		addr = PAGE_ALIGN(addr);
-		vma = find_vma(mm, addr);
-		if (TASK_SIZE - len >= addr &&
-				(!vma || addr + len <= vma->vm_start)
-				&& !is_hugepage_only_range(mm, addr,len))
-			return addr;
-	}
-
-	if (len <= largest_hole) {
-	        largest_hole = 0;
-		mm->free_area_cache = base;
-	}
-try_again:
-	/* make sure it can fit in the remaining address space */
-	if (mm->free_area_cache < len)
-		goto fail;
-
-	/* either no address requested or cant fit in requested address hole */
-	addr = (mm->free_area_cache - len) & PAGE_MASK;
-	do {
-hugepage_recheck:
-		if (touches_hugepage_low_range(mm, addr, len)) {
-			addr = (addr & ((~0) << SID_SHIFT)) - len;
-			goto hugepage_recheck;
-		} else if (touches_hugepage_high_range(mm, addr, len)) {
-			addr = (addr & ((~0UL) << HTLB_AREA_SHIFT)) - len;
-			goto hugepage_recheck;
-		}
-
-		/*
-		 * Lookup failure means no vma is above this address,
-		 * i.e. return with success:
-		 */
- 	 	if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
-			return addr;
-
-		/*
-		 * new region fits between prev_vma->vm_end and
-		 * vma->vm_start, use it:
-		 */
-		if (addr+len <= vma->vm_start &&
-		          (!prev_vma || (addr >= prev_vma->vm_end))) {
-			/* remember the address as a hint for next time */
-		        mm->cached_hole_size = largest_hole;
-		        return (mm->free_area_cache = addr);
-		} else {
-			/* pull free_area_cache down to the first hole */
-		        if (mm->free_area_cache == vma->vm_end) {
-				mm->free_area_cache = vma->vm_start;
-				mm->cached_hole_size = largest_hole;
-			}
-		}
-
-		/* remember the largest hole we saw so far */
-		if (addr + largest_hole < vma->vm_start)
-		        largest_hole = vma->vm_start - addr;
-
-		/* try just below the current vma->vm_start */
-		addr = vma->vm_start-len;
-	} while (len <= vma->vm_start);
-
-fail:
-	/*
-	 * if hint left us with no space for the requested
-	 * mapping then try again:
-	 */
-	if (first_time) {
-		mm->free_area_cache = base;
-		largest_hole = 0;
-		first_time = 0;
-		goto try_again;
-	}
-	/*
-	 * A failed mmap() very likely causes application failure,
-	 * so fall back to the bottom-up function here. This scenario
-	 * can happen with large stack limits and large mmap()
-	 * allocations.
-	 */
-	mm->free_area_cache = TASK_UNMAPPED_BASE;
-	mm->cached_hole_size = ~0UL;
-	addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
-	/*
-	 * Restore the topdown base:
-	 */
-	mm->free_area_cache = base;
-	mm->cached_hole_size = ~0UL;
-
-	return addr;
-}
-
-static int htlb_check_hinted_area(unsigned long addr, unsigned long len)
-{
-	struct vm_area_struct *vma;
-
-	vma = find_vma(current->mm, addr);
-	if (TASK_SIZE - len >= addr &&
-	    (!vma || ((addr + len) <= vma->vm_start)))
-		return 0;
-
-	return -ENOMEM;
-}
-
-static unsigned long htlb_get_low_area(unsigned long len, u16 segmask)
-{
-	unsigned long addr = 0;
-	struct vm_area_struct *vma;
-
-	vma = find_vma(current->mm, addr);
-	while (addr + len <= 0x100000000UL) {
-		BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
-
-		if (! __within_hugepage_low_range(addr, len, segmask)) {
-			addr = ALIGN(addr+1, 1<<SID_SHIFT);
-			vma = find_vma(current->mm, addr);
-			continue;
-		}
-
-		if (!vma || (addr + len) <= vma->vm_start)
-			return addr;
-		addr = ALIGN(vma->vm_end, HPAGE_SIZE);
-		/* Depending on segmask this might not be a confirmed
-		 * hugepage region, so the ALIGN could have skipped
-		 * some VMAs */
-		vma = find_vma(current->mm, addr);
-	}
-
-	return -ENOMEM;
-}
-
-static unsigned long htlb_get_high_area(unsigned long len, u16 areamask)
-{
-	unsigned long addr = 0x100000000UL;
-	struct vm_area_struct *vma;
-
-	vma = find_vma(current->mm, addr);
-	while (addr + len <= TASK_SIZE_USER64) {
-		BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
-
-		if (! __within_hugepage_high_range(addr, len, areamask)) {
-			addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
-			vma = find_vma(current->mm, addr);
-			continue;
-		}
-
-		if (!vma || (addr + len) <= vma->vm_start)
-			return addr;
-		addr = ALIGN(vma->vm_end, HPAGE_SIZE);
-		/* Depending on segmask this might not be a confirmed
-		 * hugepage region, so the ALIGN could have skipped
-		 * some VMAs */
-		vma = find_vma(current->mm, addr);
-	}
-
-	return -ENOMEM;
-}
-
 unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 					unsigned long len, unsigned long pgoff,
 					unsigned long flags)
 {
-	int lastshift;
-	u16 areamask, curareas;
-
-	if (HPAGE_SHIFT == 0)
-		return -EINVAL;
-	if (len & ~HPAGE_MASK)
-		return -EINVAL;
-	if (len > TASK_SIZE)
-		return -ENOMEM;
-
-	if (!cpu_has_feature(CPU_FTR_16M_PAGE))
-		return -EINVAL;
-
-	/* Paranoia, caller should have dealt with this */
-	BUG_ON((addr + len)  < addr);
-
-	if (test_thread_flag(TIF_32BIT)) {
-		curareas = current->mm->context.low_htlb_areas;
-
-		/* First see if we can use the hint address */
-		if (addr && (htlb_check_hinted_area(addr, len) == 0)) {
-			areamask = LOW_ESID_MASK(addr, len);
-			if (open_low_hpage_areas(current->mm, areamask) == 0)
-				return addr;
-		}
-
-		/* Next see if we can map in the existing low areas */
-		addr = htlb_get_low_area(len, curareas);
-		if (addr != -ENOMEM)
-			return addr;
-
-		/* Finally go looking for areas to open */
-		lastshift = 0;
-		for (areamask = LOW_ESID_MASK(0x100000000UL-len, len);
-		     ! lastshift; areamask >>=1) {
-			if (areamask & 1)
-				lastshift = 1;
-
-			addr = htlb_get_low_area(len, curareas | areamask);
-			if ((addr != -ENOMEM)
-			    && open_low_hpage_areas(current->mm, areamask) == 0)
-				return addr;
-		}
-	} else {
-		curareas = current->mm->context.high_htlb_areas;
-
-		/* First see if we can use the hint address */
-		/* We discourage 64-bit processes from doing hugepage
-		 * mappings below 4GB (must use MAP_FIXED) */
-		if ((addr >= 0x100000000UL)
-		    && (htlb_check_hinted_area(addr, len) == 0)) {
-			areamask = HTLB_AREA_MASK(addr, len);
-			if (open_high_hpage_areas(current->mm, areamask) == 0)
-				return addr;
-		}
-
-		/* Next see if we can map in the existing high areas */
-		addr = htlb_get_high_area(len, curareas);
-		if (addr != -ENOMEM)
-			return addr;
-
-		/* Finally go looking for areas to open */
-		lastshift = 0;
-		for (areamask = HTLB_AREA_MASK(TASK_SIZE_USER64-len, len);
-		     ! lastshift; areamask >>=1) {
-			if (areamask & 1)
-				lastshift = 1;
-
-			addr = htlb_get_high_area(len, curareas | areamask);
-			if ((addr != -ENOMEM)
-			    && open_high_hpage_areas(current->mm, areamask) == 0)
-				return addr;
-		}
-	}
-	printk(KERN_DEBUG "hugetlb_get_unmapped_area() unable to open"
-	       " enough areas\n");
-	return -ENOMEM;
+	return slice_get_unmapped_area(addr, len, flags,
+				       mmu_huge_psize, 1, 0);
 }
 
 /*
Index: linux-cell/arch/powerpc/mm/mmu_context_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/mmu_context_64.c	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/mmu_context_64.c	2007-02-19 17:25:30.000000000 +1100
@@ -28,6 +28,7 @@ int init_new_context(struct task_struct 
 {
 	int index;
 	int err;
+	int new_context = (mm->context.id == 0);
 
 again:
 	if (!idr_pre_get(&mmu_context_idr, GFP_KERNEL))
@@ -50,9 +51,18 @@ again:
 	}
 
 	mm->context.id = index;
+#ifdef CONFIG_PPC_MM_SLICES
+	/* The old code would re-promote on fork, we don't do that
+	 * when using slices as it could cause problem promoting slices
+	 * that have been forced down to 4K
+	 */
+	if (new_context)
+		slice_set_user_psize(mm, mmu_virtual_psize);
+#else
 	mm->context.user_psize = mmu_virtual_psize;
 	mm->context.sllp = SLB_VSID_USER |
 		mmu_psize_defs[mmu_virtual_psize].sllp;
+#endif
 
 	return 0;
 }
Index: linux-cell/arch/powerpc/mm/slb_low.S
===================================================================
--- linux-cell.orig/arch/powerpc/mm/slb_low.S	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/slb_low.S	2007-02-19 17:25:30.000000000 +1100
@@ -82,31 +82,45 @@ _GLOBAL(slb_miss_kernel_load_io)
 	srdi.	r9,r10,USER_ESID_BITS
 	bne-	8f			/* invalid ea bits set */
 
-	/* Figure out if the segment contains huge pages */
-#ifdef CONFIG_HUGETLB_PAGE
-BEGIN_FTR_SECTION
-	b	1f
-END_FTR_SECTION_IFCLR(CPU_FTR_16M_PAGE)
+
+	/* when using slices, we extract the psize off the slice bitmaps
+	 * and then we need to get the sllp encoding off the mmu_psize_defs
+	 * array.
+	 *
+	 * XXX This is a bit inefficient especially for the normal case,
+	 * so we should try to implement a fast path for the standard page
+	 * size using the old sllp value so we avoid the array. We cannot
+	 * really do dynamic patching unfortunately as processes might flip
+	 * between 4k and 64k standard page size
+	 */
+#ifdef CONFIG_PPC_MM_SLICES
 	cmpldi	r10,16
 
-	lhz	r9,PACALOWHTLBAREAS(r13)
-	mr	r11,r10
+	/* Get the slice index * 4 in r11 and matching slice size mask in r9 */
+	ld	r9,PACALOWSLICESPSIZE(r13)
+	sldi	r11,r10,2
 	blt	5f
+	ld	r9,PACAHIGHSLICEPSIZE(r13)
+	srdi	r11,r10,(SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT - 2)
+	andi.	r11,r11,0x3c
+
+5:	/* Extract the psize and multiply to get an array offset */
+	srd	r9,r9,r11
+	andi.	r9,r9,0xf
+	mulli	r9,r9,MMUPSIZEDEFSIZE
 
-	lhz	r9,PACAHIGHHTLBAREAS(r13)
-	srdi	r11,r10,(HTLB_AREA_SHIFT-SID_SHIFT)
-
-5:	srd	r9,r9,r11
-	andi.	r9,r9,1
-	beq	1f
-_GLOBAL(slb_miss_user_load_huge)
-	li	r11,0
-	b	2f
-1:
-#endif /* CONFIG_HUGETLB_PAGE */
-
+	/* Now get to the array and obtain the sllp
+	 */
+	ld	r11,PACATOC(r13)
+	ld	r11,mmu_psize_defs@got(r11)
+	add	r11,r11,r9
+	ld	r11,MMUPSIZESLLP(r11)
+	ori	r11,r11,SLB_VSID_USER
+#else
+	/* paca context sllp already contains the SLB_VSID_USER bits */
 	lhz	r11,PACACONTEXTSLLP(r13)
-2:
+#endif /* CONFIG_PPC_MM_SLICES */
+
 	ld	r9,PACACONTEXTID(r13)
 	rldimi	r10,r9,USER_ESID_BITS,0
 	b	slb_finish_load
Index: linux-cell/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/hash_utils_64.c	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/hash_utils_64.c	2007-02-19 17:33:23.000000000 +1100
@@ -573,6 +573,51 @@ unsigned int hash_page_do_lazy_icache(un
 	return pp;
 }
 
+#ifdef CONFIG_PPC_64K_PAGES
+static int hash_handle_ci_restrictions(struct mm_struct *mm, unsigned long ea,
+				       pte_t *ptep, int psize, int user)
+{
+	/* If this PTE is non-cacheable, switch to 4k */
+	if (psize == MMU_PAGE_64K &&
+	    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
+		if (user) {
+			printk(KERN_INFO "Demoting page size of %s\n",
+			       current->comm);
+			psize = MMU_PAGE_4K;
+#ifdef CONFIG_PPC_MM_SLICES
+			slice_set_user_psize(mm, psize);
+#else
+			mm->context.user_psize = MMU_PAGE_4K;
+			mm->context.sllp = SLB_VSID_USER |
+				mmu_psize_defs[MMU_PAGE_4K].sllp;
+#endif
+		} else if (ea < VMALLOC_END) {
+			/*
+			 * some driver did a non-cacheable mapping
+			 * in vmalloc space, so switch vmalloc
+			 * to 4k pages
+			 */
+			printk(KERN_ALERT "Reducing vmalloc segment "
+			       "to 4kB pages because of "
+			       "non-cacheable mapping\n");
+			psize = mmu_vmalloc_psize = MMU_PAGE_4K;
+		}
+	}
+	if (user) {
+		if (psize != get_paca()->context.user_psize) {
+			get_paca()->context = mm->context;
+			slb_flush_and_rebolt();
+		}
+	} else if (get_paca()->vmalloc_sllp !=
+		   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
+		get_paca()->vmalloc_sllp =
+			mmu_psize_defs[mmu_vmalloc_psize].sllp;
+		slb_flush_and_rebolt();
+	}
+	return psize;
+}
+#endif /* CONFIG_PPC_64K_PAGES */
+
 /* Result code is:
  *  0 - handled
  *  1 - normal page fault
@@ -635,7 +680,8 @@ int hash_page(unsigned long ea, unsigned
 		local = 1;
 
 	/* Handle hugepage regions */
-	if (unlikely(in_hugepage_area(mm->context, ea))) {
+	if (HPAGE_SHIFT &&
+	    unlikely(get_slice_psize(mm, ea) == mmu_huge_psize)) {
 		DBG_LOW(" -> huge page !\n");
 		return hash_huge_page(mm, access, ea, vsid, local, trap);
 	}
@@ -665,39 +711,9 @@ int hash_page(unsigned long ea, unsigned
 #ifndef CONFIG_PPC_64K_PAGES
 	rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);
 #else
-	if (mmu_ci_restrictions) {
-		/* If this PTE is non-cacheable, switch to 4k */
-		if (psize == MMU_PAGE_64K &&
-		    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
-			if (user_region) {
-				psize = MMU_PAGE_4K;
-				mm->context.user_psize = MMU_PAGE_4K;
-				mm->context.sllp = SLB_VSID_USER |
-					mmu_psize_defs[MMU_PAGE_4K].sllp;
-			} else if (ea < VMALLOC_END) {
-				/*
-				 * some driver did a non-cacheable mapping
-				 * in vmalloc space, so switch vmalloc
-				 * to 4k pages
-				 */
-				printk(KERN_ALERT "Reducing vmalloc segment "
-				       "to 4kB pages because of "
-				       "non-cacheable mapping\n");
-				psize = mmu_vmalloc_psize = MMU_PAGE_4K;
-			}
-		}
-		if (user_region) {
-			if (psize != get_paca()->context.user_psize) {
-				get_paca()->context = mm->context;
-				slb_flush_and_rebolt();
-			}
-		} else if (get_paca()->vmalloc_sllp !=
-			   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
-			get_paca()->vmalloc_sllp =
-				mmu_psize_defs[mmu_vmalloc_psize].sllp;
-			slb_flush_and_rebolt();
-		}
-	}
+	if (mmu_ci_restrictions)
+		psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, user_region);
+
 	if (psize == MMU_PAGE_64K)
 		rc = __hash_page_64K(ea, access, vsid, ptep, trap, local);
 	else
@@ -723,13 +739,16 @@ void hash_preload(struct mm_struct *mm, 
 	pte_t *ptep;
 	cpumask_t mask;
 	unsigned long flags;
+	int psize;
 	int local = 0;
 
 	/* We don't want huge pages prefaulted for now
 	 */
-	if (unlikely(in_hugepage_area(mm->context, ea)))
+	if (HPAGE_SHIFT && unlikely(get_slice_psize(mm, ea) == mmu_huge_psize))
 		return;
 
+	BUG_ON(REGION_ID(ea) != USER_REGION_ID);
+
 	DBG_LOW("hash_preload(mm=%p, mm->pgdir=%p, ea=%016lx, access=%lx,"
 		" trap=%lx\n", mm, mm->pgd, ea, access, trap);
 
@@ -747,21 +766,13 @@ void hash_preload(struct mm_struct *mm, 
 	mask = cpumask_of_cpu(smp_processor_id());
 	if (cpus_equal(mm->cpu_vm_mask, mask))
 		local = 1;
+	psize = mm->context.user_psize;
 #ifndef CONFIG_PPC_64K_PAGES
 	__hash_page_4K(ea, access, vsid, ptep, trap, local);
 #else
-	if (mmu_ci_restrictions) {
-		/* If this PTE is non-cacheable, switch to 4k */
-		if (mm->context.user_psize == MMU_PAGE_64K &&
-		    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
-			mm->context.user_psize = MMU_PAGE_4K;
-			mm->context.sllp = SLB_VSID_USER |
-				mmu_psize_defs[MMU_PAGE_4K].sllp;
-			get_paca()->context = mm->context;
-			slb_flush_and_rebolt();
-		}
-	}
-	if (mm->context.user_psize == MMU_PAGE_64K)
+	if (mmu_ci_restrictions)
+		psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, 1);
+	if (psize == MMU_PAGE_64K)
 		__hash_page_64K(ea, access, vsid, ptep, trap, local);
 	else
 		__hash_page_4K(ea, access, vsid, ptep, trap, local);
Index: linux-cell/arch/powerpc/mm/slb.c
===================================================================
--- linux-cell.orig/arch/powerpc/mm/slb.c	2007-02-19 17:24:18.000000000 +1100
+++ linux-cell/arch/powerpc/mm/slb.c	2007-02-19 17:25:30.000000000 +1100
@@ -198,12 +198,6 @@ void slb_initialize(void)
 	static int slb_encoding_inited;
 	extern unsigned int *slb_miss_kernel_load_linear;
 	extern unsigned int *slb_miss_kernel_load_io;
-#ifdef CONFIG_HUGETLB_PAGE
-	extern unsigned int *slb_miss_user_load_huge;
-	unsigned long huge_llp;
-
-	huge_llp = mmu_psize_defs[mmu_huge_psize].sllp;
-#endif
 
 	/* Prepare our SLB miss handler based on our page size */
 	linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
@@ -220,11 +214,6 @@ void slb_initialize(void)
 
 		DBG("SLB: linear  LLP = %04x\n", linear_llp);
 		DBG("SLB: io      LLP = %04x\n", io_llp);
-#ifdef CONFIG_HUGETLB_PAGE
-		patch_slb_encoding(slb_miss_user_load_huge,
-				   SLB_VSID_USER | huge_llp);
-		DBG("SLB: huge    LLP = %04x\n", huge_llp);
-#endif
 	}
 
 	get_paca()->stab_rr = SLB_NUM_BOLTED;


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19  6:43 [PATCH] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
@ 2007-02-19 13:23 ` Jimi Xenidis
  2007-02-19 19:42   ` Benjamin Herrenschmidt
  2007-02-19 15:33 ` Olof Johansson
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 18+ messages in thread
From: Jimi Xenidis @ 2007-02-19 13:23 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Hollis R. Blanchard
  Cc: linuxppc-dev list, cbe-oss-dev

Hey Ben,
is it your intention to eventually support "/memory@x/ibm,expected#pages",
and will these rules affect the kernel's linear map?
My interest is with hypervisors that would like to restrict certain
LMBs to 4k so we can do our nasty memory tricks, but still have most
of the memory use large pages.
-JX

On Feb 19, 2007, at 1:43 AM, Benjamin Herrenschmidt wrote:

> powerpc: Introduce address space "slices"
>
> This patch provide some infrastructure that will allow proper creation
> of special VMAs with different page sizes on powerpc.
>
> The basic issue is to be able to do what hugetlbfs does but with
> different page sizes for some other special filesystems, more
> specifically, my need is:
>
>  - hugetlbfs should still work of course :-)
>
>  - SPE local store mappings using 64K pages on a 4K base page size
> kernel on Cell
>
>  - Some special 4K segments in 64K pages kernels for mapping a dodgy
> specie of powerpc specific infiniband hardware that requires 4K MMU
> mappings for various reasons I won't explain here.
>
> The main issues are:
>
>  - To maintain/keep track of the page size per "segments" (as we can
> only have one page size per segment on powerpc, which are 256MB
> divisions of the address space).
>
>  - To make sure special mappings stay within their alloted
> "segments" (including MAP_FIXED crap)
>
>  - To make sure everybody else doesn't mmap/brk/grow_stack into a
> "segment" that is used for a special mapping
>
> Some of the necessary mecanisms to handle that were present in the
> hugetlbfs code, but mostly in ways not suitable for anything else.
>
> The patch provides an infrastructure that addresses most of these
> in various ways described quickly below that hijack some of the
> existing hugetlbfs callbacks. It does not implement any new feature
> like SPE 64K mappings, that will be done by separate patch. Only
> hugetlbfs is actually hooked on the slices mecanism by this patch.
>
> The ideal solution requires some changes to the generic
> get_unmapped_area(), among others, to get rid of the hugetlbfs  
> hacks in
> there, and instead, make sure that the fs and mm get_unmapped_area are
> also called for MAP_FIXED. We might also need to add an mm callback to
> validate a mapping.
>
> I intend to do those changes separately and then adapt this work to  
> use
> them.
>
> So what is a slice ? Well, I re-used the mecanism used formerly by our
> hugetlbfs implementation which divides the address space in
> "meta-segments" which I called "slices". The division is done using
> 256MB slices below 4G, and 1T slices above. Thus the address space is
> divided currently into 16 "low" slices and 16 "high" slices. (Special
> case: high slice 0 is the area between 4G and 1T).
>
> Doing so simplifies significantly the tracking of segments and avoid
> having to keep track of all the 256MB segments in the address space.
>
> While I used the "concepts" of hugetlbfs, I mostly re-implemented
> everything in a more generic way and "ported" hugetlbfs to it.
>
> slices can have an associated page size, which is encoded in the mmu
> context and used by the SLB miss handler to set the segment sizes. The
> hash code currently doesn't care, it has a specific check for  
> hugepages,
> though I might add a mecanism to provide per-slice hash mapping
> functions in the future.
>
> The slice code provide a pair of "generic" get_unmapped_area()  
> (bottomup
> and topdown) functions that should work with any slice size. There is
> some trickyness here so I would appreciate people to have a look at  
> the
> implementation of these and let me know if I got something wrong.
>
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> ---
>
> This version of the patch has been tested quite a bit; it passes
> David's libhugetlbfs test suite among others. I haven't done long-term
> kernel stability or other torture tests though.
>
> Index: linux-cell/arch/powerpc/Kconfig
> ===================================================================
> --- linux-cell.orig/arch/powerpc/Kconfig	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/Kconfig	2007-02-19 17:25:29.000000000  
> +1100
> @@ -318,6 +318,11 @@ config PPC_STD_MMU_32
>  	def_bool y
>  	depends on PPC_STD_MMU && PPC32
>
> +config PPC_MM_SLICES
> +	bool
> +	default y if HUGETLB_PAGE
> +	default n
> +
>  config VIRT_CPU_ACCOUNTING
>  	bool "Deterministic task and CPU time accounting"
>  	depends on PPC64
> Index: linux-cell/include/asm-powerpc/mmu.h
> ===================================================================
> --- linux-cell.orig/include/asm-powerpc/mmu.h	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/include/asm-powerpc/mmu.h	2007-02-19  
> 17:25:29.000000000 +1100
> @@ -355,15 +355,17 @@ typedef unsigned long mm_context_id_t;
>
>  typedef struct {
>  	mm_context_id_t id;
> -	u16 user_psize;			/* page size index */
> -	u16 sllp;			/* SLB entry page size encoding */
> -#ifdef CONFIG_HUGETLB_PAGE
> -	u16 low_htlb_areas, high_htlb_areas;
> +	u16 user_psize;		/* base page size index */
> +
> +#ifdef CONFIG_PPC_MM_SLICES
> +	u64 low_slices_psize;	/* SLB page size encodings */
> +	u64 high_slices_psize;  /* 4 bits per slice for now */
> +#else
> +	u16 sllp;		/* SLB page size encoding */
>  #endif
>  	unsigned long vdso_base;
>  } mm_context_t;
>
> -
>  static inline unsigned long vsid_scramble(unsigned long protovsid)
>  {
>  #if 0
> Index: linux-cell/include/asm-powerpc/paca.h
> ===================================================================
> --- linux-cell.orig/include/asm-powerpc/paca.h	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/include/asm-powerpc/paca.h	2007-02-19  
> 17:25:29.000000000 +1100
> @@ -82,8 +82,8 @@ struct paca_struct {
>
>  	mm_context_t context;
>  	u16 vmalloc_sllp;
> -	u16 slb_cache[SLB_CACHE_ENTRIES];
>  	u16 slb_cache_ptr;
> +	u16 slb_cache[SLB_CACHE_ENTRIES];
>
>  	/*
>  	 * then miscellaneous read-write fields
> Index: linux-cell/include/asm-powerpc/page_64.h
> ===================================================================
> --- linux-cell.orig/include/asm-powerpc/page_64.h	2007-02-19  
> 17:24:19.000000000 +1100
> +++ linux-cell/include/asm-powerpc/page_64.h	2007-02-19  
> 17:25:42.000000000 +1100
> @@ -88,58 +88,57 @@ extern unsigned int HPAGE_SHIFT;
>
>  #endif /* __ASSEMBLY__ */
>
> -#ifdef CONFIG_HUGETLB_PAGE
> +#ifdef CONFIG_PPC_MM_SLICES
> +
> +#define SLICE_LOW_SHIFT		28
> +#define SLICE_HIGH_SHIFT	40
> +
> +#define SLICE_LOW_TOP		(0x100000000ul)
> +#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
> +#define SLICE_NUM_HIGH		(PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
> +
> +#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
> +#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
> +
> +#ifndef __ASSEMBLY__
>
> -#define HTLB_AREA_SHIFT		40
> -#define HTLB_AREA_SIZE		(1UL << HTLB_AREA_SHIFT)
> -#define GET_HTLB_AREA(x)	((x) >> HTLB_AREA_SHIFT)
> -
> -#define LOW_ESID_MASK(addr, len)    \
> -	(((1U << (GET_ESID(min((addr)+(len)-1, 0x100000000UL))+1)) \
> -	  - (1U << GET_ESID(min((addr), 0x100000000UL)))) & 0xffff)
> -#define HTLB_AREA_MASK(addr, len)   (((1U << (GET_HTLB_AREA(addr+len-1)+1)) \
> -		                      - (1U << GET_HTLB_AREA(addr))) & 0xffff)
> +struct slice_mask {
> +	u16 low_slices;
> +	u16 high_slices;
> +};
> +
> +struct mm_struct;
> +
> +extern unsigned long slice_get_unmapped_area(unsigned long addr,
> +					     unsigned long len,
> +					     unsigned long flags,
> +					     unsigned int psize,
> +					     int topdown,
> +					     int use_cache);
> +
> +extern unsigned int get_slice_psize(struct mm_struct *mm,
> +				    unsigned long addr);
> +
> +extern void slice_init_context(struct mm_struct *mm, unsigned int  
> psize);
> +extern void slice_set_user_psize(struct mm_struct *mm, unsigned  
> int psize);
>
>  #define ARCH_HAS_HUGEPAGE_ONLY_RANGE
> +extern int is_hugepage_only_range(struct mm_struct *m,
> +				  unsigned long addr,
> +				  unsigned long len);
> +
> +#endif /* __ASSEMBLY__ */
> +#else
> +#define slice_init()
> +#endif /* CONFIG_PPC_MM_SLICES */
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +
>  #define ARCH_HAS_HUGETLB_FREE_PGD_RANGE
>  #define ARCH_HAS_PREPARE_HUGEPAGE_RANGE
>  #define ARCH_HAS_SETCLEAR_HUGE_PTE
> -
> -#define touches_hugepage_low_range(mm, addr, len) \
> -	(((addr) < 0x100000000UL) \
> -	 && (LOW_ESID_MASK((addr), (len)) & (mm)->context.low_htlb_areas))
> -#define touches_hugepage_high_range(mm, addr, len) \
> -	((((addr) + (len)) > 0x100000000UL) \
> -	  && (HTLB_AREA_MASK((addr), (len)) & (mm)->context.high_htlb_areas))
> -
> -#define __within_hugepage_low_range(addr, len, segmask) \
> -	( (((addr)+(len)) <= 0x100000000UL) \
> -	  && ((LOW_ESID_MASK((addr), (len)) | (segmask)) == (segmask)))
> -#define within_hugepage_low_range(addr, len) \
> -	__within_hugepage_low_range((addr), (len), \
> -				    current->mm->context.low_htlb_areas)
> -#define __within_hugepage_high_range(addr, len, zonemask) \
> -	( ((addr) >= 0x100000000UL) \
> -	  && ((HTLB_AREA_MASK((addr), (len)) | (zonemask)) == (zonemask)))
> -#define within_hugepage_high_range(addr, len) \
> -	__within_hugepage_high_range((addr), (len), \
> -				    current->mm->context.high_htlb_areas)
> -
> -#define is_hugepage_only_range(mm, addr, len) \
> -	(touches_hugepage_high_range((mm), (addr), (len)) || \
> -	  touches_hugepage_low_range((mm), (addr), (len)))
>  #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>
> -#define in_hugepage_area(context, addr) \
> -	(cpu_has_feature(CPU_FTR_16M_PAGE) && \
> -	 ( ( (addr) >= 0x100000000UL) \
> -	   ? ((1 << GET_HTLB_AREA(addr)) & (context).high_htlb_areas) \
> -	   : ((1 << GET_ESID(addr)) & (context).low_htlb_areas) ) )
> -
> -#else /* !CONFIG_HUGETLB_PAGE */
> -
> -#define in_hugepage_area(mm, addr)	0
> -
>  #endif /* !CONFIG_HUGETLB_PAGE */
>
>  #ifdef MODULE
> Index: linux-cell/arch/powerpc/mm/Makefile
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/Makefile	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/Makefile	2007-02-19  
> 17:25:29.000000000 +1100
> @@ -18,4 +18,5 @@ obj-$(CONFIG_40x)		+= 4xx_mmu.o
>  obj-$(CONFIG_44x)		+= 44x_mmu.o
>  obj-$(CONFIG_FSL_BOOKE)		+= fsl_booke_mmu.o
>  obj-$(CONFIG_NEED_MULTIPLE_NODES) += numa.o
> +obj-$(CONFIG_PPC_MM_SLICES)	+= slice.o
>  obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
> Index: linux-cell/arch/powerpc/mm/slice.c
> ===================================================================
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ linux-cell/arch/powerpc/mm/slice.c	2007-02-19  
> 17:36:09.000000000 +1100
> @@ -0,0 +1,625 @@
> +/*
> + * address space "slices" (meta-segments) support
> + *
> + * Copyright (C) 2007 Benjamin Herrenschmidt, IBM Corporation.
> + *
> + * Based on hugetlb implementation
> + *
> + * Copyright (C) 2003 David Gibson, IBM Corporation.
> + *
> + * This program is free software; you can redistribute it and/or  
> modify
> + * it under the terms of the GNU General Public License as  
> published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA   
> 02111-1307  USA
> + */
> +
> +#undef DEBUG
> +
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/pagemap.h>
> +#include <linux/err.h>
> +#include <linux/spinlock.h>
> +#include <asm/mman.h>
> +#include <asm/mmu.h>
> +
> +static spinlock_t slice_convert_lock = SPIN_LOCK_UNLOCKED;
> +
> +
> +#ifdef DEBUG
> +int _slice_debug = 1;
> +
> +static void slice_print_mask(const char *label, struct slice_mask  
> mask)
> +{
> +	char	*p, buf[16 + 3 + 16 + 1];
> +	int	i;
> +
> +	if (!_slice_debug)
> +		return;
> +	p = buf;
> +	for (i = 0; i < SLICE_NUM_LOW; i++)
> +		*(p++) = (mask.low_slices & (1 << i)) ? '1' : '0';
> +	*(p++) = ' ';
> +	*(p++) = '-';
> +	*(p++) = ' ';
> +	for (i = 0; i < SLICE_NUM_HIGH; i++)
> +		*(p++) = (mask.high_slices & (1 << i)) ? '1' : '0';
> +	*(p++) = 0;
> +
> +	printk(KERN_DEBUG "%s:%s\n", label, buf);
> +}
> +
> +#define slice_dbg(fmt...) do { if (_slice_debug) pr_debug(fmt); }  
> while(0)
> +
> +#else
> +
> +static void slice_print_mask(const char *label, struct slice_mask  
> mask) {}
> +#define slice_dbg(fmt...)
> +
> +#endif
> +
> +static struct slice_mask slice_range_to_mask(unsigned long start,
> +					     unsigned long len)
> +{
> +	unsigned long end = start + len - 1;
> +	struct slice_mask ret = { 0, 0 };
> +
> +	if (start < SLICE_LOW_TOP) {
> +		unsigned long mend = min(end, SLICE_LOW_TOP);
> +		unsigned long mstart = min(start, SLICE_LOW_TOP);
> +
> +		ret.low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
> +			- (1u << GET_LOW_SLICE_INDEX(mstart));
> +	}
> +
> +	if ((start + len) > SLICE_LOW_TOP)
> +		ret.high_slices = (1u << (GET_HIGH_SLICE_INDEX(end) + 1))
> +			- (1u << GET_HIGH_SLICE_INDEX(start));
> +
> +	return ret;
> +}
> +
> +static int slice_area_is_free(struct mm_struct *mm, unsigned long  
> addr,
> +			      unsigned long len)
> +{
> +	struct vm_area_struct *vma;
> +
> +	if ((mm->task_size - len) < addr)
> +		return 0;
> +	vma = find_vma(mm, addr);
> +	return (!vma || (addr + len) <= vma->vm_start);
> +}
> +
> +static int slice_low_has_vma(struct mm_struct *mm, unsigned long  
> slice)
> +{
> +	return !slice_area_is_free(mm, slice << SLICE_LOW_SHIFT,
> +				   1ul << SLICE_LOW_SHIFT);
> +}
> +
> +static int slice_high_has_vma(struct mm_struct *mm, unsigned long  
> slice)
> +{
> +	unsigned long start = slice << SLICE_HIGH_SHIFT;
> +	unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
> +
> +	/* Hack, so that each address is controlled by exactly one
> +	 * of the high or low area bitmaps, the first high area starts
> +	 * at 4GB, not 0 */
> +	if (start == 0)
> +		start = SLICE_LOW_TOP;
> +
> +	return !slice_area_is_free(mm, start, end - start);
> +}
> +
> +static struct slice_mask slice_mask_for_free(struct mm_struct *mm)
> +{
> +	struct slice_mask ret = { 0, 0 };
> +	unsigned long i;
> +
> +	for (i = 0; i < SLICE_NUM_LOW; i++)
> +		if (!slice_low_has_vma(mm, i))
> +			ret.low_slices |= 1u << i;
> +
> +	if (mm->task_size <= SLICE_LOW_TOP)
> +		return ret;
> +
> +	for (i = 0; i < SLICE_NUM_HIGH; i++)
> +		if (!slice_high_has_vma(mm, i))
> +			ret.high_slices |= 1u << i;
> +
> +	return ret;
> +}
> +
> +static struct slice_mask slice_mask_for_size(struct mm_struct *mm,  
> int psize)
> +{
> +	struct slice_mask ret = { 0, 0 };
> +	unsigned long i;
> +	u64 psizes;
> +
> +	psizes = mm->context.low_slices_psize;
> +	for (i = 0; i < SLICE_NUM_LOW; i++)
> +		if (((psizes >> (i * 4)) & 0xf) == psize)
> +			ret.low_slices |= 1u << i;
> +
> +	psizes = mm->context.high_slices_psize;
> +	for (i = 0; i < SLICE_NUM_HIGH; i++)
> +		if (((psizes >> (i * 4)) & 0xf) == psize)
> +			ret.high_slices |= 1u << i;
> +
> +	return ret;
> +}
> +
> +static int slice_check_fit(struct slice_mask mask, struct  
> slice_mask available)
> +{
> +	return (mask.low_slices & available.low_slices) ==  
> mask.low_slices &&
> +		(mask.high_slices & available.high_slices) == mask.high_slices;
> +}
> +
> +static void slice_flush_segments(void *parm)
> +{
> +	struct mm_struct *mm = parm;
> +	unsigned long flags;
> +
> +	if (mm != current->active_mm)
> +		return;
> +
> +	/* update the paca copy of the context struct */
> +	get_paca()->context = current->active_mm->context;
> +
> +	local_irq_save(flags);
> +	slb_flush_and_rebolt();
> +	local_irq_restore(flags);
> +}
> +
> +static void slice_convert(struct mm_struct *mm, struct slice_mask  
> mask, int psize)
> +{
> +	/* Write the new slice psize bits */
> +	u64 lpsizes, hpsizes;
> +	unsigned long i, flags;
> +
> +	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
> +	slice_print_mask(" mask", mask);
> +
> +	/* We need to use a spinlock here to protect against
> +	 * concurrent 64k -> 4k demotion ...
> +	 */
> +	spin_lock_irqsave(&slice_convert_lock, flags);
> +
> +	lpsizes = mm->context.low_slices_psize;
> +	for (i = 0; i < SLICE_NUM_LOW; i++)
> +		if (mask.low_slices & (1u << i))
> +			lpsizes = (lpsizes & ~(0xful << (i * 4))) |
> +				(((unsigned long)psize) << (i * 4));
> +
> +	hpsizes = mm->context.high_slices_psize;
> +	for (i = 0; i < SLICE_NUM_HIGH; i++)
> +		if (mask.high_slices & (1u << i))
> +			hpsizes = (hpsizes & ~(0xful << (i * 4))) |
> +				(((unsigned long)psize) << (i * 4));
> +
> +	mm->context.low_slices_psize = lpsizes;
> +	mm->context.high_slices_psize = hpsizes;
> +
> +	slice_dbg(" lsps=%lx, hsps=%lx\n",
> +		  mm->context.low_slices_psize,
> +		  mm->context.high_slices_psize);
> +
> +	spin_unlock_irqrestore(&slice_convert_lock, flags);
> +	mb();
> +	/* XXX this is sub-optimal but will do for now */
> +	on_each_cpu(slice_flush_segments, mm, 0, 1);
> +}
> +
> +static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
> +					      unsigned long len,
> +					      struct slice_mask available,
> +					      int psize, int use_cache)
> +{
> +	struct vm_area_struct *vma;
> +	unsigned long start_addr, addr;
> +	struct slice_mask mask;
> +	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
> +
> +	if (use_cache) {
> +		if (len <= mm->cached_hole_size) {
> +			start_addr = addr = TASK_UNMAPPED_BASE;
> +			mm->cached_hole_size = 0;
> +		} else
> +			start_addr = addr = mm->free_area_cache;
> +	} else
> +		start_addr = addr = TASK_UNMAPPED_BASE;
> +
> +full_search:
> +	for (;;) {
> +		addr = _ALIGN_UP(addr, 1ul << pshift);
> +		if ((TASK_SIZE - len) < addr)
> +			break;
> +		vma = find_vma(mm, addr);
> +		BUG_ON(vma && (addr >= vma->vm_end));
> +
> +		mask = slice_range_to_mask(addr, len);
> +		if (!slice_check_fit(mask, available)) {
> +			if (addr < SLICE_LOW_TOP)
> +				addr = _ALIGN_UP(addr + 1,  1ul << SLICE_LOW_SHIFT);
> +			else
> +				addr = _ALIGN_UP(addr + 1,  1ul << SLICE_HIGH_SHIFT);
> +			continue;
> +		}
> +		if (!vma || addr + len <= vma->vm_start) {
> +			/*
> +			 * Remember the place where we stopped the search:
> +			 */
> +			if (use_cache)
> +				mm->free_area_cache = addr + len;
> +			return addr;
> +		}
> +		if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
> +		        mm->cached_hole_size = vma->vm_start - addr;
> +		addr = vma->vm_end;
> +	}
> +
> +	/* Make sure we didn't miss any holes */
> +	if (use_cache && start_addr != TASK_UNMAPPED_BASE) {
> +		start_addr = addr = TASK_UNMAPPED_BASE;
> +		mm->cached_hole_size = 0;
> +		goto full_search;
> +	}
> +	return -ENOMEM;
> +}
> +
> +static unsigned long slice_find_area_topdown(struct mm_struct *mm,
> +					     unsigned long len,
> +					     struct slice_mask available,
> +					     int psize, int use_cache)
> +{
> +	struct vm_area_struct *vma;
> +	unsigned long addr;
> +	struct slice_mask mask;
> +	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
> +
> +	/* check if free_area_cache is useful for us */
> +	if (use_cache) {
> +		if (len <= mm->cached_hole_size) {
> +			mm->cached_hole_size = 0;
> +			mm->free_area_cache = mm->mmap_base;
> +		}
> +
> +		/* either no address requested or can't fit in requested
> +		 * address hole
> +		 */
> +		addr = mm->free_area_cache;
> +
> +		/* make sure it can fit in the remaining address space */
> +		if (addr > len) {
> +			addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
> +			mask = slice_range_to_mask(addr, len);
> +			if (slice_check_fit(mask, available) &&
> +			    slice_area_is_free(mm, addr, len))
> +					/* remember the address as a hint for
> +					 * next time
> +					 */
> +					return (mm->free_area_cache = addr);
> +		}
> +	}
> +
> +	addr = mm->mmap_base;
> +	while (addr > len) {
> +		/* Go down by chunk size */
> +		addr = _ALIGN_DOWN(addr - len, 1ul << pshift);
> +
> +		/* Check for hit with different page size */
> +		mask = slice_range_to_mask(addr, len);
> +		if (!slice_check_fit(mask, available)) {
> +			if (addr < SLICE_LOW_TOP)
> +				addr = _ALIGN_DOWN(addr, 1ul << SLICE_LOW_SHIFT);
> +			else if (addr < (1ul << SLICE_HIGH_SHIFT))
> +				addr = SLICE_LOW_TOP;
> +			else
> +				addr = _ALIGN_DOWN(addr, 1ul << SLICE_HIGH_SHIFT);
> +			continue;
> +		}
> +
> +		/*
> +		 * Lookup failure means no vma is above this address,
> +		 * else if new region fits below vma->vm_start,
> +		 * return with success:
> +		 */
> +		vma = find_vma(mm, addr);
> +		if (!vma || (addr + len) <= vma->vm_start) {
> +			/* remember the address as a hint for next time */
> +			if (use_cache)
> +				mm->free_area_cache = addr;
> +			return addr;
> +		}
> +
> + 		/* remember the largest hole we saw so far */
> + 		if (use_cache && (addr + mm->cached_hole_size) < vma->vm_start)
> + 		        mm->cached_hole_size = vma->vm_start - addr;
> +
> +		/* try just below the current vma->vm_start */
> +		addr = vma->vm_start;
> +	}
> +
> +	/*
> +	 * A failed mmap() very likely causes application failure,
> +	 * so fall back to the bottom-up function here. This scenario
> +	 * can happen with large stack limits and large mmap()
> +	 * allocations.
> +	 */
> +	addr = slice_find_area_bottomup(mm, len, available, psize, 0);
> +
> +	/*
> +	 * Restore the topdown base:
> +	 */
> +	if (use_cache) {
> +		mm->free_area_cache = mm->mmap_base;
> +		mm->cached_hole_size = ~0UL;
> +	}
> +
> +	return addr;
> +}
> +
> +
> +static unsigned long slice_find_area(struct mm_struct *mm,  
> unsigned long len,
> +				     struct slice_mask mask, int psize,
> +				     int topdown, int use_cache)
> +{
> +	if (topdown)
> +		return slice_find_area_topdown(mm, len, mask, psize, use_cache);
> +	else
> +		return slice_find_area_bottomup(mm, len, mask, psize, use_cache);
> +}
> +
> +unsigned long slice_get_unmapped_area(unsigned long addr, unsigned  
> long len,
> +				      unsigned long flags, unsigned int psize,
> +				      int topdown, int use_cache)
> +{
> +	struct slice_mask mask;
> +	struct slice_mask good_mask;
> +	struct slice_mask potential_mask = {0,0} /* silence stupid  
> warning */;
> +	int pmask_set = 0;
> +	int fixed = (flags & MAP_FIXED);
> +	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
> +	struct mm_struct *mm = current->mm;
> +
> +	/* Sanity checks */
> +	BUG_ON(mm->task_size == 0);
> +
> +	slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm,  
> psize);
> +	slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d, use_cache=%d 
> \n",
> +		  addr, len, flags, topdown, use_cache);
> +
> +	if (len > mm->task_size)
> +		return -ENOMEM;
> +	if (fixed && (addr & ((1ul << pshift) - 1)))
> +		return -EINVAL;
> +	if (fixed && addr > (mm->task_size - len))
> +		return -EINVAL;
> +
> +	/* If hint, make sure it matches our alignment restrictions */
> +	if (!fixed && addr) {
> +		addr = _ALIGN_UP(addr, 1ul << pshift);
> +		slice_dbg(" aligned addr=%lx\n", addr);
> +	}
> +
> +	/* First make up a "good" mask of slices that have the right size
> +	 * already
> +	 */
> +	good_mask = slice_mask_for_size(mm, psize);
> +	slice_print_mask(" good_mask", good_mask);
> +
> +	/* First check hint if it's valid or if we have MAP_FIXED */
> +	if ((addr != 0 || fixed) && (mm->task_size - len) >= addr) {
> +
> +		/* Don't bother with hint if it overlaps a VMA */
> +		if (!fixed && !slice_area_is_free(mm, addr, len))
> +			goto search;
> +
> +		/* Build a mask for the requested range */
> +		mask = slice_range_to_mask(addr, len);
> +		slice_print_mask(" mask", mask);
> +
> +		/* Check if we fit in the good mask. If we do, we just return,
> +		 * nothing else to do
> +		 */
> +		if (slice_check_fit(mask, good_mask)) {
> +			slice_dbg(" fits good !\n");
> +			return addr;
> +		}
> +
> +		/* We don't fit in the good mask, check what other slices are
> +		 * empty and thus can be converted
> +		 */
> +		potential_mask = slice_mask_for_free(mm);
> +		potential_mask.low_slices |= good_mask.low_slices;
> +		potential_mask.high_slices |= good_mask.high_slices;
> +		pmask_set = 1;
> +		slice_print_mask(" potential", potential_mask);
> +		if (slice_check_fit(mask, potential_mask)) {
> +			slice_dbg(" fits potential !\n");
> +			goto convert;
> +		}
> +	}
> +
> +	/* If we have MAP_FIXED and failed the above step, then error out */
> +	if (fixed)
> +		return -EBUSY;
> +
> + search:
> +	slice_dbg(" search...\n");
> +
> +	/* Now let's see if we can find something in the existing slices
> +	 * for that size
> +	 */
> +	addr = slice_find_area(mm, len, good_mask, psize, topdown,  
> use_cache);
> +	if (addr != -ENOMEM) {
> +		/* Found within the good mask, we don't have to setup,
> +		 * we thus return directly
> +		 */
> +		slice_dbg(" found area at 0x%lx\n", addr);
> +		return addr;
> +	}
> +
> +	/* Won't fit, check what can be converted */
> +	if (!pmask_set) {
> +		potential_mask = slice_mask_for_free(mm);
> +		potential_mask.low_slices |= good_mask.low_slices;
> +		potential_mask.high_slices |= good_mask.high_slices;
> +		pmask_set = 1;
> +		slice_print_mask(" potential", potential_mask);
> +	}
> +
> +	/* Now let's see if we can find something in the existing slices
> +	 * for that size
> +	 */
> +	addr = slice_find_area(mm, len, potential_mask, psize, topdown,
> +			       use_cache);
> +	if (addr == -ENOMEM)
> +		return -ENOMEM;
> +
> +	mask = slice_range_to_mask(addr, len);
> +	slice_dbg(" found potential area at 0x%lx\n", addr);
> +	slice_print_mask(" mask", mask);
> +
> + convert:
> +	slice_convert(mm, mask, psize);
> +	return addr;
> +
> +}
> +
> +unsigned long arch_get_unmapped_area(struct file *filp,
> +				     unsigned long addr,
> +				     unsigned long len,
> +				     unsigned long pgoff,
> +				     unsigned long flags)
> +{
> +	return slice_get_unmapped_area(addr, len, flags,
> +				       current->mm->context.user_psize,
> +				       0, 1);
> +}
> +
> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
> +					     const unsigned long addr0,
> +					     const unsigned long len,
> +					     const unsigned long pgoff,
> +					     const unsigned long flags)
> +{
> +	return slice_get_unmapped_area(addr0, len, flags,
> +				       current->mm->context.user_psize,
> +				       1, 1);
> +}
> +
> +unsigned int get_slice_psize(struct mm_struct *mm, unsigned long  
> addr)
> +{
> +	u64 psizes;
> +	int index;
> +
> +	if (addr < SLICE_LOW_TOP) {
> +		psizes = mm->context.low_slices_psize;
> +		index = GET_LOW_SLICE_INDEX(addr);
> +	} else {
> +		psizes = mm->context.high_slices_psize;
> +		index = GET_HIGH_SLICE_INDEX(addr);
> +	}
> +
> +	return (psizes >> (index * 4)) & 0xf;
> +}
> +
> +/*
> + * This is called by hash_page when it needs to do a lazy  
> conversion of
> + * an address space from real 64K pages to combo 4K pages (typically
> + * when hitting a non cacheable mapping on a processor or hypervisor
> + * that won't allow them for 64K pages).
> + *
> + * This is also called in init_new_context() to change back the user
> + * psize from whatever the parent context had it set to
> + *
> + * This function will only change the content of the {low,high}_slice_psize
> + * masks, it will not flush SLBs as this shall be handled lazily  
> by the
> + * caller
> + */
> +void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
> +{
> +	unsigned long flags, lpsizes, hpsizes;
> +	unsigned int old_psize;
> +	int i;
> +
> +	slice_dbg("slice_set_user_psize(mm=%p, psize=%d)\n", mm, psize);
> +
> +	spin_lock_irqsave(&slice_convert_lock, flags);
> +
> +	old_psize = mm->context.user_psize;
> +	slice_dbg(" old_psize=%d\n", old_psize);
> +	if (old_psize == psize)
> +		goto bail;
> +
> +	mm->context.user_psize = psize;
> +	wmb();
> +
> +	lpsizes = mm->context.low_slices_psize;
> +	for (i = 0; i < SLICE_NUM_LOW; i++)
> +		if (((lpsizes >> (i * 4)) & 0xf) == old_psize)
> +			lpsizes = (lpsizes & ~(0xful << (i * 4))) |
> +				(((unsigned long)psize) << (i * 4));
> +
> +	hpsizes = mm->context.high_slices_psize;
> +	for (i = 0; i < SLICE_NUM_HIGH; i++)
> +		if (((hpsizes >> (i * 4)) & 0xf) == old_psize)
> +			hpsizes = (hpsizes & ~(0xful << (i * 4))) |
> +				(((unsigned long)psize) << (i * 4));
> +
> +	mm->context.low_slices_psize = lpsizes;
> +	mm->context.high_slices_psize = hpsizes;
> +
> +	slice_dbg(" lsps=%lx, hsps=%lx\n",
> +		  mm->context.low_slices_psize,
> +		  mm->context.high_slices_psize);
> +
> + bail:
> +	spin_unlock_irqrestore(&slice_convert_lock, flags);
> +}
> +
> +/*
> + * is_hugepage_only_range() is used by generic code to verify whether
> + * a normal mmap mapping (non hugetlbfs) is valid on a given area.
> + *
> + * until the generic code provides a more generic hook and/or starts
> + * calling arch get_unmapped_area for MAP_FIXED (which our  
> implementation
> + * here knows how to deal with), we hijack it to keep standard  
> mappings
> + * away from us.
> + *
> + * because of that generic code limitation, MAP_FIXED mapping cannot
> + * "convert" back a slice with no VMAs to the standard page size,  
> only
> + * get_unmapped_area() can. It would be possible to fix it here but I
> + * prefer working on fixing the generic code instead.
> + *
> + * WARNING: This will not work if hugetlbfs isn't enabled since the
> + * generic code will redefine that function as 0 in that case. This is ok
> + * for now as we only use slices with hugetlbfs enabled. This should
> + * be fixed as the generic code gets fixed.
> + */
> +int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
> +			   unsigned long len)
> +{
> +	struct slice_mask mask, available;
> +
> +	mask = slice_range_to_mask(addr, len);
> +	available = slice_mask_for_size(mm, mm->context.user_psize);
> +
> +#if 0 /* too verbose */
> +	slice_dbg("is_hugepage_only_range(mm=%p, addr=%lx, len=%lx)\n",
> +		 mm, addr, len);
> +	slice_print_mask(" mask", mask);
> +	slice_print_mask(" available", available);
> +#endif
> +	return !slice_check_fit(mask, available);
> +}
> +
> Index: linux-cell/arch/powerpc/kernel/asm-offsets.c
> ===================================================================
> --- linux-cell.orig/arch/powerpc/kernel/asm-offsets.c	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/kernel/asm-offsets.c	2007-02-19  
> 17:25:29.000000000 +1100
> @@ -123,12 +123,18 @@ int main(void)
>  	DEFINE(PACASLBCACHE, offsetof(struct paca_struct, slb_cache));
>  	DEFINE(PACASLBCACHEPTR, offsetof(struct paca_struct,  
> slb_cache_ptr));
>  	DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
> -	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
>  	DEFINE(PACAVMALLOCSLLP, offsetof(struct paca_struct, vmalloc_sllp));
> -#ifdef CONFIG_HUGETLB_PAGE
> -	DEFINE(PACALOWHTLBAREAS, offsetof(struct paca_struct,  
> context.low_htlb_areas));
> -	DEFINE(PACAHIGHHTLBAREAS, offsetof(struct paca_struct,  
> context.high_htlb_areas));
> -#endif /* CONFIG_HUGETLB_PAGE */
> +#ifdef CONFIG_PPC_MM_SLICES
> +	DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
> +					    context.low_slices_psize));
> +	DEFINE(PACAHIGHSLICEPSIZE, offsetof(struct paca_struct,
> +					    context.high_slices_psize));
> +	DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
> +	DEFINE(MMUPSIZESLLP, offsetof(struct mmu_psize_def, sllp));
> +#else
> +	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
> +
> +#endif /* CONFIG_PPC_MM_SLICES */
>  	DEFINE(PACA_EXGEN, offsetof(struct paca_struct, exgen));
>  	DEFINE(PACA_EXMC, offsetof(struct paca_struct, exmc));
>  	DEFINE(PACA_EXSLB, offsetof(struct paca_struct, exslb));
> Index: linux-cell/arch/powerpc/mm/hugetlbpage.c
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/hugetlbpage.c	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/hugetlbpage.c	2007-02-19  
> 17:36:03.000000000 +1100
> @@ -91,7 +91,7 @@ pte_t *huge_pte_offset(struct mm_struct
>  	pgd_t *pg;
>  	pud_t *pu;
>
> -	BUG_ON(! in_hugepage_area(mm->context, addr));
> +	BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
>
>  	addr &= HPAGE_MASK;
>
> @@ -119,7 +119,7 @@ pte_t *huge_pte_alloc(struct mm_struct *
>  	pud_t *pu;
>  	hugepd_t *hpdp = NULL;
>
> -	BUG_ON(! in_hugepage_area(mm->context, addr));
> +	BUG_ON(get_slice_psize(mm, addr) != mmu_huge_psize);
>
>  	addr &= HPAGE_MASK;
>
> @@ -302,7 +302,7 @@ void hugetlb_free_pgd_range(struct mmu_g
>  	start = addr;
>  	pgd = pgd_offset((*tlb)->mm, addr);
>  	do {
> -		BUG_ON(! in_hugepage_area((*tlb)->mm->context, addr));
> +		BUG_ON(get_slice_psize((*tlb)->mm, addr) != mmu_huge_psize);
>  		next = pgd_addr_end(addr, end);
>  		if (pgd_none_or_clear_bad(pgd))
>  			continue;
> @@ -342,186 +342,17 @@ struct slb_flush_info {
>  	u16 newareas;
>  };
>
> -static void flush_low_segments(void *parm)
> -{
> -	struct slb_flush_info *fi = parm;
> -	unsigned long i;
> -
> -	BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_LOW_AREAS);
> -
> -	if (current->active_mm != fi->mm)
> -		return;
> -
> -	/* Only need to do anything if this CPU is working in the same
> -	 * mm as the one which has changed */
> -
> -	/* update the paca copy of the context struct */
> -	get_paca()->context = current->active_mm->context;
> -
> -	asm volatile("isync" : : : "memory");
> -	for (i = 0; i < NUM_LOW_AREAS; i++) {
> -		if (! (fi->newareas & (1U << i)))
> -			continue;
> -		asm volatile("slbie %0"
> -			     : : "r" ((i << SID_SHIFT) | SLBIE_C));
> -	}
> -	asm volatile("isync" : : : "memory");
> -}
> -
> -static void flush_high_segments(void *parm)
> -{
> -	struct slb_flush_info *fi = parm;
> -	unsigned long i, j;
> -
> -
> -	BUILD_BUG_ON((sizeof(fi->newareas)*8) != NUM_HIGH_AREAS);
> -
> -	if (current->active_mm != fi->mm)
> -		return;
> -
> -	/* Only need to do anything if this CPU is working in the same
> -	 * mm as the one which has changed */
> -
> -	/* update the paca copy of the context struct */
> -	get_paca()->context = current->active_mm->context;
> -
> -	asm volatile("isync" : : : "memory");
> -	for (i = 0; i < NUM_HIGH_AREAS; i++) {
> -		if (! (fi->newareas & (1U << i)))
> -			continue;
> -		for (j = 0; j < (1UL << (HTLB_AREA_SHIFT-SID_SHIFT)); j++)
> -			asm volatile("slbie %0"
> -				     :: "r" (((i << HTLB_AREA_SHIFT)
> -					      + (j << SID_SHIFT)) | SLBIE_C));
> -	}
> -	asm volatile("isync" : : : "memory");
> -}
> -
> -static int prepare_low_area_for_htlb(struct mm_struct *mm,  
> unsigned long area)
> -{
> -	unsigned long start = area << SID_SHIFT;
> -	unsigned long end = (area+1) << SID_SHIFT;
> -	struct vm_area_struct *vma;
> -
> -	BUG_ON(area >= NUM_LOW_AREAS);
> -
> -	/* Check no VMAs are in the region */
> -	vma = find_vma(mm, start);
> -	if (vma && (vma->vm_start < end))
> -		return -EBUSY;
> -
> -	return 0;
> -}
> -
> -static int prepare_high_area_for_htlb(struct mm_struct *mm,  
> unsigned long area)
> -{
> -	unsigned long start = area << HTLB_AREA_SHIFT;
> -	unsigned long end = (area+1) << HTLB_AREA_SHIFT;
> -	struct vm_area_struct *vma;
> -
> -	BUG_ON(area >= NUM_HIGH_AREAS);
> -
> -	/* Hack, so that each addresses is controlled by exactly one
> -	 * of the high or low area bitmaps, the first high area starts
> -	 * at 4GB, not 0 */
> -	if (start == 0)
> -		start = 0x100000000UL;
> -
> -	/* Check no VMAs are in the region */
> -	vma = find_vma(mm, start);
> -	if (vma && (vma->vm_start < end))
> -		return -EBUSY;
> -
> -	return 0;
> -}
> -
> -static int open_low_hpage_areas(struct mm_struct *mm, u16 newareas)
> -{
> -	unsigned long i;
> -	struct slb_flush_info fi;
> -
> -	BUILD_BUG_ON((sizeof(newareas)*8) != NUM_LOW_AREAS);
> -	BUILD_BUG_ON((sizeof(mm->context.low_htlb_areas)*8) !=  
> NUM_LOW_AREAS);
> -
> -	newareas &= ~(mm->context.low_htlb_areas);
> -	if (! newareas)
> -		return 0; /* The segments we want are already open */
> -
> -	for (i = 0; i < NUM_LOW_AREAS; i++)
> -		if ((1 << i) & newareas)
> -			if (prepare_low_area_for_htlb(mm, i) != 0)
> -				return -EBUSY;
> -
> -	mm->context.low_htlb_areas |= newareas;
> -
> -	/* the context change must make it to memory before the flush,
> -	 * so that further SLB misses do the right thing. */
> -	mb();
> -
> -	fi.mm = mm;
> -	fi.newareas = newareas;
> -	on_each_cpu(flush_low_segments, &fi, 0, 1);
> -
> -	return 0;
> -}
> -
> -static int open_high_hpage_areas(struct mm_struct *mm, u16 newareas)
> -{
> -	struct slb_flush_info fi;
> -	unsigned long i;
> -
> -	BUILD_BUG_ON((sizeof(newareas)*8) != NUM_HIGH_AREAS);
> -	BUILD_BUG_ON((sizeof(mm->context.high_htlb_areas)*8)
> -		     != NUM_HIGH_AREAS);
> -
> -	newareas &= ~(mm->context.high_htlb_areas);
> -	if (! newareas)
> -		return 0; /* The areas we want are already open */
> -
> -	for (i = 0; i < NUM_HIGH_AREAS; i++)
> -		if ((1 << i) & newareas)
> -			if (prepare_high_area_for_htlb(mm, i) != 0)
> -				return -EBUSY;
> -
> -	mm->context.high_htlb_areas |= newareas;
> -
> -	/* the context change must make it to memory before the flush,
> -	 * so that further SLB misses do the right thing. */
> -	mb();
> -
> -	fi.mm = mm;
> -	fi.newareas = newareas;
> -	on_each_cpu(flush_high_segments, &fi, 0, 1);
> -
> -	return 0;
> -}
>
>  int prepare_hugepage_range(unsigned long addr, unsigned long len,  
> pgoff_t pgoff)
>  {
> -	int err = 0;
> +	unsigned long gua_addr;
>
> -	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
> -		return -EINVAL;
> -	if (len & ~HPAGE_MASK)
> -		return -EINVAL;
> -	if (addr & ~HPAGE_MASK)
> -		return -EINVAL;
> -
> -	if (addr < 0x100000000UL)
> -		err = open_low_hpage_areas(current->mm,
> -					  LOW_ESID_MASK(addr, len));
> -	if ((addr + len) > 0x100000000UL)
> -		err = open_high_hpage_areas(current->mm,
> -					    HTLB_AREA_MASK(addr, len));
> -	if (err) {
> -		printk(KERN_DEBUG "prepare_hugepage_range(%lx, %lx)"
> -		       " failed (lowmask: 0x%04hx, highmask: 0x%04hx)\n",
> -		       addr, len,
> -		       LOW_ESID_MASK(addr, len), HTLB_AREA_MASK(addr, len));
> -		return err;
> -	}
> +	printk("prepare_hugepage_range(addr=0x%lx, len=0x%lx)\n", addr, len);
>
> -	return 0;
> +	/* This is only useful for MAP_FIXED so we turn it into that */
> +	gua_addr = slice_get_unmapped_area(addr, len, MAP_FIXED,
> +					   mmu_huge_psize, 1, 0);
> +	return gua_addr == addr ? 0 : -EINVAL;
>  }
>
>  struct page *
> @@ -530,7 +361,7 @@ follow_huge_addr(struct mm_struct *mm, u
>  	pte_t *ptep;
>  	struct page *page;
>
> -	if (! in_hugepage_area(mm->context, address))
> +	if (get_slice_psize(mm, address) != mmu_huge_psize)
>  		return ERR_PTR(-EINVAL);
>
>  	ptep = huge_pte_offset(mm, address);
> @@ -554,338 +385,12 @@ follow_huge_pmd(struct mm_struct *mm, un
>  	return NULL;
>  }
>
> -/* Because we have an exclusive hugepage region which lies within the
> - * normal user address space, we have to take special measures to  
> make
> - * non-huge mmap()s evade the hugepage reserved regions. */
> -unsigned long arch_get_unmapped_area(struct file *filp, unsigned  
> long addr,
> -				     unsigned long len, unsigned long pgoff,
> -				     unsigned long flags)
> -{
> -	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> -	unsigned long start_addr;
> -
> -	if (len > TASK_SIZE)
> -		return -ENOMEM;
> -
> -	if (addr) {
> -		addr = PAGE_ALIGN(addr);
> -		vma = find_vma(mm, addr);
> -		if (((TASK_SIZE - len) >= addr)
> -		    && (!vma || (addr+len) <= vma->vm_start)
> -		    && !is_hugepage_only_range(mm, addr,len))
> -			return addr;
> -	}
> -	if (len > mm->cached_hole_size) {
> -	        start_addr = addr = mm->free_area_cache;
> -	} else {
> -	        start_addr = addr = TASK_UNMAPPED_BASE;
> -	        mm->cached_hole_size = 0;
> -	}
> -
> -full_search:
> -	vma = find_vma(mm, addr);
> -	while (TASK_SIZE - len >= addr) {
> -		BUG_ON(vma && (addr >= vma->vm_end));
> -
> -		if (touches_hugepage_low_range(mm, addr, len)) {
> -			addr = ALIGN(addr+1, 1<<SID_SHIFT);
> -			vma = find_vma(mm, addr);
> -			continue;
> -		}
> -		if (touches_hugepage_high_range(mm, addr, len)) {
> -			addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
> -			vma = find_vma(mm, addr);
> -			continue;
> -		}
> -		if (!vma || addr + len <= vma->vm_start) {
> -			/*
> -			 * Remember the place where we stopped the search:
> -			 */
> -			mm->free_area_cache = addr + len;
> -			return addr;
> -		}
> -		if (addr + mm->cached_hole_size < vma->vm_start)
> -		        mm->cached_hole_size = vma->vm_start - addr;
> -		addr = vma->vm_end;
> -		vma = vma->vm_next;
> -	}
> -
> -	/* Make sure we didn't miss any holes */
> -	if (start_addr != TASK_UNMAPPED_BASE) {
> -		start_addr = addr = TASK_UNMAPPED_BASE;
> -		mm->cached_hole_size = 0;
> -		goto full_search;
> -	}
> -	return -ENOMEM;
> -}
> -
> -/*
> - * This mmap-allocator allocates new areas top-down from below the
> - * stack's low limit (the base):
> - *
> - * Because we have an exclusive hugepage region which lies within the
> - * normal user address space, we have to take special measures to  
> make
> - * non-huge mmap()s evade the hugepage reserved regions.
> - */
> -unsigned long
> -arch_get_unmapped_area_topdown(struct file *filp, const unsigned  
> long addr0,
> -			  const unsigned long len, const unsigned long pgoff,
> -			  const unsigned long flags)
> -{
> -	struct vm_area_struct *vma, *prev_vma;
> -	struct mm_struct *mm = current->mm;
> -	unsigned long base = mm->mmap_base, addr = addr0;
> -	unsigned long largest_hole = mm->cached_hole_size;
> -	int first_time = 1;
> -
> -	/* requested length too big for entire address space */
> -	if (len > TASK_SIZE)
> -		return -ENOMEM;
> -
> -	/* dont allow allocations above current base */
> -	if (mm->free_area_cache > base)
> -		mm->free_area_cache = base;
> -
> -	/* requesting a specific address */
> -	if (addr) {
> -		addr = PAGE_ALIGN(addr);
> -		vma = find_vma(mm, addr);
> -		if (TASK_SIZE - len >= addr &&
> -				(!vma || addr + len <= vma->vm_start)
> -				&& !is_hugepage_only_range(mm, addr,len))
> -			return addr;
> -	}
> -
> -	if (len <= largest_hole) {
> -	        largest_hole = 0;
> -		mm->free_area_cache = base;
> -	}
> -try_again:
> -	/* make sure it can fit in the remaining address space */
> -	if (mm->free_area_cache < len)
> -		goto fail;
> -
> -	/* either no address requested or cant fit in requested address  
> hole */
> -	addr = (mm->free_area_cache - len) & PAGE_MASK;
> -	do {
> -hugepage_recheck:
> -		if (touches_hugepage_low_range(mm, addr, len)) {
> -			addr = (addr & ((~0) << SID_SHIFT)) - len;
> -			goto hugepage_recheck;
> -		} else if (touches_hugepage_high_range(mm, addr, len)) {
> -			addr = (addr & ((~0UL) << HTLB_AREA_SHIFT)) - len;
> -			goto hugepage_recheck;
> -		}
> -
> -		/*
> -		 * Lookup failure means no vma is above this address,
> -		 * i.e. return with success:
> -		 */
> - 	 	if (!(vma = find_vma_prev(mm, addr, &prev_vma)))
> -			return addr;
> -
> -		/*
> -		 * new region fits between prev_vma->vm_end and
> -		 * vma->vm_start, use it:
> -		 */
> -		if (addr+len <= vma->vm_start &&
> -		          (!prev_vma || (addr >= prev_vma->vm_end))) {
> -			/* remember the address as a hint for next time */
> -		        mm->cached_hole_size = largest_hole;
> -		        return (mm->free_area_cache = addr);
> -		} else {
> -			/* pull free_area_cache down to the first hole */
> -		        if (mm->free_area_cache == vma->vm_end) {
> -				mm->free_area_cache = vma->vm_start;
> -				mm->cached_hole_size = largest_hole;
> -			}
> -		}
> -
> -		/* remember the largest hole we saw so far */
> -		if (addr + largest_hole < vma->vm_start)
> -		        largest_hole = vma->vm_start - addr;
> -
> -		/* try just below the current vma->vm_start */
> -		addr = vma->vm_start-len;
> -	} while (len <= vma->vm_start);
> -
> -fail:
> -	/*
> -	 * if hint left us with no space for the requested
> -	 * mapping then try again:
> -	 */
> -	if (first_time) {
> -		mm->free_area_cache = base;
> -		largest_hole = 0;
> -		first_time = 0;
> -		goto try_again;
> -	}
> -	/*
> -	 * A failed mmap() very likely causes application failure,
> -	 * so fall back to the bottom-up function here. This scenario
> -	 * can happen with large stack limits and large mmap()
> -	 * allocations.
> -	 */
> -	mm->free_area_cache = TASK_UNMAPPED_BASE;
> -	mm->cached_hole_size = ~0UL;
> -	addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
> -	/*
> -	 * Restore the topdown base:
> -	 */
> -	mm->free_area_cache = base;
> -	mm->cached_hole_size = ~0UL;
> -
> -	return addr;
> -}
> -
> -static int htlb_check_hinted_area(unsigned long addr, unsigned  
> long len)
> -{
> -	struct vm_area_struct *vma;
> -
> -	vma = find_vma(current->mm, addr);
> -	if (TASK_SIZE - len >= addr &&
> -	    (!vma || ((addr + len) <= vma->vm_start)))
> -		return 0;
> -
> -	return -ENOMEM;
> -}
> -
> -static unsigned long htlb_get_low_area(unsigned long len, u16  
> segmask)
> -{
> -	unsigned long addr = 0;
> -	struct vm_area_struct *vma;
> -
> -	vma = find_vma(current->mm, addr);
> -	while (addr + len <= 0x100000000UL) {
> -		BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
> -
> -		if (! __within_hugepage_low_range(addr, len, segmask)) {
> -			addr = ALIGN(addr+1, 1<<SID_SHIFT);
> -			vma = find_vma(current->mm, addr);
> -			continue;
> -		}
> -
> -		if (!vma || (addr + len) <= vma->vm_start)
> -			return addr;
> -		addr = ALIGN(vma->vm_end, HPAGE_SIZE);
> -		/* Depending on segmask this might not be a confirmed
> -		 * hugepage region, so the ALIGN could have skipped
> -		 * some VMAs */
> -		vma = find_vma(current->mm, addr);
> -	}
> -
> -	return -ENOMEM;
> -}
> -
> -static unsigned long htlb_get_high_area(unsigned long len, u16  
> areamask)
> -{
> -	unsigned long addr = 0x100000000UL;
> -	struct vm_area_struct *vma;
> -
> -	vma = find_vma(current->mm, addr);
> -	while (addr + len <= TASK_SIZE_USER64) {
> -		BUG_ON(vma && (addr >= vma->vm_end)); /* invariant */
> -
> -		if (! __within_hugepage_high_range(addr, len, areamask)) {
> -			addr = ALIGN(addr+1, 1UL<<HTLB_AREA_SHIFT);
> -			vma = find_vma(current->mm, addr);
> -			continue;
> -		}
> -
> -		if (!vma || (addr + len) <= vma->vm_start)
> -			return addr;
> -		addr = ALIGN(vma->vm_end, HPAGE_SIZE);
> -		/* Depending on segmask this might not be a confirmed
> -		 * hugepage region, so the ALIGN could have skipped
> -		 * some VMAs */
> -		vma = find_vma(current->mm, addr);
> -	}
> -
> -	return -ENOMEM;
> -}
> -
>  unsigned long hugetlb_get_unmapped_area(struct file *file,  
> unsigned long addr,
>  					unsigned long len, unsigned long pgoff,
>  					unsigned long flags)
>  {
> -	int lastshift;
> -	u16 areamask, curareas;
> -
> -	if (HPAGE_SHIFT == 0)
> -		return -EINVAL;
> -	if (len & ~HPAGE_MASK)
> -		return -EINVAL;
> -	if (len > TASK_SIZE)
> -		return -ENOMEM;
> -
> -	if (!cpu_has_feature(CPU_FTR_16M_PAGE))
> -		return -EINVAL;
> -
> -	/* Paranoia, caller should have dealt with this */
> -	BUG_ON((addr + len)  < addr);
> -
> -	if (test_thread_flag(TIF_32BIT)) {
> -		curareas = current->mm->context.low_htlb_areas;
> -
> -		/* First see if we can use the hint address */
> -		if (addr && (htlb_check_hinted_area(addr, len) == 0)) {
> -			areamask = LOW_ESID_MASK(addr, len);
> -			if (open_low_hpage_areas(current->mm, areamask) == 0)
> -				return addr;
> -		}
> -
> -		/* Next see if we can map in the existing low areas */
> -		addr = htlb_get_low_area(len, curareas);
> -		if (addr != -ENOMEM)
> -			return addr;
> -
> -		/* Finally go looking for areas to open */
> -		lastshift = 0;
> -		for (areamask = LOW_ESID_MASK(0x100000000UL-len, len);
> -		     ! lastshift; areamask >>=1) {
> -			if (areamask & 1)
> -				lastshift = 1;
> -
> -			addr = htlb_get_low_area(len, curareas | areamask);
> -			if ((addr != -ENOMEM)
> -			    && open_low_hpage_areas(current->mm, areamask) == 0)
> -				return addr;
> -		}
> -	} else {
> -		curareas = current->mm->context.high_htlb_areas;
> -
> -		/* First see if we can use the hint address */
> -		/* We discourage 64-bit processes from doing hugepage
> -		 * mappings below 4GB (must use MAP_FIXED) */
> -		if ((addr >= 0x100000000UL)
> -		    && (htlb_check_hinted_area(addr, len) == 0)) {
> -			areamask = HTLB_AREA_MASK(addr, len);
> -			if (open_high_hpage_areas(current->mm, areamask) == 0)
> -				return addr;
> -		}
> -
> -		/* Next see if we can map in the existing high areas */
> -		addr = htlb_get_high_area(len, curareas);
> -		if (addr != -ENOMEM)
> -			return addr;
> -
> -		/* Finally go looking for areas to open */
> -		lastshift = 0;
> -		for (areamask = HTLB_AREA_MASK(TASK_SIZE_USER64-len, len);
> -		     ! lastshift; areamask >>=1) {
> -			if (areamask & 1)
> -				lastshift = 1;
> -
> -			addr = htlb_get_high_area(len, curareas | areamask);
> -			if ((addr != -ENOMEM)
> -			    && open_high_hpage_areas(current->mm, areamask) == 0)
> -				return addr;
> -		}
> -	}
> -	printk(KERN_DEBUG "hugetlb_get_unmapped_area() unable to open"
> -	       " enough areas\n");
> -	return -ENOMEM;
> +	return slice_get_unmapped_area(addr, len, flags,
> +				       mmu_huge_psize, 1, 0);
>  }
>
>  /*
> Index: linux-cell/arch/powerpc/mm/mmu_context_64.c
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/mmu_context_64.c	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/mmu_context_64.c	2007-02-19  
> 17:25:30.000000000 +1100
> @@ -28,6 +28,7 @@ int init_new_context(struct task_struct
>  {
>  	int index;
>  	int err;
> +	int new_context = (mm->context.id == 0);
>
>  again:
>  	if (!idr_pre_get(&mmu_context_idr, GFP_KERNEL))
> @@ -50,9 +51,18 @@ again:
>  	}
>
>  	mm->context.id = index;
> +#ifdef CONFIG_PPC_MM_SLICES
> +	/* The old code would re-promote on fork, we don't do that
> +	 * when using slices as it could cause problems promoting slices
> +	 * that have been forced down to 4K
> +	 */
> +	if (new_context)
> +		slice_set_user_psize(mm, mmu_virtual_psize);
> +#else
>  	mm->context.user_psize = mmu_virtual_psize;
>  	mm->context.sllp = SLB_VSID_USER |
>  		mmu_psize_defs[mmu_virtual_psize].sllp;
> +#endif
>
>  	return 0;
>  }
> Index: linux-cell/arch/powerpc/mm/slb_low.S
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/slb_low.S	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/slb_low.S	2007-02-19  
> 17:25:30.000000000 +1100
> @@ -82,31 +82,45 @@ _GLOBAL(slb_miss_kernel_load_io)
>  	srdi.	r9,r10,USER_ESID_BITS
>  	bne-	8f			/* invalid ea bits set */
>
> -	/* Figure out if the segment contains huge pages */
> -#ifdef CONFIG_HUGETLB_PAGE
> -BEGIN_FTR_SECTION
> -	b	1f
> -END_FTR_SECTION_IFCLR(CPU_FTR_16M_PAGE)
> +
> +	/* when using slices, we extract the psize off the slice bitmaps
> +	 * and then we need to get the sllp encoding off the mmu_psize_defs
> +	 * array.
> +	 *
> +	 * XXX This is a bit inefficient especially for the normal case,
> +	 * so we should try to implement a fast path for the standard page
> +	 * size using the old sllp value so we avoid the array. We cannot
> +	 * really do dynamic patching unfortunately as processes might flip
> +	 * between 4k and 64k standard page size
> +	 */
> +#ifdef CONFIG_PPC_MM_SLICES
>  	cmpldi	r10,16
>
> -	lhz	r9,PACALOWHTLBAREAS(r13)
> -	mr	r11,r10
> +	/* Get the slice index * 4 in r11 and matching slice size mask in  
> r9 */
> +	ld	r9,PACALOWSLICESPSIZE(r13)
> +	sldi	r11,r10,2
>  	blt	5f
> +	ld	r9,PACAHIGHSLICEPSIZE(r13)
> +	srdi	r11,r10,(SLICE_HIGH_SHIFT - SLICE_LOW_SHIFT - 2)
> +	andi.	r11,r11,0x3c
> +
> +5:	/* Extract the psize and multiply to get an array offset */
> +	srd	r9,r9,r11
> +	andi.	r9,r9,0xf
> +	mulli	r9,r9,MMUPSIZEDEFSIZE
>
> -	lhz	r9,PACAHIGHHTLBAREAS(r13)
> -	srdi	r11,r10,(HTLB_AREA_SHIFT-SID_SHIFT)
> -
> -5:	srd	r9,r9,r11
> -	andi.	r9,r9,1
> -	beq	1f
> -_GLOBAL(slb_miss_user_load_huge)
> -	li	r11,0
> -	b	2f
> -1:
> -#endif /* CONFIG_HUGETLB_PAGE */
> -
> +	/* Now get to the array and obtain the sllp
> +	 */
> +	ld	r11,PACATOC(r13)
> +	ld	r11,mmu_psize_defs@got(r11)
> +	add	r11,r11,r9
> +	ld	r11,MMUPSIZESLLP(r11)
> +	ori	r11,r11,SLB_VSID_USER
> +#else
> +	/* paca context sllp already contains the SLB_VSID_USER bits */
>  	lhz	r11,PACACONTEXTSLLP(r13)
> -2:
> +#endif /* CONFIG_PPC_MM_SLICES */
> +
>  	ld	r9,PACACONTEXTID(r13)
>  	rldimi	r10,r9,USER_ESID_BITS,0
>  	b	slb_finish_load
> Index: linux-cell/arch/powerpc/mm/hash_utils_64.c
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/hash_utils_64.c	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/hash_utils_64.c	2007-02-19  
> 17:33:23.000000000 +1100
> @@ -573,6 +573,51 @@ unsigned int hash_page_do_lazy_icache(un
>  	return pp;
>  }
>
> +#ifdef CONFIG_PPC_64K_PAGES
> +static int hash_handle_ci_restrictions(struct mm_struct *mm,  
> unsigned long ea,
> +				       pte_t *ptep, int psize, int user)
> +{
> +	/* If this PTE is non-cacheable, switch to 4k */
> +	if (psize == MMU_PAGE_64K &&
> +	    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
> +		if (user) {
> +			printk(KERN_INFO "Demoting page size of %s\n",
> +			       current->comm);
> +			psize = MMU_PAGE_4K;
> +#ifdef CONFIG_PPC_MM_SLICES
> +			slice_set_user_psize(mm, psize);
> +#else
> +			mm->context.user_psize = MMU_PAGE_4K;
> +			mm->context.sllp = SLB_VSID_USER |
> +				mmu_psize_defs[MMU_PAGE_4K].sllp;
> +#endif
> +		} else if (ea < VMALLOC_END) {
> +			/*
> +			 * some driver did a non-cacheable mapping
> +			 * in vmalloc space, so switch vmalloc
> +			 * to 4k pages
> +			 */
> +			printk(KERN_ALERT "Reducing vmalloc segment "
> +			       "to 4kB pages because of "
> +			       "non-cacheable mapping\n");
> +			psize = mmu_vmalloc_psize = MMU_PAGE_4K;
> +		}
> +	}
> +	if (user) {
> +		if (psize != get_paca()->context.user_psize) {
> +			get_paca()->context = mm->context;
> +			slb_flush_and_rebolt();
> +		}
> +	} else if (get_paca()->vmalloc_sllp !=
> +		   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
> +		get_paca()->vmalloc_sllp =
> +			mmu_psize_defs[mmu_vmalloc_psize].sllp;
> +		slb_flush_and_rebolt();
> +	}
> +	return psize;
> +}
> +#endif /* CONFIG_PPC_64K_PAGES */
> +
>  /* Result code is:
>   *  0 - handled
>   *  1 - normal page fault
> @@ -635,7 +680,8 @@ int hash_page(unsigned long ea, unsigned
>  		local = 1;
>
>  	/* Handle hugepage regions */
> -	if (unlikely(in_hugepage_area(mm->context, ea))) {
> +	if (HPAGE_SHIFT &&
> +	    unlikely(get_slice_psize(mm, ea) == mmu_huge_psize)) {
>  		DBG_LOW(" -> huge page !\n");
>  		return hash_huge_page(mm, access, ea, vsid, local, trap);
>  	}
> @@ -665,39 +711,9 @@ int hash_page(unsigned long ea, unsigned
>  #ifndef CONFIG_PPC_64K_PAGES
>  	rc = __hash_page_4K(ea, access, vsid, ptep, trap, local);
>  #else
> -	if (mmu_ci_restrictions) {
> -		/* If this PTE is non-cacheable, switch to 4k */
> -		if (psize == MMU_PAGE_64K &&
> -		    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
> -			if (user_region) {
> -				psize = MMU_PAGE_4K;
> -				mm->context.user_psize = MMU_PAGE_4K;
> -				mm->context.sllp = SLB_VSID_USER |
> -					mmu_psize_defs[MMU_PAGE_4K].sllp;
> -			} else if (ea < VMALLOC_END) {
> -				/*
> -				 * some driver did a non-cacheable mapping
> -				 * in vmalloc space, so switch vmalloc
> -				 * to 4k pages
> -				 */
> -				printk(KERN_ALERT "Reducing vmalloc segment "
> -				       "to 4kB pages because of "
> -				       "non-cacheable mapping\n");
> -				psize = mmu_vmalloc_psize = MMU_PAGE_4K;
> -			}
> -		}
> -		if (user_region) {
> -			if (psize != get_paca()->context.user_psize) {
> -				get_paca()->context = mm->context;
> -				slb_flush_and_rebolt();
> -			}
> -		} else if (get_paca()->vmalloc_sllp !=
> -			   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
> -			get_paca()->vmalloc_sllp =
> -				mmu_psize_defs[mmu_vmalloc_psize].sllp;
> -			slb_flush_and_rebolt();
> -		}
> -	}
> +	if (mmu_ci_restrictions)
> +		psize = hash_handle_ci_restrictions(mm, ea, ptep, psize,  
> user_region);
> +
>  	if (psize == MMU_PAGE_64K)
>  		rc = __hash_page_64K(ea, access, vsid, ptep, trap, local);
>  	else
> @@ -723,13 +739,16 @@ void hash_preload(struct mm_struct *mm,
>  	pte_t *ptep;
>  	cpumask_t mask;
>  	unsigned long flags;
> +	int psize;
>  	int local = 0;
>
>  	/* We don't want huge pages prefaulted for now
>  	 */
> -	if (unlikely(in_hugepage_area(mm->context, ea)))
> +	if (HPAGE_SHIFT && unlikely(get_slice_psize(mm, ea) ==  
> mmu_huge_psize))
>  		return;
>
> +	BUG_ON(REGION_ID(ea) != USER_REGION_ID);
> +
>  	DBG_LOW("hash_preload(mm=%p, mm->pgdir=%p, ea=%016lx, access=%lx,"
>  		" trap=%lx\n", mm, mm->pgd, ea, access, trap);
>
> @@ -747,21 +766,13 @@ void hash_preload(struct mm_struct *mm,
>  	mask = cpumask_of_cpu(smp_processor_id());
>  	if (cpus_equal(mm->cpu_vm_mask, mask))
>  		local = 1;
> +	psize = mm->context.user_psize;
>  #ifndef CONFIG_PPC_64K_PAGES
>  	__hash_page_4K(ea, access, vsid, ptep, trap, local);
>  #else
> -	if (mmu_ci_restrictions) {
> -		/* If this PTE is non-cacheable, switch to 4k */
> -		if (mm->context.user_psize == MMU_PAGE_64K &&
> -		    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
> -			mm->context.user_psize = MMU_PAGE_4K;
> -			mm->context.sllp = SLB_VSID_USER |
> -				mmu_psize_defs[MMU_PAGE_4K].sllp;
> -			get_paca()->context = mm->context;
> -			slb_flush_and_rebolt();
> -		}
> -	}
> -	if (mm->context.user_psize == MMU_PAGE_64K)
> +	if (mmu_ci_restrictions)
> +		psize = hash_handle_ci_restrictions(mm, ea, ptep, psize, 1);
> +	if (psize == MMU_PAGE_64K)
>  		__hash_page_64K(ea, access, vsid, ptep, trap, local);
>  	else
>  		__hash_page_4K(ea, access, vsid, ptep, trap, local);
> Index: linux-cell/arch/powerpc/mm/slb.c
> ===================================================================
> --- linux-cell.orig/arch/powerpc/mm/slb.c	2007-02-19  
> 17:24:18.000000000 +1100
> +++ linux-cell/arch/powerpc/mm/slb.c	2007-02-19 17:25:30.000000000  
> +1100
> @@ -198,12 +198,6 @@ void slb_initialize(void)
>  	static int slb_encoding_inited;
>  	extern unsigned int *slb_miss_kernel_load_linear;
>  	extern unsigned int *slb_miss_kernel_load_io;
> -#ifdef CONFIG_HUGETLB_PAGE
> -	extern unsigned int *slb_miss_user_load_huge;
> -	unsigned long huge_llp;
> -
> -	huge_llp = mmu_psize_defs[mmu_huge_psize].sllp;
> -#endif
>
>  	/* Prepare our SLB miss handler based on our page size */
>  	linear_llp = mmu_psize_defs[mmu_linear_psize].sllp;
> @@ -220,11 +214,6 @@ void slb_initialize(void)
>
>  		DBG("SLB: linear  LLP = %04x\n", linear_llp);
>  		DBG("SLB: io      LLP = %04x\n", io_llp);
> -#ifdef CONFIG_HUGETLB_PAGE
> -		patch_slb_encoding(slb_miss_user_load_huge,
> -				   SLB_VSID_USER | huge_llp);
> -		DBG("SLB: huge    LLP = %04x\n", huge_llp);
> -#endif
>  	}
>
>  	get_paca()->stab_rr = SLB_NUM_BOLTED;
>
>
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@ozlabs.org
> https://ozlabs.org/mailman/listinfo/linuxppc-dev
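
The hunks above replace the open-coded cache-inhibited handling with a call
to a new hash_handle_ci_restrictions() helper whose body is not part of this
excerpt. A plausible sketch of it, assuming it simply consolidates the logic
removed from hash_page() and hash_preload() above (the actual helper in the
patch may differ):

static int hash_handle_ci_restrictions(struct mm_struct *mm, unsigned long ea,
				       pte_t *ptep, int psize, int user_region)
{
	/* If this PTE is non-cacheable, demote to 4k pages */
	if (psize == MMU_PAGE_64K &&
	    (pte_val(*ptep) & _PAGE_NO_CACHE)) {
		if (user_region) {
			psize = MMU_PAGE_4K;
			mm->context.user_psize = MMU_PAGE_4K;
			mm->context.sllp = SLB_VSID_USER |
				mmu_psize_defs[MMU_PAGE_4K].sllp;
		} else if (ea < VMALLOC_END) {
			/* some driver did a non-cacheable mapping in vmalloc
			 * space, so switch vmalloc to 4k pages */
			printk(KERN_ALERT "Reducing vmalloc segment "
			       "to 4kB pages because of "
			       "non-cacheable mapping\n");
			psize = mmu_vmalloc_psize = MMU_PAGE_4K;
		}
	}
	/* Propagate any page size change to the PACA and rebolt the SLB */
	if (user_region) {
		if (psize != get_paca()->context.user_psize) {
			get_paca()->context = mm->context;
			slb_flush_and_rebolt();
		}
	} else if (get_paca()->vmalloc_sllp !=
		   mmu_psize_defs[mmu_vmalloc_psize].sllp) {
		get_paca()->vmalloc_sllp =
			mmu_psize_defs[mmu_vmalloc_psize].sllp;
		slb_flush_and_rebolt();
	}
	return psize;
}

Returning the (possibly demoted) page size lets both callers pick between
__hash_page_64K() and __hash_page_4K() without duplicating the demotion
logic.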


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19  6:43 [PATCH] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
  2007-02-19 13:23 ` Jimi Xenidis
@ 2007-02-19 15:33 ` Olof Johansson
  2007-02-19 16:49   ` [Cbe-oss-dev] " Arnd Bergmann
  2007-02-19 19:39   ` Benjamin Herrenschmidt
  2007-02-19 18:54 ` Adam Litke
  2007-02-20 19:45 ` Adam Litke
  3 siblings, 2 replies; 18+ messages in thread
From: Olof Johansson @ 2007-02-19 15:33 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Mon, Feb 19, 2007 at 05:43:38PM +1100, Benjamin Herrenschmidt wrote:
> powerpc: Introduce address space "slices"
> 
> This patch provide some infrastructure that will allow proper creation
> of special VMAs with different page sizes on powerpc.
> 
> The basic issue is to be able to do what hugetlbfs does but with
> different page sizes for some other special filesystems, more
> specifically, my need is:
> 
>  - hugetlbfs should still work of course :-)
> 
>  - SPE local store mappings using 64K pages on a 4K base page size
> kernel on Cell

Why? What is the reason they can't use 4K pages?

>  - Some special 4K segments in 64K pages kernels for mapping a dodgy
> specie of powerpc specific infiniband hardware that requires 4K MMU
> mappings for various reasons I won't explain here.


-Olof


* Re: [Cbe-oss-dev] [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 15:33 ` Olof Johansson
@ 2007-02-19 16:49   ` Arnd Bergmann
  2007-02-19 19:39   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 18+ messages in thread
From: Arnd Bergmann @ 2007-02-19 16:49 UTC (permalink / raw)
  To: cbe-oss-dev; +Cc: Olof Johansson, linuxppc-dev list

On Monday 19 February 2007 16:33, Olof Johansson wrote:
>
> >  - SPE local store mappings using 64K pages on a 4K base page size
> > kernel on Cell
>
> Why? What is the reason they can't use 4K pages?
>
Performance: On a system with 16 SPEs, you have 4MB of local store memory.
Assuming you have an application running on them that has basically random
access with DMA to all of them, that is 1024 4k pages, while a single SPE
has only 256 TLB entries. This means you get a high overhead from loading
the PTEs, and (worse) handling all the hash miss faults, and even then
you end up thrashing your TLB.
With 64k pages, you can easily fit all the mappings for local store into
the TLB of one SPE.

Note that for regular memory, we can avoid that problem by using
16MB hugepages, which is not possible for the local store.
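
A quick back-of-the-envelope check of those numbers (a hypothetical userspace
sketch; the 4MB total local store, the 256-entry SPE TLB and the 64K page
size are the figures quoted above, everything else is just arithmetic):

#include <stdio.h>

int main(void)
{
	unsigned long local_store = 16UL * 256 * 1024;	/* 16 SPEs x 256KB of local store = 4MB */
	unsigned long tlb_entries = 256;		/* TLB entries in a single SPE */

	/* 1024 pages: far more than the TLB can hold, hence the thrashing */
	printf("4K pages to map it all:  %lu (vs %lu TLB entries)\n",
	       local_store / (4UL * 1024), tlb_entries);
	/* 64 pages: fits comfortably in one SPE's TLB */
	printf("64K pages to map it all: %lu (vs %lu TLB entries)\n",
	       local_store / (64UL * 1024), tlb_entries);
	return 0;
}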

	Arnd <><


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19  6:43 [PATCH] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
  2007-02-19 13:23 ` Jimi Xenidis
  2007-02-19 15:33 ` Olof Johansson
@ 2007-02-19 18:54 ` Adam Litke
  2007-02-19 19:40   ` Benjamin Herrenschmidt
  2007-02-20 19:45 ` Adam Litke
  3 siblings, 1 reply; 18+ messages in thread
From: Adam Litke @ 2007-02-19 18:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

Patch seems good to me.  I tried it on my power4+ system and it was not
happy.  Have you tested on Power4 at all? 

On Mon, 2007-02-19 at 17:43 +1100, Benjamin Herrenschmidt wrote:
> powerpc: Introduce address space "slices"

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 15:33 ` Olof Johansson
  2007-02-19 16:49   ` [Cbe-oss-dev] " Arnd Bergmann
@ 2007-02-19 19:39   ` Benjamin Herrenschmidt
  2007-02-19 20:15     ` Olof Johansson
  1 sibling, 1 reply; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-19 19:39 UTC (permalink / raw)
  To: Olof Johansson; +Cc: linuxppc-dev list, cbe-oss-dev

On Mon, 2007-02-19 at 09:33 -0600, Olof Johansson wrote:
> On Mon, Feb 19, 2007 at 05:43:38PM +1100, Benjamin Herrenschmidt wrote:
> > powerpc: Introduce address space "slices"
> > 
> > This patch provide some infrastructure that will allow proper creation
> > of special VMAs with different page sizes on powerpc.
> > 
> > The basic issue is to be able to do what hugetlbfs does but with
> > different page sizes for some other special filesystems, more
> > specifically, my need is:
> > 
> >  - hugetlbfs should still work of course :-)
> > 
> >  - SPE local store mappings using 64K pages on a 4K base page size
> > kernel on Cell
> 
> Why? What is the reason they can't use 4K pages?

Reduce TLB/ERAT thrashing. SPE MMUs are fairly small, and in setups where
an SPE maps all the others, we take a performance hit due to thrashing
with 4K pages. However, going to full 64K base page size has other
drawbacks, especially in setups with little main memory.

Ben.


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 18:54 ` Adam Litke
@ 2007-02-19 19:40   ` Benjamin Herrenschmidt
  2007-02-19 20:35     ` Adam Litke
  0 siblings, 1 reply; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-19 19:40 UTC (permalink / raw)
  To: Adam Litke; +Cc: linuxppc-dev list, cbe-oss-dev

On Mon, 2007-02-19 at 12:54 -0600, Adam Litke wrote:
> Patch seems good to me.  I tried it on my power4+ system and it was not
> happy.  Have you tested on Power4 at all? 

No, on Power5 only so far, I might still have something wrong :-) What
did you try and what was not happy ?

Ben


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 13:23 ` Jimi Xenidis
@ 2007-02-19 19:42   ` Benjamin Herrenschmidt
  2007-02-19 19:57     ` Jimi Xenidis
  0 siblings, 1 reply; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-19 19:42 UTC (permalink / raw)
  To: Jimi Xenidis; +Cc: linuxppc-dev list, cbe-oss-dev, Hollis R. Blanchard

On Mon, 2007-02-19 at 08:23 -0500, Jimi Xenidis wrote:
> Hey Ben,
> is it your intention to eventually support "/memory@x/ 
> ibm,expected#pages" and will these rules affect the kernel's linear map?
> My interest is with hypervisors that would like to restrict certain  
> LMBs to 4k so we can do our nasty memory tricks, but still have most  
> of the memory use large pages.

The slices mechanism is strictly a mechanism for dealing with segment
constraints on userland VMA layout, so I don't think it will have any
impact either negative or positive on you.

Limiting physical memory to different page sizes is a whole different
problem that I haven't thought about (damn, that would be hard).

Ben.


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 19:42   ` Benjamin Herrenschmidt
@ 2007-02-19 19:57     ` Jimi Xenidis
  0 siblings, 0 replies; 18+ messages in thread
From: Jimi Xenidis @ 2007-02-19 19:57 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: linuxppc-dev list, cbe-oss-dev, Hollis R. Blanchard


On Feb 19, 2007, at 2:42 PM, Benjamin Herrenschmidt wrote:

> On Mon, 2007-02-19 at 08:23 -0500, Jimi Xenidis wrote:
>> Hey Ben,
>> is it your intention to eventually support "/memory@x/
>> ibm,expected#pages" and will these rules affect the kernel's
>> linear map?
>> My interest is with hypervisors that would like to restrict certain
>> LMBs to 4k so we can do our nasty memory tricks, but still have most
>> of the memory use large pages.
>
> The slices mechanism is strictly a mechanism for dealing with segment
> constraints on userland VMA layout, so I don't think it will have any
> impact either negative or positive on you.
>
> Limiting physical memory to different page sizes is a whole different
> problem that I haven't thought about (damn, that would be hard)

Damn, I know, I was hoping you'd be taking care of it. :)

-JX


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 19:39   ` Benjamin Herrenschmidt
@ 2007-02-19 20:15     ` Olof Johansson
  0 siblings, 0 replies; 18+ messages in thread
From: Olof Johansson @ 2007-02-19 20:15 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Tue, Feb 20, 2007 at 06:39:23AM +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2007-02-19 at 09:33 -0600, Olof Johansson wrote:
> > On Mon, Feb 19, 2007 at 05:43:38PM +1100, Benjamin Herrenschmidt wrote:
> > > powerpc: Introduce address space "slices"
> > > 
> > > This patch provide some infrastructure that will allow proper creation
> > > of special VMAs with different page sizes on powerpc.
> > > 
> > > The basic issue is to be able to do what hugetlbfs does but with
> > > different page sizes for some other special filesystems, more
> > > specifically, my need is:
> > > 
> > >  - hugetlbfs should still work of course :-)
> > > 
> > >  - SPE local store mappings using 64K pages on a 4K base page size
> > > kernel on Cell
> > 
> > Why? What is the reason they can't use 4K pages?
> 
> Reduce TLB/ERAT thrashing. SPE MMUs are fairly small, and in setups where
> an SPE maps all the others, we take a performance hit due to thrashing
> with 4K pages.
>
> However, going to full 64K base page size has other
> drawbacks, especially in setups with little main memory.

Right. I was mostly wondering what the underlying reason for the
requirement was.


Thanks,

-Olof


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 19:40   ` Benjamin Herrenschmidt
@ 2007-02-19 20:35     ` Adam Litke
  2007-02-19 20:47       ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 18+ messages in thread
From: Adam Litke @ 2007-02-19 20:35 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Tue, 2007-02-20 at 06:40 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2007-02-19 at 12:54 -0600, Adam Litke wrote:
> > Patch seems good to me.  I tried it on my power4+ system and it was not
> > happy.  Have you tested on Power4 at all? 
> 
> No, on Power5 only so far, I might still have something wrong :-) What
> did you try and what was not happy ?

I haven't investigated too deeply yet, but it didn't boot.  Seemed
unable to find init.  [ And yes, I am sure it's not something related to
missing scsi drivers ;) ] Anything you want me to try out?  

console log follows...

   Elf32 kernel loaded...

zImage starting: loaded at 0x00400010 (sp: 0x0291fbe0)
Allocating 0x6d72f0 bytes for kernel ...
OF version = 'IBM,RG040719_regatta'
gunzipping (0x3a00000 <- 0x407010:0x6807b7)...done 0x697938 bytes
Finalizing device tree... using OF tree (promptr=00c3c578)
OF stdout device is: /vdevice/vty@0
Hypertas detected, assuming LPAR !
command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171916974 
memory layout at init:
  alloc_bottom : 00000000040dc000
  alloc_top    : 0000000030000000
  alloc_top_hi : 0000000780000000
  rmo_top      : 0000000030000000
  ram_top      : 0000000780000000
Looking for displays
instantiating rtas at 0x000000002fd3a000 ... done
0000000000000000 : boot cpu     0000000000000014
0000000000000001 : starting cpu hw idx 0000000000000015... done
0000000000000002 : starting cpu hw idx 0000000000000016... done
0000000000000003 : starting cpu hw idx 0000000000000017... done
copying OF device tree ...
Building dt strings...
Building dt structure...
Device tree strings 0x00000000040dd000 -> 0x00000000040de225
Device tree struct  0x00000000040df000 -> 0x00000000040e9000
Calling quiesce ...
returning from prom_init
Partition configured for 16 cpus.
Starting Linux PPC64 #48 SMP Mon Feb 19 12:11:37 PST 2007
-----------------------------------------------------
ppc64_pft_size                = 0x1d
physicalMemorySize            = 0x780000000
ppc64_caches.dcache_line_size = 0x80
ppc64_caches.icache_line_size = 0x80
htab_address                  = 0x0000000000000000
htab_hash_mask                = 0x3fffff
-----------------------------------------------------
Linux version 2.6.20-rc7-gcb36fb6c (aglitke@kernel) (gcc version 3.4.2) #48 SMP Mon Feb 19 12:11:37 PST 2007
[boot]0012 Setup Arch
No ramdisk, default root is /dev/sda2
EEH: PCI Enhanced I/O Error Handling Enabled
PPC64 nvram contains 20480 bytes
Zone PFN ranges:
  DMA             0 ->  7864320
  Normal    7864320 ->  7864320
early_node_map[1] active PFN ranges
    0:        0 ->  7864320
[boot]0015 Setup Done
Built 1 zonelists.  Total pages: 7756800
Kernel command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171916974 
[boot]0020 XICS Init
i8259 legacy interrupt controller initialized
[boot]0021 XICS Done
PID hash table entries: 4096 (order: 12, 32768 bytes)
Using pSeries machine description
Partition configured for 16 cpus.
Starting Linux PPC64 #48 SMP Mon Feb 19 12:11:37 PST 2007
-----------------------------------------------------
ppc64_pft_size                = 0x1d
physicalMemorySize            = 0x780000000
ppc64_caches.dcache_line_size = 0x80
ppc64_caches.icache_line_size = 0x80
htab_address                  = 0x0000000000000000
htab_hash_mask                = 0x3fffff
-----------------------------------------------------
Linux version 2.6.20-rc7-gcb36fb6c (aglitke@kernel) (gcc version 3.4.2) #48 SMP Mon Feb 19 12:11:37 PST 2007
[boot]0012 Setup Arch
No ramdisk, default root is /dev/sda2
EEH: PCI Enhanced I/O Error Handling Enabled
PPC64 nvram contains 20480 bytes
Zone PFN ranges:
  DMA             0 ->  7864320
  Normal    7864320 ->  7864320
early_node_map[1] active PFN ranges
    0:        0 ->  7864320
[boot]0015 Setup Done
Built 1 zonelists.  Total pages: 7756800
Kernel command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171916974 
[boot]0020 XICS Init
i8259 legacy interrupt controller initialized
[boot]0021 XICS Done
PID hash table entries: 4096 (order: 12, 32768 bytes)
Console: colour dummy device 80x25
Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
freeing bootmem node 0
Memory: 30967340k/31457280k available (5584k kernel code, 489940k reserved, 952k data, 272k bss, 268k init)
Mount-cache hash table entries: 256
Processor 1 found.
Processor 2 found.
Processor 3 found.
Brought up 4 CPUs
migration_cost=0
NET: Registered protocol family 16
Failed to request PCI IO region on PCI domain 0000
IOMMU table initialized, virtual merging disabled
SCSI subsystem initialized
NET: Registered protocol family 2
IP route cache hash table entries: 1048576 (order: 11, 8388608 bytes)
TCP established hash table entries: 1048576 (order: 12, 16777216 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 1048576 bind 65536)
TCP reno registered
Total HugeTLB memory allocated, 0
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
JFS: nTxBlock = 8192, nTxLock = 65536
SGI XFS with large block/inode numbers, no debug enabled
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
serial8250.0: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250.0: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Floppy drive(s): fd0 is 2.88M
FDC 0 is a National Semiconductor PC87306
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
loop: loaded (max 8 devices)
nbd: registered device at major 43
Intel(R) PRO/1000 Network Driver - version 7.3.15-k2
Copyright (c) 1999-2006 Intel Corporation.
e1000: 0006:61:01.0: e1000_probe: (PCI-X:133MHz:64-bit) 00:02:55:53:78:34
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
pcnet32.c:v1.33 27.Jun.2006 tsbogend@alpha.franken.de
e100: Intel(R) PRO/100 Network Driver, 3.5.17-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
sym0: <1010-66> rev 0x1 at pci 0006:41:01.0 irq 71
sym0: No NVRAM, ID 7, Fast-80, LVD, parity checking
sym0: SCSI BUS has been reset.
scsi0 : sym-2.2.3
 target0:0:8: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 31)
scsi 0:0:8:0: Direct-Access     IBM      IC35L036UCDY10-0 S28C PQ: 0 ANSI: 3
 target0:0:8: tagged command queuing enabled, command queue depth 16.
 target0:0:8: Beginning Domain Validation
 target0:0:8: asynchronous
 target0:0:8: wide asynchronous
 target0:0:8: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:8: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:8: Ending Domain Validation
 target0:0:9: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 31)
scsi 0:0:9:0: Direct-Access     IBM      ST336607LC       C50H PQ: 0 ANSI: 3
 target0:0:9: tagged command queuing enabled, command queue depth 16.
 target0:0:9: Beginning Domain Validation
 target0:0:9: asynchronous
 target0:0:9: wide asynchronous
 target0:0:9: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:9: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:9: Ending Domain Validation
scsi 0:0:15:0: Enclosure         IBM      HSBPD4HA PU3SCSI 0018 PQ: 0 ANSI: 2
 target0:0:15: Beginning Domain Validation
scsi 0:0:15:0: phase change 6-7 6@60000fa0 resid=4.
 target0:0:15: asynchronous
 target0:0:15: Ending Domain Validation
st: Version 20061107, fixed bufsize 32768, s/g segs 256
SCSI device sda: 71096640 512-byte hdwr sectors (36401 MB)
sda: Write Protect is off
SCSI device sda: write cache: disabled, read cache: enabled, doesn't support DPO or FUA
SCSI device sda: 71096640 512-byte hdwr sectors (36401 MB)
sda: Write Protect is off
SCSI device sda: write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2 sda3
sd 0:0:8:0: Attached scsi disk sda
SCSI device sdb: 71096640 512-byte hdwr sectors (36401 MB)
sdb: Write Protect is off
SCSI device sdb: write cache: disabled, read cache: enabled, supports DPO and FUA
SCSI device sdb: 71096640 512-byte hdwr sectors (36401 MB)
sdb: Write Protect is off
SCSI device sdb: write cache: disabled, read cache: enabled, supports DPO and FUA
 sdb: sdb1 sdb2
sd 0:0:9:0: Attached scsi disk sdb
sd 0:0:8:0: Attached scsi generic sg0 type 0
sd 0:0:9:0: Attached scsi generic sg1 type 0
scsi 0:0:15:0: Attached scsi generic sg2 type 13
mice: PS/2 mouse device common for all mice
input: PC Speaker as /class/input/input0
md: linear personality registered for level -1
md: raid0 personality registered for level 0
md: raid1 personality registered for level 1
IPv4 over IPv4 tunneling driver
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 268k freed
Kernel panic - not syncing: No init found.  Try passing init= option to kernel.
 <0>Rebooting in 180 seconds..


-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 20:35     ` Adam Litke
@ 2007-02-19 20:47       ` Benjamin Herrenschmidt
  2007-02-19 21:15         ` Adam Litke
  0 siblings, 1 reply; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-19 20:47 UTC (permalink / raw)
  To: Adam Litke; +Cc: linuxppc-dev list, cbe-oss-dev

On Mon, 2007-02-19 at 14:35 -0600, Adam Litke wrote:
> On Tue, 2007-02-20 at 06:40 +1100, Benjamin Herrenschmidt wrote:
> > On Mon, 2007-02-19 at 12:54 -0600, Adam Litke wrote:
> > > Patch seems good to me.  I tried it on my power4+ system and it was not
> > > happy.  Have you tested on Power4 at all? 
> > 
> > No, on Power5 only so far, I might still have something wrong :-) What
> > did you try and what was not happy ?
> 
> I haven't investigated too deeply yet, but it didn't boot.  Seemed
> unable to find init.  [ And yes, I am sure it's not something related to
> missing scsi drivers ;) ] Anything you want me to try out?  

You can try #define'ing DEBUG in slice.c and booting with "debug" on the
kernel command line ?

Ben.
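
For reference, a minimal standalone illustration of that suggestion, assuming
slice.c follows the usual local-DBG convention seen elsewhere in
arch/powerpc/mm (a hypothetical example, not the actual kernel file, which
would use printk()/udbg_printf() rather than printf()):

#include <stdio.h>

#define DEBUG				/* the step being suggested above */

#ifdef DEBUG
#define DBG(fmt...)	printf(fmt)
#else
#define DBG(fmt...)			/* compiled out when DEBUG is not set */
#endif

int main(void)
{
	DBG("slice debug tracing now visible\n");
	return 0;
}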


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19 20:47       ` Benjamin Herrenschmidt
@ 2007-02-19 21:15         ` Adam Litke
  0 siblings, 0 replies; 18+ messages in thread
From: Adam Litke @ 2007-02-19 21:15 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Tue, 2007-02-20 at 07:47 +1100, Benjamin Herrenschmidt wrote:
> On Mon, 2007-02-19 at 14:35 -0600, Adam Litke wrote:
> > On Tue, 2007-02-20 at 06:40 +1100, Benjamin Herrenschmidt wrote:
> > > On Mon, 2007-02-19 at 12:54 -0600, Adam Litke wrote:
> > > > Patch seems good to me.  I tried it on my power4+ system and it was not
> > > > happy.  Have you tested on Power4 at all? 
> > > 
> > > No, on Power5 only so far, I might still have something wrong :-) What
> > > did you try and what was not happy ?
> > 
> > I haven't investigated too deeply yet, but it didn't boot.  Seemed
> > unable to find init.  [ And yes, I am sure it's not something related to
> > missing scsi drivers ;) ] Anything you want me to try out?  
> 
> You can try #define'ing DEBUG in slice.c and booting with "debug" on the
> kernel command line ?

Hmm didn't seem to add much additional info, but what do I know.  Here
is the new console log.


   Elf32 kernel loaded...

zImage starting: loaded at 0x00400010 (sp: 0x0291fbe0)
Allocating 0x6d72f0 bytes for kernel ...
OF version = 'IBM,RG040719_regatta'
gunzipping (0x3a00000 <- 0x407010:0x6807c4)...done 0x697938 bytes
Finalizing device tree... using OF tree (promptr=00c3c578)
OF stdout device is: /vdevice/vty@0
Hypertas detected, assuming LPAR !
command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171919179 debug
memory layout at init:
  alloc_bottom : 00000000040dc000
  alloc_top    : 0000000030000000
  alloc_top_hi : 0000000780000000
  rmo_top      : 0000000030000000
  ram_top      : 0000000780000000
Looking for displays
instantiating rtas at 0x000000002fd3a000 ... done
0000000000000000 : boot cpu     0000000000000014
0000000000000001 : starting cpu hw idx 0000000000000015... done
0000000000000002 : starting cpu hw idx 0000000000000016... done
0000000000000003 : starting cpu hw idx 0000000000000017... done
copying OF device tree ...
Building dt strings...
Building dt structure...
Device tree strings 0x00000000040dd000 -> 0x00000000040de225
Device tree struct  0x00000000040df000 -> 0x00000000040e9000
Calling quiesce ...
returning from prom_init
Partition configured for 16 cpus.
Starting Linux PPC64 #49 SMP Mon Feb 19 13:08:18 PST 2007
-----------------------------------------------------
ppc64_pft_size                = 0x1d
physicalMemorySize            = 0x780000000
ppc64_caches.dcache_line_size = 0x80
ppc64_caches.icache_line_size = 0x80
htab_address                  = 0x0000000000000000
htab_hash_mask                = 0x3fffff
-----------------------------------------------------
Linux version 2.6.20-rc7-gd835aad4-dirty (aglitke@kernel) (gcc version 3.4.2) #49 SMP Mon Feb 19 13:08:18 PST 2007
[boot]0012 Setup Arch
No ramdisk, default root is /dev/sda2
EEH: PCI Enhanced I/O Error Handling Enabled
PPC64 nvram contains 20480 bytes
Zone PFN ranges:
  DMA             0 ->  7864320
  Normal    7864320 ->  7864320
early_node_map[1] active PFN ranges
    0:        0 ->  7864320
[boot]0015 Setup Done
Built 1 zonelists.  Total pages: 7756800
Kernel command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171919179 debug
[boot]0020 XICS Init
xics: PCI 8259 intack at 0x000003fffdf091f0
i8259 legacy interrupt controller initialized
[boot]0021 XICS Done
PID hash table entries: 4096 (order: 12, 32768 bytes)
time_init: decrementer frequency = 212.995662 MHz
time_init: processor frequency   = 1703.965296 MHz
Using pSeries machine description
Page orders: linear mapping = 24, virtual = 12, io = 12
Found legacy serial port 0 for /pci@3fffdf09000/isa@3/serial@i3f8
  port=3f8, taddr=3fdffe003f8, irq=0, clk=1843200, speed=0
Found legacy serial port 1 for /pci@3fffdf09000/isa@3/serial@i2f8
  port=2f8, taddr=3fdffe002f8, irq=0, clk=1843200, speed=0
Partition configured for 16 cpus.
Starting Linux PPC64 #49 SMP Mon Feb 19 13:08:18 PST 2007
-----------------------------------------------------
ppc64_pft_size                = 0x1d
physicalMemorySize            = 0x780000000
ppc64_caches.dcache_line_size = 0x80
ppc64_caches.icache_line_size = 0x80
htab_address                  = 0x0000000000000000
htab_hash_mask                = 0x3fffff
-----------------------------------------------------
Linux version 2.6.20-rc7-gd835aad4-dirty (aglitke@kernel) (gcc version 3.4.2) #49 SMP Mon Feb 19 13:08:18 PST 2007
[boot]0012 Setup Arch
Top of RAM: 0x780000000, Total RAM: 0x780000000
Memory hole size: 0MB
Entering add_active_range(0, 0, 7864320) 0 entries of 256 used
No ramdisk, default root is /dev/sda2
RTAS: event: 1, Type: Internal Device Failure, Severity: 5
RTAS: event: 2, Type: Internal Device Failure, Severity: 5
EEH: PCI Enhanced I/O Error Handling Enabled
PPC64 nvram contains 20480 bytes
Using default idle loop
Zone PFN ranges:
  DMA             0 ->  7864320
  Normal    7864320 ->  7864320
early_node_map[1] active PFN ranges
    0:        0 ->  7864320
On node 0 totalpages: 7864320
  DMA zone: 107520 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 7756800 pages, LIFO batch:31
  Normal zone: 0 pages used for memmap
[boot]0015 Setup Done
Built 1 zonelists.  Total pages: 7756800
Kernel command line: selinux=0 elevator=cfq autobench_args: root=/dev/sdb1 ABAT:1171919179 debug
[boot]0020 XICS Init
xics: PCI 8259 intack at 0x000003fffdf091f0
i8259 legacy interrupt controller initialized
[boot]0021 XICS Done
PID hash table entries: 4096 (order: 12, 32768 bytes)
time_init: decrementer frequency = 212.995662 MHz
time_init: processor frequency   = 1703.965296 MHz
Console: colour dummy device 80x25
Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
freeing bootmem node 0
Memory: 30967340k/31457280k available (5584k kernel code, 489940k reserved, 952k data, 272k bss, 268k init)
Calibrating delay loop... 424.96 BogoMIPS (lpj=849920)
Mount-cache hash table entries: 256
Processor 1 found.
Processor 2 found.
Processor 3 found.
Brought up 4 CPUs
migration_cost=1
NET: Registered protocol family 16
PCI: Probing PCI hardware
Failed to request PCI IO region on PCI domain 0000
IOMMU table initialized, virtual merging disabled
RTAS: event: 3, Type: Internal Device Failure, Severity: 5
RTAS: event: 4, Type: Internal Device Failure, Severity: 5
ISA bridge at 0000:00:03.0
mapping IO 3fdffe00000 -> d000080000000000, size: 100000
mapping IO 3fe3fe00000 -> d000080000100000, size: 100000
mapping IO 3fe3ff00000 -> d000080000200000, size: 100000
mapping IO 3fe3fd00000 -> d000080000300000, size: 100000
mapping IO 3fcbfe00000 -> d000080000400000, size: 100000
mapping IO 3fcbff00000 -> d000080000500000, size: 100000
mapping IO 3fcbfd00000 -> d000080000600000, size: 100000
PCI: Probing PCI hardware done
Registering pmac pic with sysfs...
SCSI subsystem initialized
NET: Registered protocol family 2
IP route cache hash table entries: 1048576 (order: 11, 8388608 bytes)
TCP established hash table entries: 1048576 (order: 12, 16777216 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 1048576 bind 65536)
TCP reno registered
vio_bus_init: processing c00000077fffbcd8
vio_bus_init: processing c00000077fffbe08
vio_bus_init: processing c00000077fffbf60
RTAS daemon started
RTAS: event: 69, Type: EPOW, Severity: 2
Total HugeTLB memory allocated, 0
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
JFS: nTxBlock = 8192, nTxLock = 65536
SGI XFS with large block/inode numbers, no debug enabled
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
vio_register_driver: driver hvc_console registering
HVSI: registered 0 devices
Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing disabled
serial8250.0: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250.0: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
Floppy drive(s): fd0 is 2.88M
FDC 0 is a National Semiconductor PC87306
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
loop: loaded (max 8 devices)
nbd: registered device at major 43
Intel(R) PRO/1000 Network Driver - version 7.3.15-k2
Copyright (c) 1999-2006 Intel Corporation.
PCI: Enabling device: (0006:61:01.0), cmd 143
e1000: 0006:61:01.0: e1000_probe: (PCI-X:133MHz:64-bit) 00:02:55:53:78:34
e1000: eth0: e1000_probe: Intel(R) PRO/1000 Network Connection
pcnet32.c:v1.33 27.Jun.2006 tsbogend@alpha.franken.de
e100: Intel(R) PRO/100 Network Driver, 3.5.17-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
PCI: Enabling device: (0006:41:01.0), cmd 143
sym0: <1010-66> rev 0x1 at pci 0006:41:01.0 irq 71
sym0: No NVRAM, ID 7, Fast-80, LVD, parity checking
sym0: SCSI BUS has been reset.
scsi0 : sym-2.2.3
 target0:0:8: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 31)
scsi 0:0:8:0: Direct-Access     IBM      IC35L036UCDY10-0 S28C PQ: 0 ANSI: 3
 target0:0:8: tagged command queuing enabled, command queue depth 16.
 target0:0:8: Beginning Domain Validation
 target0:0:8: asynchronous
 target0:0:8: wide asynchronous
 target0:0:8: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:8: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:8: Ending Domain Validation
 target0:0:9: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 31)
scsi 0:0:9:0: Direct-Access     IBM      ST336607LC       C50H PQ: 0 ANSI: 3
 target0:0:9: tagged command queuing enabled, command queue depth 16.
 target0:0:9: Beginning Domain Validation
 target0:0:9: asynchronous
 target0:0:9: wide asynchronous
 target0:0:9: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:9: FAST-80 WIDE SCSI 160.0 MB/s DT (12.5 ns, offset 31)
 target0:0:9: Ending Domain Validation
scsi 0:0:15:0: Enclosure         IBM      HSBPD4HA PU3SCSI 0018 PQ: 0 ANSI: 2
 target0:0:15: Beginning Domain Validation
scsi 0:0:15:0: phase change 6-7 6@60000fa0 resid=4.
 target0:0:15: asynchronous
 target0:0:15: Ending Domain Validation
vio_register_driver: driver ibmvscsi registering
st: Version 20061107, fixed bufsize 32768, s/g segs 256
SCSI device sda: 71096640 512-byte hdwr sectors (36401 MB)
sda: Write Protect is off
sda: Mode Sense: cb 00 00 08
SCSI device sda: write cache: disabled, read cache: enabled, doesn't support DPO or FUA
SCSI device sda: 71096640 512-byte hdwr sectors (36401 MB)
sda: Write Protect is off
sda: Mode Sense: cb 00 00 08
SCSI device sda: write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2 sda3
sd 0:0:8:0: Attached scsi disk sda
SCSI device sdb: 71096640 512-byte hdwr sectors (36401 MB)
sdb: Write Protect is off
sdb: Mode Sense: cb 00 10 08
SCSI device sdb: write cache: disabled, read cache: enabled, supports DPO and FUA
SCSI device sdb: 71096640 512-byte hdwr sectors (36401 MB)
sdb: Write Protect is off
sdb: Mode Sense: cb 00 10 08
SCSI device sdb: write cache: disabled, read cache: enabled, supports DPO and FUA
 sdb: sdb1 sdb2
sd 0:0:9:0: Attached scsi disk sdb
sd 0:0:8:0: Attached scsi generic sg0 type 0
sd 0:0:9:0: Attached scsi generic sg1 type 0
scsi 0:0:15:0: Attached scsi generic sg2 type 13
mice: PS/2 mouse device common for all mice
input: PC Speaker as /class/input/input0
md: linear personality registered for level -1
md: raid0 personality registered for level 0
md: raid1 personality registered for level 1
oprofile: using ppc64/power4 performance monitoring.
IPv4 over IPv4 tunneling driver
TCP cubic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 268k freed
Kernel panic - not syncing: No init found.  Try passing init= option to kernel.

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-19  6:43 [PATCH] powerpc: Introduce address space "slices" Benjamin Herrenschmidt
                   ` (2 preceding siblings ...)
  2007-02-19 18:54 ` Adam Litke
@ 2007-02-20 19:45 ` Adam Litke
  2007-02-20 19:51   ` Benjamin Herrenschmidt
  3 siblings, 1 reply; 18+ messages in thread
From: Adam Litke @ 2007-02-20 19:45 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

Your patch drops the pgoff check that prepare_hugepage_range used to
check.  The misaligned_offset test in libhugetlbfs identified the
problem.  The following patch (applied on top of yours) makes the
problem go away.  I am not necessarily suggesting it's the correct
fix... just concisely describing the problem.
 
commit 95bcfa9c7b086de320cd9a1ff9c7281f7f16b15f
Author: Adam Litke <agl@us.ibm.com>
Date:   Tue Feb 20 11:44:46 2007 -0800

    Restore the pgoff check for prepare_hugepage_range()

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index cbb8c52..f38ab78 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -349,6 +349,9 @@ int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
 
 	printk("prepare_hugepage_range(addr=0x%lx, len=0x%lx\n", addr, len);
 
+	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
+		return -EINVAL;
+
 	/* This is only useful for MAP_FIXED so we turn it into that */
 	gua_addr = slice_get_unmapped_area(addr, len, MAP_FIXED,
 					   mmu_huge_psize, 1, 0);

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center
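
To make the alignment rule concrete, here is a minimal standalone sketch of
what the restored check enforces, assuming the usual ppc64 values of
PAGE_SHIFT = 12 (4K base pages) and HPAGE_SHIFT = 24 (16MB huge pages); those
two constants are assumptions of this example, while the test itself is the
one from the patch above:

#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SHIFT	24
#define HPAGE_MASK	(~((1UL << HPAGE_SHIFT) - 1))

/* The file offset, counted in base pages, must be huge-page aligned */
static int pgoff_is_aligned(unsigned long pgoff)
{
	return !(pgoff & (~HPAGE_MASK >> PAGE_SHIFT));
}

int main(void)
{
	printf("pgoff 0:    %s\n", pgoff_is_aligned(0) ? "ok" : "-EINVAL");
	printf("pgoff 4096: %s\n", pgoff_is_aligned(4096) ? "ok" : "-EINVAL"); /* 4096 * 4K = 16MB */
	printf("pgoff 1:    %s\n", pgoff_is_aligned(1) ? "ok" : "-EINVAL");    /* 4K into a huge page */
	return 0;
}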


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-20 19:45 ` Adam Litke
@ 2007-02-20 19:51   ` Benjamin Herrenschmidt
  2007-02-20 20:07     ` Adam Litke
  2007-02-21  0:29     ` David Gibson
  0 siblings, 2 replies; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-20 19:51 UTC (permalink / raw)
  To: Adam Litke; +Cc: linuxppc-dev list, cbe-oss-dev

On Tue, 2007-02-20 at 13:45 -0600, Adam Litke wrote:
> Your patch drops the pgoff check that prepare_hugepage_range used to
> check.  The misaligned_offset test in libhugetlbfs identified the
> problem.  The following patch (applied on top of yours) makes the
> problem go away.  I am not necessarily suggesting it's the correct
> fix... just concisely describing the problem.

Ok, I'll fold that into the patch. Ultimately, when I finally do the
generic changes, prepare_hugepage_range() will be going away. I will
either pass pgoff along to slice_g_u_a for it to validate the pgoff, or
I will let f_ops->mmap() be responsible for checking it. For SPEs, I do
the pgoff check there. Any reason that wouldn't work for huge pages?

Ben.

> commit 95bcfa9c7b086de320cd9a1ff9c7281f7f16b15f
> Author: Adam Litke <agl@us.ibm.com>
> Date:   Tue Feb 20 11:44:46 2007 -0800
> 
>     Restore the pgoff check for prepare_hugepage_range()
> 
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index cbb8c52..f38ab78 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -349,6 +349,9 @@ int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
>  
>  	printk("prepare_hugepage_range(addr=0x%lx, len=0x%lx\n", addr, len);
>  
> +	if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
> +		return -EINVAL;
> +
>  	/* This is only useful for MAP_FIXED so we turn it into that */
>  	gua_addr = slice_get_unmapped_area(addr, len, MAP_FIXED,
>  					   mmu_huge_psize, 1, 0);
> 


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-20 19:51   ` Benjamin Herrenschmidt
@ 2007-02-20 20:07     ` Adam Litke
  2007-02-21  0:29     ` David Gibson
  1 sibling, 0 replies; 18+ messages in thread
From: Adam Litke @ 2007-02-20 20:07 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Wed, 2007-02-21 at 06:51 +1100, Benjamin Herrenschmidt wrote:
> On Tue, 2007-02-20 at 13:45 -0600, Adam Litke wrote:
> > Your patch drops the pgoff check that prepare_hugepage_range used to
> > check.  The misaligned_offset test in libhugetlbfs identified the
> > problem.  The following patch (applied on top of yours) makes the
> > problem go away.  I am not necessarily suggesting it's the correct
> > fix... just concisely describing the problem.
> 
> Ok, I'll fold that into the patch. Ultimately, when I finally do the
> generic changes, prepare_hugepage_range() will be going away. I will
> either pass pgoff along to slice_g_u_a for it to validate the pgoff, or
> I will let f_ops->mmap() be responsible for checking it. For SPEs, I do
> the pgoff check there. Any reason that wouldn't work for huge pages?

Actually, it should be in ->mmap() if you ask me.  In fact, at some
point in history I think it was there.

-- 
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-20 19:51   ` Benjamin Herrenschmidt
  2007-02-20 20:07     ` Adam Litke
@ 2007-02-21  0:29     ` David Gibson
  2007-02-21  0:40       ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 18+ messages in thread
From: David Gibson @ 2007-02-21  0:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, cbe-oss-dev

On Wed, Feb 21, 2007 at 06:51:47AM +1100, Benjamin Herrenschmidt wrote:
> On Tue, 2007-02-20 at 13:45 -0600, Adam Litke wrote:
> > Your patch drops the pgoff check that prepare_hugepage_range used to
> > check.  The misaligned_offset test in libhugetlbfs identified the
> > problem.  The following patch (applied on top of yours) makes the
> > problem go away.  I am not necessarily suggesting it's the correct
> > fix... just concisely describing the problem.
> 
> Ok, I'll fold that into the patch. Ultimately, when I finally do the
> generic changes, prepare_hugepage_range() will be going away. I will
> either pass pgoff along to slice_g_u_a for it to validate the pgoff, or
> I will let f_ops->mmap() be responsible for checking it. For SPEs, I do
> the pgoff check there. Any reason that wouldn't work for huge pages?

Err... there was.  The trouble was that the prepare() or
get_unmapped_area() call which could open new slices happens before the
->mmap() call, so we could have already converted segments
irreversibly, then had the mmap fail because of a bad alignment.  Now
that slice conversions can go both ways, that might not be a
significant problem any more.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


* Re: [PATCH] powerpc: Introduce address space "slices"
  2007-02-21  0:29     ` David Gibson
@ 2007-02-21  0:40       ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 18+ messages in thread
From: Benjamin Herrenschmidt @ 2007-02-21  0:40 UTC (permalink / raw)
  To: David Gibson; +Cc: linuxppc-dev list, cbe-oss-dev

On Wed, 2007-02-21 at 11:29 +1100, David Gibson wrote:
> On Wed, Feb 21, 2007 at 06:51:47AM +1100, Benjamin Herrenschmidt wrote:
> > On Tue, 2007-02-20 at 13:45 -0600, Adam Litke wrote:
> > > Your patch drops the pgoff check that prepare_hugepage_range used to
> > > check.  The misaligned_offset test in libhugetlbfs identified the
> > > problem.  The following patch (applied on top of yours) makes the
> > > problem go away.  I am not necessarily suggesting it's the correct
> > > fix... just concisely describing the problem.
> > 
> > Ok, I'll fold that into the patch. Ultimately, when I finally do the
> > generic changes, prepare_hugepage_range() will be going away. I will
> > either pass pgoff along to slice_g_u_a for it to validate the pgoff, or
> > I will let f_ops->mmap() be responsible for checking it. For SPEs, I do
> > the pgoff check there. Any reason that wouldn't work for huge pages?
> 
> Err... there was.  The trouble was that the prepare() or
> get_unmapped_area() call which could open new slices happens before the
> ->mmap() call, so we could have already converted segments
> irreversibly, then had the mmap fail because of a bad alignment.  Now
> that slice conversions can go both ways, that might not be a
> significant problem any more.

Ok. I'll start working on the generic g_u_a changes and will try to make
sure I get that right there. In the meantime, Adam's patch is good.

Ben.


