* [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions
@ 2011-01-24 17:55 Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile" Catalin Marinas
                   ` (18 more replies)
  0 siblings, 19 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

Hi,

This set of patches adds support for the Large Physical Address Extensions
(LPAE) on the ARM architecture (available with the Cortex-A15 processor).
LPAE comes with a 3-level page table format (compared to 2-level for the
classic one), allowing up to a 40-bit physical address space.
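
For a rough sense of scale (my arithmetic, not part of the patches):

	/*
	 * One LPAE translation table: 512 entries * 8 bytes = 4KB (one page).
	 * A 40-bit output address spans 1ULL << 40 bytes = 1TB of physical
	 * memory, versus the 4GB reachable with 32-bit descriptors.
	 */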

The ARM LPAE documentation is available from (free registration needed):

http://infocenter.arm.com/help/topic/com.arm.doc.ddi0406b_virtualization_extns/index.html

The full set of patches (kernel fixes, LPAE and support for an emulated
Versatile Express with Cortex-A15 tile) is available on this branch:

git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-2.6-cm.git arm-lpae

Changelog:

- Rebased on 2.6.38-rc1. This kernel includes Russell's refactoring of
  the page table maintenance.
- PTE_PFN_MASK renamed to PHYS_MASK and some of the functions using this
  macro have been unified between classic and LPAE page table format.
- Fixes for modules/pkmap page table entries - the corresponding PMD is
  preallocated in pgd_alloc() and cleaned-up in pgd_free().
- Identity mapping support following mainline changes.


Catalin Marinas (14):
  ARM: Make the argument to virt_to_phys() "const volatile"
  ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE
  ARM: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_*
  ARM: LPAE: Factor out 2-level page table definitions into separate
    files
  ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32
  ARM: LPAE: Use a mask for physical addresses in page table entries
  ARM: LPAE: Introduce the 3-level page table format definitions
  ARM: LPAE: Page table maintenance for the 3-level format
  ARM: LPAE: MMU setup for the 3-level page table format
  ARM: LPAE: Add fault handling support
  ARM: LPAE: Add context switching support
  ARM: LPAE: Add identity mapping support for the 3-level page table
    format
  ARM: LPAE: Add SMP support for the 3-level page table format
  ARM: LPAE: Add the Kconfig entries

Will Deacon (5):
  ARM: LPAE: use long long format when printing physical addresses and
    ptes
  ARM: LPAE: use phys_addr_t instead of unsigned long for physical
    addresses
  ARM: LPAE: Use generic dma_addr_t type definition
  ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem
  ARM: LPAE: add support for ATAG_MEM64

 arch/arm/Kconfig                            |    2 +-
 arch/arm/include/asm/cpu-multi32.h          |    8 +
 arch/arm/include/asm/cpu-single.h           |    4 +
 arch/arm/include/asm/memory.h               |   17 +-
 arch/arm/include/asm/outercache.h           |   14 +-
 arch/arm/include/asm/page.h                 |   44 +-----
 arch/arm/include/asm/pgalloc.h              |   28 ++++-
 arch/arm/include/asm/pgtable-2level-hwdef.h |   93 ++++++++++++
 arch/arm/include/asm/pgtable-2level-types.h |   67 +++++++++
 arch/arm/include/asm/pgtable-2level.h       |  148 +++++++++++++++++++
 arch/arm/include/asm/pgtable-3level-hwdef.h |   81 ++++++++++
 arch/arm/include/asm/pgtable-3level-types.h |   68 +++++++++
 arch/arm/include/asm/pgtable-3level.h       |  106 ++++++++++++++
 arch/arm/include/asm/pgtable-hwdef.h        |   81 +----------
 arch/arm/include/asm/pgtable.h              |  211 +++++++++------------------
 arch/arm/include/asm/proc-fns.h             |   13 ++
 arch/arm/include/asm/setup.h                |   12 ++-
 arch/arm/include/asm/tlbflush.h             |    4 +-
 arch/arm/include/asm/types.h                |   20 +---
 arch/arm/kernel/compat.c                    |    4 +-
 arch/arm/kernel/head.S                      |  125 +++++++++++-----
 arch/arm/kernel/module.c                    |    2 +-
 arch/arm/kernel/setup.c                     |   19 ++-
 arch/arm/kernel/traps.c                     |    6 +-
 arch/arm/mm/Kconfig                         |   13 ++
 arch/arm/mm/alignment.c                     |    8 +-
 arch/arm/mm/context.c                       |   18 ++-
 arch/arm/mm/dma-mapping.c                   |    6 +-
 arch/arm/mm/fault.c                         |   90 +++++++++++-
 arch/arm/mm/idmap.c                         |   39 ++++--
 arch/arm/mm/init.c                          |    6 +-
 arch/arm/mm/ioremap.c                       |    8 +-
 arch/arm/mm/mm.h                            |    4 +-
 arch/arm/mm/mmu.c                           |   88 ++++++++----
 arch/arm/mm/pgd.c                           |   64 ++++++--
 arch/arm/mm/proc-macros.S                   |    5 +-
 arch/arm/mm/proc-v7.S                       |  120 ++++++++++++++--
 37 files changed, 1212 insertions(+), 434 deletions(-)
 create mode 100644 arch/arm/include/asm/pgtable-2level-hwdef.h
 create mode 100644 arch/arm/include/asm/pgtable-2level-types.h
 create mode 100644 arch/arm/include/asm/pgtable-2level.h
 create mode 100644 arch/arm/include/asm/pgtable-3level-hwdef.h
 create mode 100644 arch/arm/include/asm/pgtable-3level-types.h
 create mode 100644 arch/arm/include/asm/pgtable-3level.h




* [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 19:19   ` Stephen Boyd
  2011-01-24 17:55 ` [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE Catalin Marinas
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Stephen Boyd, Arnd Bergmann

Changing the virt_to_phys() argument to "const volatile void *" avoids
compiler warnings when the function is called on pointers to const or
volatile data.
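
For illustration, a minimal example of the kind of call site that warned with
the old "void *" prototype (the names below are made up, not from the tree):

	/* With "void *x", gcc warns that passing ro_table discards the
	 * 'const' qualifier; "const volatile void *" accepts any pointer. */
	static const u32 ro_table[16];

	static unsigned long ro_table_phys(void)
	{
		return virt_to_phys(ro_table);	/* no warning with the new prototype */
	}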

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm/include/asm/memory.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 23c2e8e..d0ee74b 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -188,7 +188,7 @@
  * translation for translating DMA addresses.  Use the driver
  * DMA support - see dma-mapping.h.
  */
-static inline unsigned long virt_to_phys(void *x)
+static inline unsigned long virt_to_phys(const volatile void *x)
 {
 	return __virt_to_phys((unsigned long)(x));
 }



* [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile" Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12  9:56   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes Catalin Marinas
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

With LPAE we no longer have software bits in a separate Linux PTE, so
early_pte_alloc() should pass PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE to
early_alloc() to avoid allocating extra memory.
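
As a sanity check on the new size (my arithmetic; the 2-level values come
from later patches in this series and the LPAE values are assumptions based
on the 3-level definitions):

	/*
	 * Classic 2-level format:
	 *   PTE_HWTABLE_OFF  = 512 * sizeof(pte_t) = 2048  (Linux PTE tables)
	 *   PTE_HWTABLE_SIZE = 512 * sizeof(u32)   = 2048  (hardware PTE tables)
	 *   total            = 4096, the same as 2 * PTRS_PER_PTE * sizeof(pte_t)
	 *
	 * LPAE 3-level format (no separate Linux PTE, 64-bit entries):
	 *   PTE_HWTABLE_OFF  = 0
	 *   PTE_HWTABLE_SIZE = 512 * sizeof(u64)   = 4096
	 * while 2 * PTRS_PER_PTE * sizeof(pte_t) would have allocated 8192.
	 */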

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/mm/mmu.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 3c67e92..dcafa5c 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -533,7 +533,7 @@ static void __init *early_alloc(unsigned long sz)
 static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr, unsigned long prot)
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = early_alloc(2 * PTRS_PER_PTE * sizeof(pte_t));
+		pte_t *pte = early_alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
 		__pmd_populate(pmd, __pa(pte), prot);
 	}
 	BUG_ON(pmd_bad(*pmd));



* [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile" Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12  9:59   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 04/19] ARM: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_* Catalin Marinas
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Will Deacon

From: Will Deacon <will.deacon@arm.com>

Now that the kernel supports both 2-level and 3-level page tables, physical
addresses (and page table entries) may be 32 or 64 bits wide depending on
the configuration.

This patch uses the %08llx conversion specifier for physical addresses
and page table entries, ensuring that they are cast to (long long) so
that common code can be used regardless of the datatype widths.
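
The explicit cast matters because printk() is variadic: %llx always consumes
64 bits from the argument list, so a 32-bit phys_addr_t passed without the
cast would leave the format and argument widths out of sync. A sketch of the
pattern (illustrative only, not a hunk from this patch):

	/* phys_addr_t may be u32 (classic) or u64 (LPAE); the cast keeps the
	 * printk() format string identical in both configurations. */
	phys_addr_t phys = __pfn_to_phys(pfn);
	printk(KERN_INFO "mapping at %#08llx\n", (long long)phys);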

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/kernel/setup.c |    2 +-
 arch/arm/kernel/traps.c |    6 +++---
 arch/arm/mm/fault.c     |   10 ++++++----
 arch/arm/mm/mmu.c       |   30 +++++++++++++++---------------
 4 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 5ea4fb7..3d23f0f 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -449,7 +449,7 @@ static int __init arm_add_memory(unsigned long start, unsigned long size)
 
 	if (meminfo.nr_banks >= NR_BANKS) {
 		printk(KERN_CRIT "NR_BANKS too low, "
-			"ignoring memory at %#lx\n", start);
+			"ignoring memory at %#08llx\n", (long long)start);
 		return -EINVAL;
 	}
 
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index ee57640..8f3ca04 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -712,17 +712,17 @@ EXPORT_SYMBOL(__readwrite_bug);
 
 void __pte_error(const char *file, int line, pte_t pte)
 {
-	printk("%s:%d: bad pte %08lx.\n", file, line, pte_val(pte));
+	printk("%s:%d: bad pte %08llx.\n", file, line, (long long)pte_val(pte));
 }
 
 void __pmd_error(const char *file, int line, pmd_t pmd)
 {
-	printk("%s:%d: bad pmd %08lx.\n", file, line, pmd_val(pmd));
+	printk("%s:%d: bad pmd %08llx.\n", file, line, (long long)pmd_val(pmd));
 }
 
 void __pgd_error(const char *file, int line, pgd_t pgd)
 {
-	printk("%s:%d: bad pgd %08lx.\n", file, line, pgd_val(pgd));
+	printk("%s:%d: bad pgd %08llx.\n", file, line, (long long)pgd_val(pgd));
 }
 
 asmlinkage void __div0(void)
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f10f9ba..ef0e24f 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -76,7 +76,8 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
 
 	printk(KERN_ALERT "pgd = %p\n", mm->pgd);
 	pgd = pgd_offset(mm, addr);
-	printk(KERN_ALERT "[%08lx] *pgd=%08lx", addr, pgd_val(*pgd));
+	printk(KERN_ALERT "[%08lx] *pgd=%08llx",
+			addr, (long long)pgd_val(*pgd));
 
 	do {
 		pmd_t *pmd;
@@ -92,7 +93,7 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
 
 		pmd = pmd_offset(pgd, addr);
 		if (PTRS_PER_PMD != 1)
-			printk(", *pmd=%08lx", pmd_val(*pmd));
+			printk(", *pmd=%08llx", (long long)pmd_val(*pmd));
 
 		if (pmd_none(*pmd))
 			break;
@@ -107,8 +108,9 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
 			break;
 
 		pte = pte_offset_map(pmd, addr);
-		printk(", *pte=%08lx", pte_val(*pte));
-		printk(", *ppte=%08lx", pte_val(pte[PTE_HWTABLE_PTRS]));
+		printk(", *pte=%08llx", (long long)pte_val(*pte));
+		printk(", *ppte=%08llx",
+		       (long long)pte_val(pte[PTE_HWTABLE_PTRS]));
 		pte_unmap(pte);
 	} while(0);
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index dcafa5c..cae8d68 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -597,8 +597,8 @@ static void __init create_36bit_mapping(struct map_desc *md,
 
 	if (!(cpu_architecture() >= CPU_ARCH_ARMv6 || cpu_is_xsc3())) {
 		printk(KERN_ERR "MM: CPU does not support supersection "
-		       "mapping for 0x%08llx at 0x%08lx\n",
-		       __pfn_to_phys((u64)md->pfn), addr);
+		       "mapping for %#08llx at %#08lx\n",
+		       (long long)__pfn_to_phys((u64)md->pfn), addr);
 		return;
 	}
 
@@ -610,15 +610,15 @@ static void __init create_36bit_mapping(struct map_desc *md,
 	 */
 	if (type->domain) {
 		printk(KERN_ERR "MM: invalid domain in supersection "
-		       "mapping for 0x%08llx at 0x%08lx\n",
-		       __pfn_to_phys((u64)md->pfn), addr);
+		       "mapping for %#08llx at %#08lx\n",
+		       (long long)__pfn_to_phys((u64)md->pfn), addr);
 		return;
 	}
 
 	if ((addr | length | __pfn_to_phys(md->pfn)) & ~SUPERSECTION_MASK) {
-		printk(KERN_ERR "MM: cannot create mapping for "
-		       "0x%08llx at 0x%08lx invalid alignment\n",
-		       __pfn_to_phys((u64)md->pfn), addr);
+		printk(KERN_ERR "MM: cannot create mapping for %#08llx"
+		       " at %#08lx invalid alignment\n",
+		       (long long)__pfn_to_phys((u64)md->pfn), addr);
 		return;
 	}
 
@@ -657,17 +657,17 @@ static void __init create_mapping(struct map_desc *md)
 	pgd_t *pgd;
 
 	if (md->virtual != vectors_base() && md->virtual < TASK_SIZE) {
-		printk(KERN_WARNING "BUG: not creating mapping for "
-		       "0x%08llx at 0x%08lx in user region\n",
-		       __pfn_to_phys((u64)md->pfn), md->virtual);
+		printk(KERN_WARNING "BUG: not creating mapping for %#08llx"
+		       " at %#08lx in user region\n",
+		       (long long)__pfn_to_phys((u64)md->pfn), md->virtual);
 		return;
 	}
 
 	if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
 	    md->virtual >= PAGE_OFFSET && md->virtual < VMALLOC_END) {
-		printk(KERN_WARNING "BUG: mapping for 0x%08llx at 0x%08lx "
-		       "overlaps vmalloc space\n",
-		       __pfn_to_phys((u64)md->pfn), md->virtual);
+		printk(KERN_WARNING "BUG: mapping for %#08llx"
+		       " at %#08lx overlaps vmalloc space\n",
+		       (long long)__pfn_to_phys((u64)md->pfn), md->virtual);
 	}
 
 	type = &mem_types[md->type];
@@ -685,9 +685,9 @@ static void __init create_mapping(struct map_desc *md)
 	length = PAGE_ALIGN(md->length + (md->virtual & ~PAGE_MASK));
 
 	if (type->prot_l1 == 0 && ((addr | phys | length) & ~SECTION_MASK)) {
-		printk(KERN_WARNING "BUG: map for 0x%08lx at 0x%08lx can not "
+		printk(KERN_WARNING "BUG: map for %#08llx at %#08lx can not "
 		       "be mapped using pages, ignoring.\n",
-		       __pfn_to_phys(md->pfn), addr);
+		       (long long)__pfn_to_phys(md->pfn), addr);
 		return;
 	}
 



* [PATCH v4 04/19] ARM: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_*
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (2 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 05/19] ARM: LPAE: Factor out 2-level page table definitions into separate files Catalin Marinas
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have
the same value (21). This patch converts the PGDIR_* uses in the kernel
to the PMD_* equivalent so that LPAE builds can reuse the same code.
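
A quick sketch of why the substitution is a no-op for the classic format
(values taken from the 2-level definitions later in this series):

	/*
	 * Classic 2-level format:
	 *   PMD_SHIFT   = 21  ->  PMD_SIZE   = 2MB, PMD_MASK   = 0xffe00000
	 *   PGDIR_SHIFT = 21  ->  PGDIR_SIZE = 2MB, PGDIR_MASK = 0xffe00000
	 * so stepping or aligning by PMD_* is bit-for-bit identical to PGDIR_*.
	 * Under LPAE the two shifts differ, and these loops really operate at
	 * the PMD (2MB section) granularity.
	 */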

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/kernel/module.c  |    2 +-
 arch/arm/mm/dma-mapping.c |    6 +++---
 arch/arm/mm/mmu.c         |   10 +++++-----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index 2cfe816..94292bb 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -32,7 +32,7 @@
  * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off.
  */
 #undef MODULES_VADDR
-#define MODULES_VADDR	(((unsigned long)_etext + ~PGDIR_MASK) & PGDIR_MASK)
+#define MODULES_VADDR	(((unsigned long)_etext + ~PMD_MASK) & PMD_MASK)
 #endif
 
 #ifdef CONFIG_MMU
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 4771dba..4fbcda8 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -121,8 +121,8 @@ static void __dma_free_buffer(struct page *page, size_t size)
 #endif
 
 #define CONSISTENT_OFFSET(x)	(((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT)
-#define CONSISTENT_PTE_INDEX(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PGDIR_SHIFT)
-#define NUM_CONSISTENT_PTES (CONSISTENT_DMA_SIZE >> PGDIR_SHIFT)
+#define CONSISTENT_PTE_INDEX(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PMD_SHIFT)
+#define NUM_CONSISTENT_PTES (CONSISTENT_DMA_SIZE >> PMD_SHIFT)
 
 /*
  * These are the page tables (2MB each) covering uncached, DMA consistent allocations
@@ -172,7 +172,7 @@ static int __init consistent_init(void)
 		}
 
 		consistent_pte[i++] = pte;
-		base += (1 << PGDIR_SHIFT);
+		base += (1 << PMD_SHIFT);
 	} while (base < CONSISTENT_END);
 
 	return ret;
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index cae8d68..195a31e 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -857,14 +857,14 @@ static inline void prepare_page_table(void)
 	/*
 	 * Clear out all the mappings below the kernel image.
 	 */
-	for (addr = 0; addr < MODULES_VADDR; addr += PGDIR_SIZE)
+	for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 
 #ifdef CONFIG_XIP_KERNEL
 	/* The XIP kernel is mapped in the module area -- skip over it */
-	addr = ((unsigned long)_etext + PGDIR_SIZE - 1) & PGDIR_MASK;
+	addr = ((unsigned long)_etext + PMD_SIZE - 1) & PMD_MASK;
 #endif
-	for ( ; addr < PAGE_OFFSET; addr += PGDIR_SIZE)
+	for ( ; addr < PAGE_OFFSET; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 
 	/*
@@ -879,7 +879,7 @@ static inline void prepare_page_table(void)
 	 * memory bank, up to the end of the vmalloc region.
 	 */
 	for (addr = __phys_to_virt(end);
-	     addr < VMALLOC_END; addr += PGDIR_SIZE)
+	     addr < VMALLOC_END; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 }
 
@@ -920,7 +920,7 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
 	 */
 	vectors_page = early_alloc(PAGE_SIZE);
 
-	for (addr = VMALLOC_END; addr; addr += PGDIR_SIZE)
+	for (addr = VMALLOC_END; addr; addr += PMD_SIZE)
 		pmd_clear(pmd_off_k(addr));
 
 	/*



* [PATCH v4 05/19] ARM: LPAE: Factor out 2-level page table definitions into separate files
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (3 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 04/19] ARM: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_* Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32 Catalin Marinas
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch moves page table definitions from asm/page.h, asm/pgtable.h
and asm/pgtable-hwdef.h into the corresponding *-2level* files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/page.h                 |   42 +--------
 arch/arm/include/asm/pgtable-2level-hwdef.h |   91 +++++++++++++++++
 arch/arm/include/asm/pgtable-2level-types.h |   64 ++++++++++++
 arch/arm/include/asm/pgtable-2level.h       |  143 +++++++++++++++++++++++++++
 arch/arm/include/asm/pgtable-hwdef.h        |   77 +--------------
 arch/arm/include/asm/pgtable.h              |  135 +-------------------------
 6 files changed, 302 insertions(+), 250 deletions(-)
 create mode 100644 arch/arm/include/asm/pgtable-2level-hwdef.h
 create mode 100644 arch/arm/include/asm/pgtable-2level-types.h
 create mode 100644 arch/arm/include/asm/pgtable-2level.h

diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index f51a695..3848105 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -151,47 +151,7 @@ extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
 #define clear_page(page)	memset((void *)(page), 0, PAGE_SIZE)
 extern void copy_page(void *to, const void *from);
 
-typedef unsigned long pteval_t;
-
-#undef STRICT_MM_TYPECHECKS
-
-#ifdef STRICT_MM_TYPECHECKS
-/*
- * These are used to make use of C type-checking..
- */
-typedef struct { pteval_t pte; } pte_t;
-typedef struct { unsigned long pmd; } pmd_t;
-typedef struct { unsigned long pgd[2]; } pgd_t;
-typedef struct { unsigned long pgprot; } pgprot_t;
-
-#define pte_val(x)      ((x).pte)
-#define pmd_val(x)      ((x).pmd)
-#define pgd_val(x)	((x).pgd[0])
-#define pgprot_val(x)   ((x).pgprot)
-
-#define __pte(x)        ((pte_t) { (x) } )
-#define __pmd(x)        ((pmd_t) { (x) } )
-#define __pgprot(x)     ((pgprot_t) { (x) } )
-
-#else
-/*
- * .. while these make it easier on the compiler
- */
-typedef pteval_t pte_t;
-typedef unsigned long pmd_t;
-typedef unsigned long pgd_t[2];
-typedef unsigned long pgprot_t;
-
-#define pte_val(x)      (x)
-#define pmd_val(x)      (x)
-#define pgd_val(x)	((x)[0])
-#define pgprot_val(x)   (x)
-
-#define __pte(x)        (x)
-#define __pmd(x)        (x)
-#define __pgprot(x)     (x)
-
-#endif /* STRICT_MM_TYPECHECKS */
+#include <asm/pgtable-2level-types.h>
 
 #endif /* CONFIG_MMU */
 
diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h
new file mode 100644
index 0000000..436529c
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-2level-hwdef.h
@@ -0,0 +1,91 @@
+/*
+ *  arch/arm/include/asm/pgtable-2level-hwdef.h
+ *
+ *  Copyright (C) 1995-2002 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef _ASM_PGTABLE_2LEVEL_HWDEF_H
+#define _ASM_PGTABLE_2LEVEL_HWDEF_H
+
+/*
+ * Hardware page table definitions.
+ *
+ * + Level 1 descriptor (PMD)
+ *   - common
+ */
+#define PMD_TYPE_MASK		(3 << 0)
+#define PMD_TYPE_FAULT		(0 << 0)
+#define PMD_TYPE_TABLE		(1 << 0)
+#define PMD_TYPE_SECT		(2 << 0)
+#define PMD_BIT4		(1 << 4)
+#define PMD_DOMAIN(x)		((x) << 5)
+#define PMD_PROTECTION		(1 << 9)	/* v5 */
+/*
+ *   - section
+ */
+#define PMD_SECT_BUFFERABLE	(1 << 2)
+#define PMD_SECT_CACHEABLE	(1 << 3)
+#define PMD_SECT_XN		(1 << 4)	/* v6 */
+#define PMD_SECT_AP_WRITE	(1 << 10)
+#define PMD_SECT_AP_READ	(1 << 11)
+#define PMD_SECT_TEX(x)		((x) << 12)	/* v5 */
+#define PMD_SECT_APX		(1 << 15)	/* v6 */
+#define PMD_SECT_S		(1 << 16)	/* v6 */
+#define PMD_SECT_nG		(1 << 17)	/* v6 */
+#define PMD_SECT_SUPER		(1 << 18)	/* v6 */
+#define PMD_SECT_AF		(0)
+
+#define PMD_SECT_UNCACHED	(0)
+#define PMD_SECT_BUFFERED	(PMD_SECT_BUFFERABLE)
+#define PMD_SECT_WT		(PMD_SECT_CACHEABLE)
+#define PMD_SECT_WB		(PMD_SECT_CACHEABLE | PMD_SECT_BUFFERABLE)
+#define PMD_SECT_MINICACHE	(PMD_SECT_TEX(1) | PMD_SECT_CACHEABLE)
+#define PMD_SECT_WBWA		(PMD_SECT_TEX(1) | PMD_SECT_CACHEABLE | PMD_SECT_BUFFERABLE)
+#define PMD_SECT_NONSHARED_DEV	(PMD_SECT_TEX(2))
+
+/*
+ *   - coarse table (not used)
+ */
+
+/*
+ * + Level 2 descriptor (PTE)
+ *   - common
+ */
+#define PTE_TYPE_MASK		(3 << 0)
+#define PTE_TYPE_FAULT		(0 << 0)
+#define PTE_TYPE_LARGE		(1 << 0)
+#define PTE_TYPE_SMALL		(2 << 0)
+#define PTE_TYPE_EXT		(3 << 0)	/* v5 */
+#define PTE_BUFFERABLE		(1 << 2)
+#define PTE_CACHEABLE		(1 << 3)
+
+/*
+ *   - extended small page/tiny page
+ */
+#define PTE_EXT_XN		(1 << 0)	/* v6 */
+#define PTE_EXT_AP_MASK		(3 << 4)
+#define PTE_EXT_AP0		(1 << 4)
+#define PTE_EXT_AP1		(2 << 4)
+#define PTE_EXT_AP_UNO_SRO	(0 << 4)
+#define PTE_EXT_AP_UNO_SRW	(PTE_EXT_AP0)
+#define PTE_EXT_AP_URO_SRW	(PTE_EXT_AP1)
+#define PTE_EXT_AP_URW_SRW	(PTE_EXT_AP1|PTE_EXT_AP0)
+#define PTE_EXT_TEX(x)		((x) << 6)	/* v5 */
+#define PTE_EXT_APX		(1 << 9)	/* v6 */
+#define PTE_EXT_COHERENT	(1 << 9)	/* XScale3 */
+#define PTE_EXT_SHARED		(1 << 10)	/* v6 */
+#define PTE_EXT_NG		(1 << 11)	/* v6 */
+
+/*
+ *   - small page
+ */
+#define PTE_SMALL_AP_MASK	(0xff << 4)
+#define PTE_SMALL_AP_UNO_SRO	(0x00 << 4)
+#define PTE_SMALL_AP_UNO_SRW	(0x55 << 4)
+#define PTE_SMALL_AP_URO_SRW	(0xaa << 4)
+#define PTE_SMALL_AP_URW_SRW	(0xff << 4)
+
+#endif
diff --git a/arch/arm/include/asm/pgtable-2level-types.h b/arch/arm/include/asm/pgtable-2level-types.h
new file mode 100644
index 0000000..8ff6941
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-2level-types.h
@@ -0,0 +1,64 @@
+/*
+ * arch/arm/include/asm/pgtable_32_types.h
+ *
+ * Copyright (C) 1995-2003 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef _ASM_PGTABLE_2LEVEL_TYPES_H
+#define _ASM_PGTABLE_2LEVEL_TYPES_H
+
+typedef unsigned long pteval_t;
+
+#undef STRICT_MM_TYPECHECKS
+
+#ifdef STRICT_MM_TYPECHECKS
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { pteval_t pte; } pte_t;
+typedef struct { unsigned long pmd; } pmd_t;
+typedef struct { unsigned long pgd[2]; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x)      ((x).pte)
+#define pmd_val(x)      ((x).pmd)
+#define pgd_val(x)	((x).pgd[0])
+#define pgprot_val(x)   ((x).pgprot)
+
+#define __pte(x)        ((pte_t) { (x) } )
+#define __pmd(x)        ((pmd_t) { (x) } )
+#define __pgprot(x)     ((pgprot_t) { (x) } )
+
+#else
+/*
+ * .. while these make it easier on the compiler
+ */
+typedef pteval_t pte_t;
+typedef unsigned long pmd_t;
+typedef unsigned long pgd_t[2];
+typedef unsigned long pgprot_t;
+
+#define pte_val(x)      (x)
+#define pmd_val(x)      (x)
+#define pgd_val(x)	((x)[0])
+#define pgprot_val(x)   (x)
+
+#define __pte(x)        (x)
+#define __pmd(x)        (x)
+#define __pgprot(x)     (x)
+
+#endif /* STRICT_MM_TYPECHECKS */
+
+#endif	/* _ASM_PGTABLE_2LEVEL_TYPES_H */
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
new file mode 100644
index 0000000..470457e
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -0,0 +1,143 @@
+/*
+ *  arch/arm/include/asm/pgtable-2level.h
+ *
+ *  Copyright (C) 1995-2002 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef _ASM_PGTABLE_2LEVEL_H
+#define _ASM_PGTABLE_2LEVEL_H
+
+/*
+ * Hardware-wise, we have a two level page table structure, where the first
+ * level has 4096 entries, and the second level has 256 entries.  Each entry
+ * is one 32-bit word.  Most of the bits in the second level entry are used
+ * by hardware, and there aren't any "accessed" and "dirty" bits.
+ *
+ * Linux on the other hand has a three level page table structure, which can
+ * be wrapped to fit a two level page table structure easily - using the PGD
+ * and PTE only.  However, Linux also expects one "PTE" table per page, and
+ * at least a "dirty" bit.
+ *
+ * Therefore, we tweak the implementation slightly - we tell Linux that we
+ * have 2048 entries in the first level, each of which is 8 bytes (iow, two
+ * hardware pointers to the second level.)  The second level contains two
+ * hardware PTE tables arranged contiguously, preceded by Linux versions
+ * which contain the state information Linux needs.  We, therefore, end up
+ * with 512 entries in the "PTE" level.
+ *
+ * This leads to the page tables having the following layout:
+ *
+ *    pgd             pte
+ * |        |
+ * +--------+
+ * |        |       +------------+ +0
+ * +- - - - +       | Linux pt 0 |
+ * |        |       +------------+ +1024
+ * +--------+ +0    | Linux pt 1 |
+ * |        |-----> +------------+ +2048
+ * +- - - - + +4    |  h/w pt 0  |
+ * |        |-----> +------------+ +3072
+ * +--------+ +8    |  h/w pt 1  |
+ * |        |       +------------+ +4096
+ *
+ * See L_PTE_xxx below for definitions of bits in the "Linux pt", and
+ * PTE_xxx for definitions of bits appearing in the "h/w pt".
+ *
+ * PMD_xxx definitions refer to bits in the first level page table.
+ *
+ * The "dirty" bit is emulated by only granting hardware write permission
+ * iff the page is marked "writable" and "dirty" in the Linux PTE.  This
+ * means that a write to a clean page will cause a permission fault, and
+ * the Linux MM layer will mark the page dirty via handle_pte_fault().
+ * For the hardware to notice the permission change, the TLB entry must
+ * be flushed, and ptep_set_access_flags() does that for us.
+ *
+ * The "accessed" or "young" bit is emulated by a similar method; we only
+ * allow accesses to the page if the "young" bit is set.  Accesses to the
+ * page will cause a fault, and handle_pte_fault() will set the young bit
+ * for us as long as the page is marked present in the corresponding Linux
+ * PTE entry.  Again, ptep_set_access_flags() will ensure that the TLB is
+ * up to date.
+ *
+ * However, when the "young" bit is cleared, we deny access to the page
+ * by clearing the hardware PTE.  Currently Linux does not flush the TLB
+ * for us in this case, which means the TLB will retain the transation
+ * until either the TLB entry is evicted under pressure, or a context
+ * switch which changes the user space mapping occurs.
+ */
+#define PTRS_PER_PTE		512
+#define PTRS_PER_PMD		1
+#define PTRS_PER_PGD		2048
+
+#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
+#define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
+#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))
+
+/*
+ * PMD_SHIFT determines the size of the area a second-level page table can map
+ * PGDIR_SHIFT determines what a third-level page table entry can map
+ */
+#define PMD_SHIFT		21
+#define PGDIR_SHIFT		21
+
+#define PMD_SIZE		(1UL << PMD_SHIFT)
+#define PMD_MASK		(~(PMD_SIZE-1))
+#define PGDIR_SIZE		(1UL << PGDIR_SHIFT)
+#define PGDIR_MASK		(~(PGDIR_SIZE-1))
+
+/*
+ * section address mask and size definitions.
+ */
+#define SECTION_SHIFT		20
+#define SECTION_SIZE		(1UL << SECTION_SHIFT)
+#define SECTION_MASK		(~(SECTION_SIZE-1))
+
+/*
+ * ARMv6 supersection address mask and size definitions.
+ */
+#define SUPERSECTION_SHIFT	24
+#define SUPERSECTION_SIZE	(1UL << SUPERSECTION_SHIFT)
+#define SUPERSECTION_MASK	(~(SUPERSECTION_SIZE-1))
+
+#define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)
+
+/*
+ * "Linux" PTE definitions.
+ *
+ * We keep two sets of PTEs - the hardware and the linux version.
+ * This allows greater flexibility in the way we map the Linux bits
+ * onto the hardware tables, and allows us to have YOUNG and DIRTY
+ * bits.
+ *
+ * The PTE table pointer refers to the hardware entries; the "Linux"
+ * entries are stored 1024 bytes below.
+ */
+#define L_PTE_PRESENT		(_AT(pteval_t, 1) << 0)
+#define L_PTE_YOUNG		(_AT(pteval_t, 1) << 1)
+#define L_PTE_FILE		(_AT(pteval_t, 1) << 2)	/* only when !PRESENT */
+#define L_PTE_DIRTY		(_AT(pteval_t, 1) << 6)
+#define L_PTE_RDONLY		(_AT(pteval_t, 1) << 7)
+#define L_PTE_USER		(_AT(pteval_t, 1) << 8)
+#define L_PTE_XN		(_AT(pteval_t, 1) << 9)
+#define L_PTE_SHARED		(_AT(pteval_t, 1) << 10)	/* shared(v6), coherent(xsc3) */
+
+/*
+ * These are the memory types, defined to be compatible with
+ * pre-ARMv6 CPUs cacheable and bufferable bits:   XXCB
+ */
+#define L_PTE_MT_UNCACHED	(_AT(pteval_t, 0x00) << 2)	/* 0000 */
+#define L_PTE_MT_BUFFERABLE	(_AT(pteval_t, 0x01) << 2)	/* 0001 */
+#define L_PTE_MT_WRITETHROUGH	(_AT(pteval_t, 0x02) << 2)	/* 0010 */
+#define L_PTE_MT_WRITEBACK	(_AT(pteval_t, 0x03) << 2)	/* 0011 */
+#define L_PTE_MT_MINICACHE	(_AT(pteval_t, 0x06) << 2)	/* 0110 (sa1100, xscale) */
+#define L_PTE_MT_WRITEALLOC	(_AT(pteval_t, 0x07) << 2)	/* 0111 */
+#define L_PTE_MT_DEV_SHARED	(_AT(pteval_t, 0x04) << 2)	/* 0100 */
+#define L_PTE_MT_DEV_NONSHARED	(_AT(pteval_t, 0x0c) << 2)	/* 1100 */
+#define L_PTE_MT_DEV_WC		(_AT(pteval_t, 0x09) << 2)	/* 1001 */
+#define L_PTE_MT_DEV_CACHED	(_AT(pteval_t, 0x0b) << 2)	/* 1011 */
+#define L_PTE_MT_MASK		(_AT(pteval_t, 0x0f) << 2)
+
+#endif /* _ASM_PGTABLE_2LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-hwdef.h b/arch/arm/include/asm/pgtable-hwdef.h
index fd1521d..1831111 100644
--- a/arch/arm/include/asm/pgtable-hwdef.h
+++ b/arch/arm/include/asm/pgtable-hwdef.h
@@ -10,81 +10,6 @@
 #ifndef _ASMARM_PGTABLE_HWDEF_H
 #define _ASMARM_PGTABLE_HWDEF_H
 
-/*
- * Hardware page table definitions.
- *
- * + Level 1 descriptor (PMD)
- *   - common
- */
-#define PMD_TYPE_MASK		(3 << 0)
-#define PMD_TYPE_FAULT		(0 << 0)
-#define PMD_TYPE_TABLE		(1 << 0)
-#define PMD_TYPE_SECT		(2 << 0)
-#define PMD_BIT4		(1 << 4)
-#define PMD_DOMAIN(x)		((x) << 5)
-#define PMD_PROTECTION		(1 << 9)	/* v5 */
-/*
- *   - section
- */
-#define PMD_SECT_BUFFERABLE	(1 << 2)
-#define PMD_SECT_CACHEABLE	(1 << 3)
-#define PMD_SECT_XN		(1 << 4)	/* v6 */
-#define PMD_SECT_AP_WRITE	(1 << 10)
-#define PMD_SECT_AP_READ	(1 << 11)
-#define PMD_SECT_TEX(x)		((x) << 12)	/* v5 */
-#define PMD_SECT_APX		(1 << 15)	/* v6 */
-#define PMD_SECT_S		(1 << 16)	/* v6 */
-#define PMD_SECT_nG		(1 << 17)	/* v6 */
-#define PMD_SECT_SUPER		(1 << 18)	/* v6 */
-
-#define PMD_SECT_UNCACHED	(0)
-#define PMD_SECT_BUFFERED	(PMD_SECT_BUFFERABLE)
-#define PMD_SECT_WT		(PMD_SECT_CACHEABLE)
-#define PMD_SECT_WB		(PMD_SECT_CACHEABLE | PMD_SECT_BUFFERABLE)
-#define PMD_SECT_MINICACHE	(PMD_SECT_TEX(1) | PMD_SECT_CACHEABLE)
-#define PMD_SECT_WBWA		(PMD_SECT_TEX(1) | PMD_SECT_CACHEABLE | PMD_SECT_BUFFERABLE)
-#define PMD_SECT_NONSHARED_DEV	(PMD_SECT_TEX(2))
-
-/*
- *   - coarse table (not used)
- */
-
-/*
- * + Level 2 descriptor (PTE)
- *   - common
- */
-#define PTE_TYPE_MASK		(3 << 0)
-#define PTE_TYPE_FAULT		(0 << 0)
-#define PTE_TYPE_LARGE		(1 << 0)
-#define PTE_TYPE_SMALL		(2 << 0)
-#define PTE_TYPE_EXT		(3 << 0)	/* v5 */
-#define PTE_BUFFERABLE		(1 << 2)
-#define PTE_CACHEABLE		(1 << 3)
-
-/*
- *   - extended small page/tiny page
- */
-#define PTE_EXT_XN		(1 << 0)	/* v6 */
-#define PTE_EXT_AP_MASK		(3 << 4)
-#define PTE_EXT_AP0		(1 << 4)
-#define PTE_EXT_AP1		(2 << 4)
-#define PTE_EXT_AP_UNO_SRO	(0 << 4)
-#define PTE_EXT_AP_UNO_SRW	(PTE_EXT_AP0)
-#define PTE_EXT_AP_URO_SRW	(PTE_EXT_AP1)
-#define PTE_EXT_AP_URW_SRW	(PTE_EXT_AP1|PTE_EXT_AP0)
-#define PTE_EXT_TEX(x)		((x) << 6)	/* v5 */
-#define PTE_EXT_APX		(1 << 9)	/* v6 */
-#define PTE_EXT_COHERENT	(1 << 9)	/* XScale3 */
-#define PTE_EXT_SHARED		(1 << 10)	/* v6 */
-#define PTE_EXT_NG		(1 << 11)	/* v6 */
-
-/*
- *   - small page
- */
-#define PTE_SMALL_AP_MASK	(0xff << 4)
-#define PTE_SMALL_AP_UNO_SRO	(0x00 << 4)
-#define PTE_SMALL_AP_UNO_SRW	(0x55 << 4)
-#define PTE_SMALL_AP_URO_SRW	(0xaa << 4)
-#define PTE_SMALL_AP_URW_SRW	(0xff << 4)
+#include <asm/pgtable-2level-hwdef.h>
 
 #endif
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index ebcb643..218bdea 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -24,6 +24,8 @@
 #include <mach/vmalloc.h>
 #include <asm/pgtable-hwdef.h>
 
+#include <asm/pgtable-2level.h>
+
 /*
  * Just any arbitrary offset to the start of the vmalloc VM area: the
  * current 8MB value just means that there will be a 8MB "hole" after the
@@ -41,79 +43,6 @@
 #define VMALLOC_START		(((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
 #endif
 
-/*
- * Hardware-wise, we have a two level page table structure, where the first
- * level has 4096 entries, and the second level has 256 entries.  Each entry
- * is one 32-bit word.  Most of the bits in the second level entry are used
- * by hardware, and there aren't any "accessed" and "dirty" bits.
- *
- * Linux on the other hand has a three level page table structure, which can
- * be wrapped to fit a two level page table structure easily - using the PGD
- * and PTE only.  However, Linux also expects one "PTE" table per page, and
- * at least a "dirty" bit.
- *
- * Therefore, we tweak the implementation slightly - we tell Linux that we
- * have 2048 entries in the first level, each of which is 8 bytes (iow, two
- * hardware pointers to the second level.)  The second level contains two
- * hardware PTE tables arranged contiguously, preceded by Linux versions
- * which contain the state information Linux needs.  We, therefore, end up
- * with 512 entries in the "PTE" level.
- *
- * This leads to the page tables having the following layout:
- *
- *    pgd             pte
- * |        |
- * +--------+
- * |        |       +------------+ +0
- * +- - - - +       | Linux pt 0 |
- * |        |       +------------+ +1024
- * +--------+ +0    | Linux pt 1 |
- * |        |-----> +------------+ +2048
- * +- - - - + +4    |  h/w pt 0  |
- * |        |-----> +------------+ +3072
- * +--------+ +8    |  h/w pt 1  |
- * |        |       +------------+ +4096
- *
- * See L_PTE_xxx below for definitions of bits in the "Linux pt", and
- * PTE_xxx for definitions of bits appearing in the "h/w pt".
- *
- * PMD_xxx definitions refer to bits in the first level page table.
- *
- * The "dirty" bit is emulated by only granting hardware write permission
- * iff the page is marked "writable" and "dirty" in the Linux PTE.  This
- * means that a write to a clean page will cause a permission fault, and
- * the Linux MM layer will mark the page dirty via handle_pte_fault().
- * For the hardware to notice the permission change, the TLB entry must
- * be flushed, and ptep_set_access_flags() does that for us.
- *
- * The "accessed" or "young" bit is emulated by a similar method; we only
- * allow accesses to the page if the "young" bit is set.  Accesses to the
- * page will cause a fault, and handle_pte_fault() will set the young bit
- * for us as long as the page is marked present in the corresponding Linux
- * PTE entry.  Again, ptep_set_access_flags() will ensure that the TLB is
- * up to date.
- *
- * However, when the "young" bit is cleared, we deny access to the page
- * by clearing the hardware PTE.  Currently Linux does not flush the TLB
- * for us in this case, which means the TLB will retain the transation
- * until either the TLB entry is evicted under pressure, or a context
- * switch which changes the user space mapping occurs.
- */
-#define PTRS_PER_PTE		512
-#define PTRS_PER_PMD		1
-#define PTRS_PER_PGD		2048
-
-#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
-#define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
-#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))
-
-/*
- * PMD_SHIFT determines the size of the area a second-level page table can map
- * PGDIR_SHIFT determines what a third-level page table entry can map
- */
-#define PMD_SHIFT		21
-#define PGDIR_SHIFT		21
-
 #define LIBRARY_TEXT_START	0x0c000000
 
 #ifndef __ASSEMBLY__
@@ -124,12 +53,6 @@ extern void __pgd_error(const char *file, int line, pgd_t);
 #define pte_ERROR(pte)		__pte_error(__FILE__, __LINE__, pte)
 #define pmd_ERROR(pmd)		__pmd_error(__FILE__, __LINE__, pmd)
 #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd)
-#endif /* !__ASSEMBLY__ */
-
-#define PMD_SIZE		(1UL << PMD_SHIFT)
-#define PMD_MASK		(~(PMD_SIZE-1))
-#define PGDIR_SIZE		(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK		(~(PGDIR_SIZE-1))
 
 /*
  * This is the lowest virtual address we can permit any user space
@@ -138,60 +61,6 @@ extern void __pgd_error(const char *file, int line, pgd_t);
  */
 #define FIRST_USER_ADDRESS	PAGE_SIZE
 
-#define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)
-
-/*
- * section address mask and size definitions.
- */
-#define SECTION_SHIFT		20
-#define SECTION_SIZE		(1UL << SECTION_SHIFT)
-#define SECTION_MASK		(~(SECTION_SIZE-1))
-
-/*
- * ARMv6 supersection address mask and size definitions.
- */
-#define SUPERSECTION_SHIFT	24
-#define SUPERSECTION_SIZE	(1UL << SUPERSECTION_SHIFT)
-#define SUPERSECTION_MASK	(~(SUPERSECTION_SIZE-1))
-
-/*
- * "Linux" PTE definitions.
- *
- * We keep two sets of PTEs - the hardware and the linux version.
- * This allows greater flexibility in the way we map the Linux bits
- * onto the hardware tables, and allows us to have YOUNG and DIRTY
- * bits.
- *
- * The PTE table pointer refers to the hardware entries; the "Linux"
- * entries are stored 1024 bytes below.
- */
-#define L_PTE_PRESENT		(_AT(pteval_t, 1) << 0)
-#define L_PTE_YOUNG		(_AT(pteval_t, 1) << 1)
-#define L_PTE_FILE		(_AT(pteval_t, 1) << 2)	/* only when !PRESENT */
-#define L_PTE_DIRTY		(_AT(pteval_t, 1) << 6)
-#define L_PTE_RDONLY		(_AT(pteval_t, 1) << 7)
-#define L_PTE_USER		(_AT(pteval_t, 1) << 8)
-#define L_PTE_XN		(_AT(pteval_t, 1) << 9)
-#define L_PTE_SHARED		(_AT(pteval_t, 1) << 10)	/* shared(v6), coherent(xsc3) */
-
-/*
- * These are the memory types, defined to be compatible with
- * pre-ARMv6 CPUs cacheable and bufferable bits:   XXCB
- */
-#define L_PTE_MT_UNCACHED	(_AT(pteval_t, 0x00) << 2)	/* 0000 */
-#define L_PTE_MT_BUFFERABLE	(_AT(pteval_t, 0x01) << 2)	/* 0001 */
-#define L_PTE_MT_WRITETHROUGH	(_AT(pteval_t, 0x02) << 2)	/* 0010 */
-#define L_PTE_MT_WRITEBACK	(_AT(pteval_t, 0x03) << 2)	/* 0011 */
-#define L_PTE_MT_MINICACHE	(_AT(pteval_t, 0x06) << 2)	/* 0110 (sa1100, xscale) */
-#define L_PTE_MT_WRITEALLOC	(_AT(pteval_t, 0x07) << 2)	/* 0111 */
-#define L_PTE_MT_DEV_SHARED	(_AT(pteval_t, 0x04) << 2)	/* 0100 */
-#define L_PTE_MT_DEV_NONSHARED	(_AT(pteval_t, 0x0c) << 2)	/* 1100 */
-#define L_PTE_MT_DEV_WC		(_AT(pteval_t, 0x09) << 2)	/* 1001 */
-#define L_PTE_MT_DEV_CACHED	(_AT(pteval_t, 0x0b) << 2)	/* 1011 */
-#define L_PTE_MT_MASK		(_AT(pteval_t, 0x0f) << 2)
-
-#ifndef __ASSEMBLY__
-
 /*
  * The pgprot_* and protection_map entries will be fixed up in runtime
  * to include the cachable and bufferable bits based on memory policy,



* [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (4 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 05/19] ARM: LPAE: Factor out 2-level page table definitions into separate files Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-03 17:13   ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 07/19] ARM: LPAE: Use a mask for physical addresses in page table entries Catalin Marinas
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch defines the (pte|pmd|pgd|pgprot)val_t as u32 and changes the
page table types to be based on these. The PMD bits are converted to the
corresponding type using the _AT macro.

The flush_pmd_entry()/clean_pmd_entry() argument was changed to (void *) so
that these functions can be used with both PGD and PMD pointers, avoiding
code duplication.
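
For reference, _AT() comes from include/linux/const.h and only applies the
cast when building C code, which is what keeps these headers usable from
assembly (paraphrased, not part of this patch):

	/*
	 * #ifdef __ASSEMBLY__
	 * #define _AT(T, X)	X
	 * #else
	 * #define _AT(T, X)	((T)(X))
	 * #endif
	 *
	 * so PMD_SECT_XN, (_AT(pmdval_t, 1) << 4), becomes a pmdval_t constant
	 * in C and a plain (1 << 4) in assembler.
	 */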

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/pgalloc.h              |    4 +-
 arch/arm/include/asm/pgtable-2level-hwdef.h |   82 +++++++++++++-------------
 arch/arm/include/asm/pgtable-2level-types.h |   17 +++--
 arch/arm/include/asm/tlbflush.h             |    4 +-
 arch/arm/mm/mm.h                            |    4 +-
 arch/arm/mm/mmu.c                           |    2 +-
 6 files changed, 58 insertions(+), 55 deletions(-)

diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 9763be0..841293e 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -103,9 +103,9 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
 }
 
 static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t pte,
-	unsigned long prot)
+				  pmdval_t prot)
 {
-	unsigned long pmdval = (pte + PTE_HWTABLE_OFF) | prot;
+	pmdval_t pmdval = (pte + PTE_HWTABLE_OFF) | prot;
 	pmdp[0] = __pmd(pmdval);
 	pmdp[1] = __pmd(pmdval + 256 * sizeof(pte_t));
 	flush_pmd_entry(pmdp);
diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h
index 436529c..2b52c40 100644
--- a/arch/arm/include/asm/pgtable-2level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-2level-hwdef.h
@@ -16,29 +16,29 @@
  * + Level 1 descriptor (PMD)
  *   - common
  */
-#define PMD_TYPE_MASK		(3 << 0)
-#define PMD_TYPE_FAULT		(0 << 0)
-#define PMD_TYPE_TABLE		(1 << 0)
-#define PMD_TYPE_SECT		(2 << 0)
-#define PMD_BIT4		(1 << 4)
-#define PMD_DOMAIN(x)		((x) << 5)
-#define PMD_PROTECTION		(1 << 9)	/* v5 */
+#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
+#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
+#define PMD_TYPE_TABLE		(_AT(pmdval_t, 1) << 0)
+#define PMD_TYPE_SECT		(_AT(pmdval_t, 2) << 0)
+#define PMD_BIT4		(_AT(pmdval_t, 1) << 4)
+#define PMD_DOMAIN(x)		(_AT(pmdval_t, (x)) << 5)
+#define PMD_PROTECTION		(_AT(pmdval_t, 1) << 9)		/* v5 */
 /*
  *   - section
  */
-#define PMD_SECT_BUFFERABLE	(1 << 2)
-#define PMD_SECT_CACHEABLE	(1 << 3)
-#define PMD_SECT_XN		(1 << 4)	/* v6 */
-#define PMD_SECT_AP_WRITE	(1 << 10)
-#define PMD_SECT_AP_READ	(1 << 11)
-#define PMD_SECT_TEX(x)		((x) << 12)	/* v5 */
-#define PMD_SECT_APX		(1 << 15)	/* v6 */
-#define PMD_SECT_S		(1 << 16)	/* v6 */
-#define PMD_SECT_nG		(1 << 17)	/* v6 */
-#define PMD_SECT_SUPER		(1 << 18)	/* v6 */
-#define PMD_SECT_AF		(0)
+#define PMD_SECT_BUFFERABLE	(_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_CACHEABLE	(_AT(pmdval_t, 1) << 3)
+#define PMD_SECT_XN		(_AT(pmdval_t, 1) << 4)		/* v6 */
+#define PMD_SECT_AP_WRITE	(_AT(pmdval_t, 1) << 10)
+#define PMD_SECT_AP_READ	(_AT(pmdval_t, 1) << 11)
+#define PMD_SECT_TEX(x)		(_AT(pmdval_t, (x)) << 12)	/* v5 */
+#define PMD_SECT_APX		(_AT(pmdval_t, 1) << 15)	/* v6 */
+#define PMD_SECT_S		(_AT(pmdval_t, 1) << 16)	/* v6 */
+#define PMD_SECT_nG		(_AT(pmdval_t, 1) << 17)	/* v6 */
+#define PMD_SECT_SUPER		(_AT(pmdval_t, 1) << 18)	/* v6 */
+#define PMD_SECT_AF		(_AT(pmdval_t, 0))
 
-#define PMD_SECT_UNCACHED	(0)
+#define PMD_SECT_UNCACHED	(_AT(pmdval_t, 0))
 #define PMD_SECT_BUFFERED	(PMD_SECT_BUFFERABLE)
 #define PMD_SECT_WT		(PMD_SECT_CACHEABLE)
 #define PMD_SECT_WB		(PMD_SECT_CACHEABLE | PMD_SECT_BUFFERABLE)
@@ -54,38 +54,38 @@
  * + Level 2 descriptor (PTE)
  *   - common
  */
-#define PTE_TYPE_MASK		(3 << 0)
-#define PTE_TYPE_FAULT		(0 << 0)
-#define PTE_TYPE_LARGE		(1 << 0)
-#define PTE_TYPE_SMALL		(2 << 0)
-#define PTE_TYPE_EXT		(3 << 0)	/* v5 */
-#define PTE_BUFFERABLE		(1 << 2)
-#define PTE_CACHEABLE		(1 << 3)
+#define PTE_TYPE_MASK		(_AT(pteval_t, 3) << 0)
+#define PTE_TYPE_FAULT		(_AT(pteval_t, 0) << 0)
+#define PTE_TYPE_LARGE		(_AT(pteval_t, 1) << 0)
+#define PTE_TYPE_SMALL		(_AT(pteval_t, 2) << 0)
+#define PTE_TYPE_EXT		(_AT(pteval_t, 3) << 0)		/* v5 */
+#define PTE_BUFFERABLE		(_AT(pteval_t, 1) << 2)
+#define PTE_CACHEABLE		(_AT(pteval_t, 1) << 3)
 
 /*
  *   - extended small page/tiny page
  */
-#define PTE_EXT_XN		(1 << 0)	/* v6 */
-#define PTE_EXT_AP_MASK		(3 << 4)
-#define PTE_EXT_AP0		(1 << 4)
-#define PTE_EXT_AP1		(2 << 4)
-#define PTE_EXT_AP_UNO_SRO	(0 << 4)
+#define PTE_EXT_XN		(_AT(pteval_t, 1) << 0)		/* v6 */
+#define PTE_EXT_AP_MASK		(_AT(pteval_t, 3) << 4)
+#define PTE_EXT_AP0		(_AT(pteval_t, 1) << 4)
+#define PTE_EXT_AP1		(_AT(pteval_t, 2) << 4)
+#define PTE_EXT_AP_UNO_SRO	(_AT(pteval_t, 0) << 4)
 #define PTE_EXT_AP_UNO_SRW	(PTE_EXT_AP0)
 #define PTE_EXT_AP_URO_SRW	(PTE_EXT_AP1)
 #define PTE_EXT_AP_URW_SRW	(PTE_EXT_AP1|PTE_EXT_AP0)
-#define PTE_EXT_TEX(x)		((x) << 6)	/* v5 */
-#define PTE_EXT_APX		(1 << 9)	/* v6 */
-#define PTE_EXT_COHERENT	(1 << 9)	/* XScale3 */
-#define PTE_EXT_SHARED		(1 << 10)	/* v6 */
-#define PTE_EXT_NG		(1 << 11)	/* v6 */
+#define PTE_EXT_TEX(x)		(_AT(pteval_t, (x)) << 6)	/* v5 */
+#define PTE_EXT_APX		(_AT(pteval_t, 1) << 9)		/* v6 */
+#define PTE_EXT_COHERENT	(_AT(pteval_t, 1) << 9)		/* XScale3 */
+#define PTE_EXT_SHARED		(_AT(pteval_t, 1) << 10)	/* v6 */
+#define PTE_EXT_NG		(_AT(pteval_t, 1) << 11)	/* v6 */
 
 /*
  *   - small page
  */
-#define PTE_SMALL_AP_MASK	(0xff << 4)
-#define PTE_SMALL_AP_UNO_SRO	(0x00 << 4)
-#define PTE_SMALL_AP_UNO_SRW	(0x55 << 4)
-#define PTE_SMALL_AP_URO_SRW	(0xaa << 4)
-#define PTE_SMALL_AP_URW_SRW	(0xff << 4)
+#define PTE_SMALL_AP_MASK	(_AT(pteval_t, 0xff) << 4)
+#define PTE_SMALL_AP_UNO_SRO	(_AT(pteval_t, 0x00) << 4)
+#define PTE_SMALL_AP_UNO_SRW	(_AT(pteval_t, 0x55) << 4)
+#define PTE_SMALL_AP_URO_SRW	(_AT(pteval_t, 0xaa) << 4)
+#define PTE_SMALL_AP_URW_SRW	(_AT(pteval_t, 0xff) << 4)
 
 #endif
diff --git a/arch/arm/include/asm/pgtable-2level-types.h b/arch/arm/include/asm/pgtable-2level-types.h
index 8ff6941..a4a4067 100644
--- a/arch/arm/include/asm/pgtable-2level-types.h
+++ b/arch/arm/include/asm/pgtable-2level-types.h
@@ -19,7 +19,10 @@
 #ifndef _ASM_PGTABLE_2LEVEL_TYPES_H
 #define _ASM_PGTABLE_2LEVEL_TYPES_H
 
-typedef unsigned long pteval_t;
+typedef u32 pteval_t;
+typedef u32 pmdval_t;
+typedef u32 pgdval_t;
+typedef u32 pgprotval_t;
 
 #undef STRICT_MM_TYPECHECKS
 
@@ -28,9 +31,9 @@ typedef unsigned long pteval_t;
  * These are used to make use of C type-checking..
  */
 typedef struct { pteval_t pte; } pte_t;
-typedef struct { unsigned long pmd; } pmd_t;
-typedef struct { unsigned long pgd[2]; } pgd_t;
-typedef struct { unsigned long pgprot; } pgprot_t;
+typedef struct { pmdval_t pmd; } pmd_t;
+typedef struct { pgdval_t pgd[2]; } pgd_t;
+typedef struct { pgprotval_t pgprot; } pgprot_t;
 
 #define pte_val(x)      ((x).pte)
 #define pmd_val(x)      ((x).pmd)
@@ -46,9 +49,9 @@ typedef struct { unsigned long pgprot; } pgprot_t;
  * .. while these make it easier on the compiler
  */
 typedef pteval_t pte_t;
-typedef unsigned long pmd_t;
-typedef unsigned long pgd_t[2];
-typedef unsigned long pgprot_t;
+typedef pmdval_t pmd_t;
+typedef pgdval_t pgd_t[2];
+typedef pgprotval_t pgprot_t;
 
 #define pte_val(x)      (x)
 #define pmd_val(x)      (x)
diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index ce7378e..8746b9a 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -514,7 +514,7 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
  *	these operations.  This is typically used when we are removing
  *	PMD entries.
  */
-static inline void flush_pmd_entry(pmd_t *pmd)
+static inline void flush_pmd_entry(void *pmd)
 {
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
@@ -530,7 +530,7 @@ static inline void flush_pmd_entry(pmd_t *pmd)
 		dsb();
 }
 
-static inline void clean_pmd_entry(pmd_t *pmd)
+static inline void clean_pmd_entry(void *pmd)
 {
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
diff --git a/arch/arm/mm/mm.h b/arch/arm/mm/mm.h
index 36960df..794e83e 100644
--- a/arch/arm/mm/mm.h
+++ b/arch/arm/mm/mm.h
@@ -17,8 +17,8 @@ static inline pmd_t *pmd_off_k(unsigned long virt)
 
 struct mem_type {
 	pteval_t prot_pte;
-	unsigned int prot_l1;
-	unsigned int prot_sect;
+	pgprotval_t prot_l1;
+	pgprotval_t prot_sect;
 	unsigned int domain;
 };
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 195a31e..57f9688 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -290,7 +290,7 @@ static void __init build_mem_type_table(void)
 {
 	struct cachepolicy *cp;
 	unsigned int cr = get_cr();
-	unsigned int user_pgprot, kern_pgprot, vecs_pgprot;
+	pgprotval_t user_pgprot, kern_pgprot, vecs_pgprot;
 	int cpu_arch = cpu_architecture();
 	int i;
 



* [PATCH v4 07/19] ARM: LPAE: Use a mask for physical addresses in page table entries
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (5 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32 Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions Catalin Marinas
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

With LPAE, the physical address mask is 40-bit while the page table
entry is 64-bit. This patch introduces PHYS_MASK for the 2-level page
table format, defined as ~0UL.
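
The (s32)PAGE_MASK cast below is what keeps this correct once pmd_val()
becomes 64-bit; a sketch of the arithmetic (the LPAE PHYS_MASK value is an
assumption based on the 3-level patches, not defined here):

	/*
	 * PAGE_MASK is ~(PAGE_SIZE - 1) = 0xfffff000 as a 32-bit value.
	 *   classic: pmd_val() is 32-bit, PHYS_MASK = ~0UL            -> bits [31:12]
	 *   LPAE:    pmd_val() is 64-bit, PHYS_MASK = (1ULL << 40) - 1 -> bits [39:12]
	 * Casting PAGE_MASK to (s32) sign-extends it to 0xfffffffffffff000 in
	 * the 64-bit case, so the high physical address bits are not cleared
	 * by the AND.
	 */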

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/pgtable-2level-hwdef.h |    2 ++
 arch/arm/include/asm/pgtable.h              |    6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h
index 2b52c40..5cfba15 100644
--- a/arch/arm/include/asm/pgtable-2level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-2level-hwdef.h
@@ -88,4 +88,6 @@
 #define PTE_SMALL_AP_URO_SRW	(_AT(pteval_t, 0xaa) << 4)
 #define PTE_SMALL_AP_URW_SRW	(_AT(pteval_t, 0xff) << 4)
 
+#define PHYS_MASK		(~0UL)
+
 #endif
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 218bdea..e35941d 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -195,10 +195,10 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
-	return __va(pmd_val(pmd) & PAGE_MASK);
+	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
 }
 
-#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd)))
+#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
 
 /* we don't need complex calculations here as the pmd is folded into the pgd */
 #define pmd_addr_end(addr,end)	(end)
@@ -219,7 +219,7 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 #define pte_offset_map(pmd,addr)	(__pte_map(pmd) + pte_index(addr))
 #define pte_unmap(pte)			__pte_unmap(pte)
 
-#define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
+#define pte_pfn(pte)		((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
 #define pfn_pte(pfn,prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))



* [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (6 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 07/19] ARM: LPAE: Use a mask for physical addresses in page table entries Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 21:26   ` Nick Piggin
  2011-02-03 17:11   ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format Catalin Marinas
                   ` (10 subsequent siblings)
  18 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch introduces the pgtable-3level*.h files with definitions
specific to the LPAE page table format (3 levels of page tables).

Each table is 4KB and has 512 64-bit entries. An entry can point to a
40-bit physical address. The young, write and exec software bits share
the corresponding hardware bits (negated). Other software bits use spare
bits in the PTE.

The patch also changes some variable types from unsigned long or int to
pteval_t or pgprot_t.
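
As a rough illustration of the geometry described above (assumptions only,
not patch code): with 512 eight-byte entries per 4KB table and a 32-bit
input address, the three translation levels split the virtual address as
sketched below. The helper names are invented for this example.

#include <assert.h>
#include <stdint.h>

#define PTRS_PER_PTE	512	/* 4KB table / 8-byte entries */
#define PTRS_PER_PMD	512
#define PTRS_PER_PGD	4	/* 32-bit input range -> only 4 of 512 slots used */

static unsigned int pgd_idx(uint32_t va) { return va >> 30; }			/* 1GB per PGD entry */
static unsigned int pmd_idx(uint32_t va) { return (va >> 21) & 0x1ff; }	/* 2MB per PMD entry */
static unsigned int pte_idx(uint32_t va) { return (va >> 12) & 0x1ff; }	/* 4KB pages */

int main(void)
{
	assert(PTRS_PER_PTE * sizeof(uint64_t) == 4096);	/* each level occupies one page */
	assert(pgd_idx(0xc0000000u) == 3);
	assert(pmd_idx(0xc0000000u) == 0 && pte_idx(0xc0001000u) == 1);
	return 0;
}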

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/page.h                 |    4 +
 arch/arm/include/asm/pgtable-3level-hwdef.h |   81 +++++++++++++++++++++
 arch/arm/include/asm/pgtable-3level-types.h |   68 ++++++++++++++++++
 arch/arm/include/asm/pgtable-3level.h       |  101 +++++++++++++++++++++++++++
 arch/arm/include/asm/pgtable-hwdef.h        |    4 +
 arch/arm/include/asm/pgtable.h              |    4 +
 6 files changed, 262 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/include/asm/pgtable-3level-hwdef.h
 create mode 100644 arch/arm/include/asm/pgtable-3level-types.h
 create mode 100644 arch/arm/include/asm/pgtable-3level.h

diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index 3848105..e5124db 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -151,7 +151,11 @@ extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
 #define clear_page(page)	memset((void *)(page), 0, PAGE_SIZE)
 extern void copy_page(void *to, const void *from);
 
+#ifdef CONFIG_ARM_LPAE
+#include <asm/pgtable-3level-types.h>
+#else
 #include <asm/pgtable-2level-types.h>
+#endif
 
 #endif /* CONFIG_MMU */
 
diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h
new file mode 100644
index 0000000..9e1fd78
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
@@ -0,0 +1,81 @@
+/*
+ * arch/arm/include/asm/pgtable-3level-hwdef.h
+ *
+ * Copyright (C) 2011 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef _ASM_PGTABLE_3LEVEL_HWDEF_H
+#define _ASM_PGTABLE_3LEVEL_HWDEF_H
+
+/*
+ * Hardware page table definitions.
+ *
+ * + Level 1/2 descriptor
+ *   - common
+ */
+#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
+#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)
+#define PMD_TYPE_TABLE		(_AT(pmdval_t, 3) << 0)
+#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
+#define PMD_BIT4		(_AT(pmdval_t, 0))
+#define PMD_DOMAIN(x)		(_AT(pmdval_t, 0))
+
+/*
+ *   - section
+ */
+#define PMD_SECT_BUFFERABLE	(_AT(pmdval_t, 1) << 2)
+#define PMD_SECT_CACHEABLE	(_AT(pmdval_t, 1) << 3)
+#define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
+#define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
+#define PMD_SECT_nG		(_AT(pmdval_t, 1) << 11)
+#ifdef __ASSEMBLY__
+/* avoid 'shift count out of range' warning */
+#define PMD_SECT_XN		(0)
+#else
+#define PMD_SECT_XN		((pmdval_t)1 << 54)
+#endif
+#define PMD_SECT_AP_WRITE	(_AT(pmdval_t, 0))
+#define PMD_SECT_AP_READ	(_AT(pmdval_t, 0))
+#define PMD_SECT_TEX(x)		(_AT(pmdval_t, 0))
+
+/*
+ * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
+ */
+#define PMD_SECT_UNCACHED	(_AT(pteval_t, 0) << 2)	/* strongly ordered */
+#define PMD_SECT_BUFFERED	(_AT(pteval_t, 1) << 2)	/* normal non-cacheable */
+#define PMD_SECT_WT		(_AT(pteval_t, 2) << 2)	/* normal inner write-through */
+#define PMD_SECT_WB		(_AT(pteval_t, 3) << 2)	/* normal inner write-back */
+#define PMD_SECT_WBWA		(_AT(pteval_t, 7) << 2)	/* normal inner write-alloc */
+
+/*
+ * + Level 3 descriptor (PTE)
+ */
+#define PTE_TYPE_MASK		(_AT(pteval_t, 3) << 0)
+#define PTE_TYPE_FAULT		(_AT(pteval_t, 0) << 0)
+#define PTE_TYPE_PAGE		(_AT(pteval_t, 3) << 0)
+#define PTE_BUFFERABLE		(_AT(pteval_t, 1) << 2)		/* AttrIndx[0] */
+#define PTE_CACHEABLE		(_AT(pteval_t, 1) << 3)		/* AttrIndx[1] */
+#define PTE_EXT_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
+#define PTE_EXT_AF		(_AT(pteval_t, 1) << 10)	/* Access Flag */
+#define PTE_EXT_NG		(_AT(pteval_t, 1) << 11)	/* nG */
+#define PTE_EXT_XN		(_AT(pteval_t, 1) << 54)	/* XN */
+
+/*
+ * 40-bit physical address supported.
+ */
+#define PHYS_MASK_SHIFT		(40)
+#define PHYS_MASK		((1ULL << PHYS_MASK_SHIFT) - 1)
+
+#endif
diff --git a/arch/arm/include/asm/pgtable-3level-types.h b/arch/arm/include/asm/pgtable-3level-types.h
new file mode 100644
index 0000000..a3dd5cf
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-3level-types.h
@@ -0,0 +1,68 @@
+/*
+ * arch/arm/include/asm/pgtable-3level-types.h
+ *
+ * Copyright (C) 2011 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef _ASM_PGTABLE_3LEVEL_TYPES_H
+#define _ASM_PGTABLE_3LEVEL_TYPES_H
+
+typedef u64 pteval_t;
+typedef u64 pmdval_t;
+typedef u64 pgdval_t;
+typedef u64 pgprotval_t;
+
+#undef STRICT_MM_TYPECHECKS
+
+#ifdef STRICT_MM_TYPECHECKS
+
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { pteval_t pte; } pte_t;
+typedef struct { pmdval_t pmd; } pmd_t;
+typedef struct { pgdval_t pgd; } pgd_t;
+typedef struct { pgprotval_t pgprot; } pgprot_t;
+
+#define pte_val(x)      ((x).pte)
+#define pmd_val(x)      ((x).pmd)
+#define pgd_val(x)	((x).pgd)
+#define pgprot_val(x)   ((x).pgprot)
+
+#define __pte(x)        ((pte_t) { (x) } )
+#define __pmd(x)        ((pmd_t) { (x) } )
+#define __pgd(x)	((pgd_t) { (x) } )
+#define __pgprot(x)     ((pgprot_t) { (x) } )
+
+#else	/* !STRICT_MM_TYPECHECKS */
+
+typedef pteval_t pte_t;
+typedef pmdval_t pmd_t;
+typedef pgdval_t pgd_t;
+typedef pgprotval_t pgprot_t;
+
+#define pte_val(x)	(x)
+#define pmd_val(x)	(x)
+#define pgd_val(x)	(x)
+#define pgprot_val(x)	(x)
+
+#define __pte(x)	(x)
+#define __pmd(x)	(x)
+#define __pgd(x)	(x)
+#define __pgprot(x)	(x)
+
+#endif	/* STRICT_MM_TYPECHECKS */
+
+#endif	/* _ASM_PGTABLE_3LEVEL_TYPES_H */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
new file mode 100644
index 0000000..ac45358
--- /dev/null
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -0,0 +1,101 @@
+/*
+ * arch/arm/include/asm/pgtable-3level.h
+ *
+ * Copyright (C) 2011 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef _ASM_PGTABLE_3LEVEL_H
+#define _ASM_PGTABLE_3LEVEL_H
+
+/*
+ * With LPAE, there are 3 levels of page tables. Each level has 512 entries of
+ * 8 bytes each, occupying a 4K page. The first level table covers a range of
+ * 512GB, each entry representing 1GB. Since we are limited to 4GB input
+ * address range, only 4 entries in the PGD are used.
+ *
+ * There are enough spare bits in a page table entry for the kernel specific
+ * state.
+ */
+#define PTRS_PER_PTE		512
+#define PTRS_PER_PMD		512
+#define PTRS_PER_PGD		4
+
+#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
+#define PTE_HWTABLE_OFF		(0)
+#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u64))
+
+/*
+ * PGDIR_SHIFT determines the size a top-level page table entry can map.
+ */
+#define PGDIR_SHIFT		30
+
+/*
+ * PMD_SHIFT determines the size a middle-level page table entry can map.
+ */
+#define PMD_SHIFT		21
+
+#define PMD_SIZE		(1UL << PMD_SHIFT)
+#define PMD_MASK		(~(PMD_SIZE-1))
+#define PGDIR_SIZE		(1UL << PGDIR_SHIFT)
+#define PGDIR_MASK		(~(PGDIR_SIZE-1))
+
+/*
+ * section address mask and size definitions.
+ */
+#define SECTION_SHIFT		21
+#define SECTION_SIZE		(1UL << SECTION_SHIFT)
+#define SECTION_MASK		(~(SECTION_SIZE-1))
+
+#define USER_PTRS_PER_PGD	(PAGE_OFFSET / PGDIR_SIZE)
+
+/*
+ * "Linux" PTE definitions for LPAE.
+ *
+ * These bits overlap with the hardware bits but the naming is preserved for
+ * consistency with the classic page table format.
+ */
+#define L_PTE_PRESENT		(_AT(pteval_t, 3) << 0)		/* Valid */
+#define L_PTE_FILE		(_AT(pteval_t, 1) << 2)		/* only when !PRESENT */
+#define L_PTE_BUFFERABLE	(_AT(pteval_t, 1) << 2)		/* AttrIndx[0] */
+#define L_PTE_CACHEABLE		(_AT(pteval_t, 1) << 3)		/* AttrIndx[1] */
+#define L_PTE_USER		(_AT(pteval_t, 1) << 6)		/* AP[1] */
+#define L_PTE_RDONLY		(_AT(pteval_t, 1) << 7)		/* AP[2] */
+#define L_PTE_SHARED		(_AT(pteval_t, 3) << 8)		/* SH[1:0], inner shareable */
+#define L_PTE_YOUNG		(_AT(pteval_t, 1) << 10)	/* AF */
+#define L_PTE_XN		(_AT(pteval_t, 1) << 54)	/* XN */
+#define L_PTE_DIRTY		(_AT(pteval_t, 1) << 55)	/* unused */
+#define L_PTE_SPECIAL		(_AT(pteval_t, 1) << 56)	/* unused */
+
+/*
+ * To be used in assembly code with the upper page attributes.
+ */
+#define L_PTE_XN_HIGH		(1 << (54 - 32))
+#define L_PTE_DIRTY_HIGH	(1 << (55 - 32))
+
+/*
+ * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
+ */
+#define L_PTE_MT_UNCACHED	(_AT(pteval_t, 0) << 2)	/* strongly ordered */
+#define L_PTE_MT_BUFFERABLE	(_AT(pteval_t, 1) << 2)	/* normal non-cacheable */
+#define L_PTE_MT_WRITETHROUGH	(_AT(pteval_t, 2) << 2)	/* normal inner write-through */
+#define L_PTE_MT_WRITEBACK	(_AT(pteval_t, 3) << 2)	/* normal inner write-back */
+#define L_PTE_MT_WRITEALLOC	(_AT(pteval_t, 7) << 2)	/* normal inner write-alloc */
+#define L_PTE_MT_DEV_SHARED	(_AT(pteval_t, 4) << 2)	/* device */
+#define L_PTE_MT_DEV_NONSHARED	(_AT(pteval_t, 4) << 2)	/* device */
+#define L_PTE_MT_DEV_WC		(_AT(pteval_t, 1) << 2)	/* normal non-cacheable */
+#define L_PTE_MT_DEV_CACHED	(_AT(pteval_t, 3) << 2)	/* normal inner write-back */
+#define L_PTE_MT_MASK		(_AT(pteval_t, 7) << 2)
+
+#endif /* _ASM_PGTABLE_3LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-hwdef.h b/arch/arm/include/asm/pgtable-hwdef.h
index 1831111..8426229 100644
--- a/arch/arm/include/asm/pgtable-hwdef.h
+++ b/arch/arm/include/asm/pgtable-hwdef.h
@@ -10,6 +10,10 @@
 #ifndef _ASMARM_PGTABLE_HWDEF_H
 #define _ASMARM_PGTABLE_HWDEF_H
 
+#ifdef CONFIG_ARM_LPAE
+#include <asm/pgtable-3level-hwdef.h>
+#else
 #include <asm/pgtable-2level-hwdef.h>
+#endif
 
 #endif
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index e35941d..b474478 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -24,7 +24,11 @@
 #include <mach/vmalloc.h>
 #include <asm/pgtable-hwdef.h>
 
+#ifdef CONFIG_ARM_LPAE
+#include <asm/pgtable-3level.h>
+#else
 #include <asm/pgtable-2level.h>
+#endif
 
 /*
  * Just any arbitrary offset to the start of the vmalloc VM area: the


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (7 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-03 17:09   ` Catalin Marinas
  2011-02-03 17:56   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 10/19] ARM: LPAE: MMU setup for the 3-level page table format Catalin Marinas
                   ` (9 subsequent siblings)
  18 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch modifies the pgd/pmd/pte manipulation functions to support
the 3-level page table format. Since there is no need for an 'ext'
argument to cpu_set_pte_ext(), this patch conditionally defines a
different prototype for this function when CONFIG_ARM_LPAE.

The patch also introduces the L_PGD_SWAPPER flag to mark pgd entries
pointing to pmd tables pre-allocated in the swapper_pg_dir and avoid
trying to free them at run-time. This flag is 0 with the classic page
table format.
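
A minimal sketch of the L_PGD_SWAPPER idea (assumed helper names and
addresses, not the patch itself): software bit 55 tags PGD entries whose PMD
tables were pre-allocated from swapper_pg_dir, so a pgd_free()-style routine
can tell them apart from per-process tables that it may free.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pgdval_t;

#define PMD_TYPE_TABLE	((pgdval_t)3 << 0)
#define L_PGD_SWAPPER	((pgdval_t)1 << 55)	/* software bit, ignored by the hardware walker */

/* Entry pointing to a PMD table borrowed from the kernel's swapper_pg_dir. */
static pgdval_t make_swapper_entry(uint64_t pmd_phys)
{
	return pmd_phys | PMD_TYPE_TABLE | L_PGD_SWAPPER;
}

/* pgd_free()-style decision: only per-process PMD tables may be freed. */
static bool may_free_pmd(pgdval_t pgd)
{
	return pgd && !(pgd & L_PGD_SWAPPER);
}

int main(void)
{
	return may_free_pmd(make_swapper_entry(0x80004000ULL)) ? 1 : 0;
}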

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/cpu-multi32.h    |    8 ++++
 arch/arm/include/asm/cpu-single.h     |    4 ++
 arch/arm/include/asm/pgalloc.h        |   24 ++++++++++++
 arch/arm/include/asm/pgtable-2level.h |    5 +++
 arch/arm/include/asm/pgtable-3level.h |    5 +++
 arch/arm/include/asm/pgtable.h        |   60 ++++++++++++++++++++++++++++++-
 arch/arm/include/asm/proc-fns.h       |   13 +++++++
 arch/arm/mm/ioremap.c                 |    8 +++--
 arch/arm/mm/pgd.c                     |   64 +++++++++++++++++++++++++--------
 arch/arm/mm/proc-v7.S                 |    8 ++++
 10 files changed, 180 insertions(+), 19 deletions(-)

diff --git a/arch/arm/include/asm/cpu-multi32.h b/arch/arm/include/asm/cpu-multi32.h
index e2b5b0b..985fcf5 100644
--- a/arch/arm/include/asm/cpu-multi32.h
+++ b/arch/arm/include/asm/cpu-multi32.h
@@ -57,7 +57,11 @@ extern struct processor {
 	 * Set a possibly extended PTE.  Non-extended PTEs should
 	 * ignore 'ext'.
 	 */
+#ifdef CONFIG_ARM_LPAE
+	void (*set_pte_ext)(pte_t *ptep, pte_t pte);
+#else
 	void (*set_pte_ext)(pte_t *ptep, pte_t pte, unsigned int ext);
+#endif
 } processor;
 
 #define cpu_proc_init()			processor._proc_init()
@@ -65,5 +69,9 @@ extern struct processor {
 #define cpu_reset(addr)			processor.reset(addr)
 #define cpu_do_idle()			processor._do_idle()
 #define cpu_dcache_clean_area(addr,sz)	processor.dcache_clean_area(addr,sz)
+#ifdef CONFIG_ARM_LPAE
+#define cpu_set_pte_ext(ptep,pte)	processor.set_pte_ext(ptep,pte)
+#else
 #define cpu_set_pte_ext(ptep,pte,ext)	processor.set_pte_ext(ptep,pte,ext)
+#endif
 #define cpu_do_switch_mm(pgd,mm)	processor.switch_mm(pgd,mm)
diff --git a/arch/arm/include/asm/cpu-single.h b/arch/arm/include/asm/cpu-single.h
index f073a6d..f436df2 100644
--- a/arch/arm/include/asm/cpu-single.h
+++ b/arch/arm/include/asm/cpu-single.h
@@ -40,5 +40,9 @@ extern void cpu_proc_fin(void);
 extern int cpu_do_idle(void);
 extern void cpu_dcache_clean_area(void *, int);
 extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
+#ifdef CONFIG_ARM_LPAE
+extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte);
+#else
 extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
+#endif
 extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h
index 841293e..9acaa0a 100644
--- a/arch/arm/include/asm/pgalloc.h
+++ b/arch/arm/include/asm/pgalloc.h
@@ -23,6 +23,26 @@
 #define _PAGE_USER_TABLE	(PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_USER))
 #define _PAGE_KERNEL_TABLE	(PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL))
 
+#ifdef CONFIG_ARM_LPAE
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pmd_t *)get_zeroed_page(GFP_KERNEL | __GFP_REPEAT);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
+	free_page((unsigned long)pmd);
+}
+
+static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmd)
+{
+	set_pgd(pgd, __pgd(__pa(pmd) | PMD_TYPE_TABLE));
+}
+
+#else	/* !CONFIG_ARM_LPAE */
+
 /*
  * Since we have only two-level page tables, these are trivial
  */
@@ -30,6 +50,8 @@
 #define pmd_free(mm, pmd)		do { } while (0)
 #define pgd_populate(mm,pmd,pte)	BUG()
 
+#endif	/* CONFIG_ARM_LPAE */
+
 extern pgd_t *pgd_alloc(struct mm_struct *mm);
 extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
 
@@ -107,7 +129,9 @@ static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t pte,
 {
 	pmdval_t pmdval = (pte + PTE_HWTABLE_OFF) | prot;
 	pmdp[0] = __pmd(pmdval);
+#ifndef CONFIG_ARM_LPAE
 	pmdp[1] = __pmd(pmdval + 256 * sizeof(pte_t));
+#endif
 	flush_pmd_entry(pmdp);
 }
 
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 470457e..c21924d 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -140,4 +140,9 @@
 #define L_PTE_MT_DEV_CACHED	(_AT(pteval_t, 0x0b) << 2)	/* 1011 */
 #define L_PTE_MT_MASK		(_AT(pteval_t, 0x0f) << 2)
 
+/*
+ * Software PGD flags.
+ */
+#define L_PGD_SWAPPER		(_AT(pgdval_t, 0))		/* compatibility with LPAE */
+
 #endif /* _ASM_PGTABLE_2LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index ac45358..14a3e28 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -98,4 +98,9 @@
 #define L_PTE_MT_DEV_CACHED	(_AT(pteval_t, 3) << 2)	/* normal inner write-back */
 #define L_PTE_MT_MASK		(_AT(pteval_t, 7) << 2)
 
+/*
+ * Software PGD flags.
+ */
+#define L_PGD_SWAPPER		(_AT(pgdval_t, 1) << 55)	/* swapper_pg_dir entry */
+
 #endif /* _ASM_PGTABLE_3LEVEL_H */
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index b474478..a833701 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -164,6 +164,31 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 /* to find an entry in a kernel page-table-directory */
 #define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)
 
+#ifdef CONFIG_ARM_LPAE
+
+#define pgd_none(pgd)		(!pgd_val(pgd))
+#define pgd_bad(pgd)		(!(pgd_val(pgd) & 2))
+#define pgd_present(pgd)	(pgd_val(pgd))
+
+#define pgd_clear(pgdp)			\
+	do {				\
+		*pgdp = __pgd(0);	\
+		clean_pmd_entry(pgdp);	\
+	} while (0)
+
+#define set_pgd(pgdp, pgd)		\
+	do {				\
+		*pgdp = pgd;		\
+		flush_pmd_entry(pgdp);	\
+	} while (0)
+
+static inline pmd_t *pgd_page_vaddr(pgd_t pgd)
+{
+	return __va(pgd_val(pgd) & PHYS_MASK & (s32)PAGE_MASK);
+}
+
+#else	/* !CONFIG_ARM_LPAE */
+
 /*
  * The "pgd_xxx()" functions here are trivial for a folded two-level
  * setup: the pgd is never bad, and a pmd always exists (as it's folded
@@ -175,12 +200,38 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 #define pgd_clear(pgdp)		do { } while (0)
 #define set_pgd(pgd,pgdp)	do { } while (0)
 
+#endif	/* CONFIG_ARM_LPAE */
 
 /* Find an entry in the second-level page table.. */
+#ifdef CONFIG_ARM_LPAE
+#define pmd_index(addr)		(((addr) >> PMD_SHIFT) & (PTRS_PER_PMD - 1))
+#define pmd_offset(pgdp, addr)	((pmd_t *)(pgd_page_vaddr(*(pgdp))) + \
+				 pmd_index(addr))
+#else
 #define pmd_offset(dir, addr)	((pmd_t *)(dir))
+#endif
 
 #define pmd_none(pmd)		(!pmd_val(pmd))
 #define pmd_present(pmd)	(pmd_val(pmd))
+
+#ifdef CONFIG_ARM_LPAE
+
+#define pmd_bad(pmd)		(!(pmd_val(pmd) & 2))
+
+#define copy_pmd(pmdpd,pmdps)		\
+	do {				\
+		*pmdpd = *pmdps;	\
+		flush_pmd_entry(pmdpd);	\
+	} while (0)
+
+#define pmd_clear(pmdp)			\
+	do {				\
+		*pmdp = __pmd(0);	\
+		clean_pmd_entry(pmdp);	\
+	} while (0)
+
+#else	/* !CONFIG_ARM_LPAE */
+
 #define pmd_bad(pmd)		(pmd_val(pmd) & 2)
 
 #define copy_pmd(pmdpd,pmdps)		\
@@ -197,6 +248,8 @@ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 		clean_pmd_entry(pmdp);	\
 	} while (0)
 
+#endif	/* CONFIG_ARM_LPAE */
+
 static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 {
 	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
@@ -229,9 +282,14 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page,prot)	pfn_pte(page_to_pfn(page), prot)
 
-#define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,pte,ext)
 #define pte_clear(mm,addr,ptep)	set_pte_ext(ptep, __pte(0), 0)
 
+#ifdef CONFIG_ARM_LPAE
+#define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,__pte(pte_val(pte)|(ext)))
+#else
+#define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,pte,ext)
+#endif
+
 #if __LINUX_ARM_ARCH__ < 6
 static inline void __sync_icache_dcache(pte_t pteval)
 {
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 8fdae9b..f00ae99 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -263,6 +263,18 @@
 
 #define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
 
+#ifdef CONFIG_ARM_LPAE
+#define cpu_get_pgd()	\
+	({						\
+		unsigned long pg, pg2;			\
+		__asm__("mrrc	p15, 0, %0, %1, c2"	\
+			: "=r" (pg), "=r" (pg2)		\
+			:				\
+			: "cc");			\
+		pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1);	\
+		(pgd_t *)phys_to_virt(pg);		\
+	})
+#else
 #define cpu_get_pgd()	\
 	({						\
 		unsigned long pg;			\
@@ -271,6 +283,7 @@
 		pg &= ~0x3fff;				\
 		(pgd_t *)phys_to_virt(pg);		\
 	})
+#endif
 
 #endif
 
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index ab50627..6bdf42c 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -64,7 +64,7 @@ void __check_kvm_seq(struct mm_struct *mm)
 	} while (seq != init_mm.context.kvm_seq);
 }
 
-#ifndef CONFIG_SMP
+#if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
 /*
  * Section support is unsafe on SMP - If you iounmap and ioremap a region,
  * the other CPUs will not see this change until their next context switch.
@@ -195,11 +195,13 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
 	unsigned long addr;
  	struct vm_struct * area;
 
+#ifndef CONFIG_ARM_LPAE
 	/*
 	 * High mappings must be supersection aligned
 	 */
 	if (pfn >= 0x100000 && (__pfn_to_phys(pfn) & ~SUPERSECTION_MASK))
 		return NULL;
+#endif
 
 	/*
 	 * Don't allow RAM to be mapped - this causes problems with ARMv6+
@@ -221,7 +223,7 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
  		return NULL;
  	addr = (unsigned long)area->addr;
 
-#ifndef CONFIG_SMP
+#if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
 	if (DOMAIN_IO == 0 &&
 	    (((cpu_architecture() >= CPU_ARCH_ARMv6) && (get_cr() & CR_XP)) ||
 	       cpu_is_xsc3()) && pfn >= 0x100000 &&
@@ -292,7 +294,7 @@ EXPORT_SYMBOL(__arm_ioremap);
 void __iounmap(volatile void __iomem *io_addr)
 {
 	void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
-#ifndef CONFIG_SMP
+#if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
 	struct vm_struct **p, *tmp;
 
 	/*
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 709244c..003587d 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -10,6 +10,7 @@
 #include <linux/mm.h>
 #include <linux/gfp.h>
 #include <linux/highmem.h>
+#include <linux/slab.h>
 
 #include <asm/pgalloc.h>
 #include <asm/page.h>
@@ -17,6 +18,14 @@
 
 #include "mm.h"
 
+#ifdef CONFIG_ARM_LPAE
+#define __pgd_alloc()	kmalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL)
+#define __pgd_free(pgd)	kfree(pgd)
+#else
+#define __pgd_alloc()	(pgd_t *)__get_free_pages(GFP_KERNEL, 2)
+#define __pgd_free(pgd)	free_pages((unsigned long)pgd, 2)
+#endif
+
 /*
  * need to get a 16k page for level 1
  */
@@ -26,7 +35,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	pmd_t *new_pmd, *init_pmd;
 	pte_t *new_pte, *init_pte;
 
-	new_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 2);
+	new_pgd = __pgd_alloc();
 	if (!new_pgd)
 		goto no_pgd;
 
@@ -41,12 +50,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 
 	clean_dcache_area(new_pgd, PTRS_PER_PGD * sizeof(pgd_t));
 
+#ifdef CONFIG_ARM_LPAE
+	/*
+	 * Allocate PMD table for modules and pkmap mappings.
+	 */
+	new_pmd = pmd_alloc(mm, new_pgd + pgd_index(MODULES_VADDR), 0);
+	if (!new_pmd)
+		goto no_pmd;
+#endif
+
 	if (!vectors_high()) {
 		/*
 		 * On ARM, first page must always be allocated since it
 		 * contains the machine vectors.
 		 */
-		new_pmd = pmd_alloc(mm, new_pgd, 0);
+		new_pmd = pmd_alloc(mm, new_pgd + pgd_index(0), 0);
 		if (!new_pmd)
 			goto no_pmd;
 
@@ -66,7 +84,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 no_pte:
 	pmd_free(mm, new_pmd);
 no_pmd:
-	free_pages((unsigned long)new_pgd, 2);
+	__pgd_free(new_pgd);
 no_pgd:
 	return NULL;
 }
@@ -80,20 +98,36 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
 	if (!pgd_base)
 		return;
 
-	pgd = pgd_base + pgd_index(0);
-	if (pgd_none_or_clear_bad(pgd))
-		goto no_pgd;
+	if (!vectors_high()) {
+		pgd = pgd_base + pgd_index(0);
+		if (pgd_none_or_clear_bad(pgd))
+			goto no_pgd;
 
-	pmd = pmd_offset(pgd, 0);
-	if (pmd_none_or_clear_bad(pmd))
-		goto no_pmd;
+		pmd = pmd_offset(pgd, 0);
+		if (pmd_none_or_clear_bad(pmd))
+			goto no_pmd;
 
-	pte = pmd_pgtable(*pmd);
-	pmd_clear(pmd);
-	pte_free(mm, pte);
+		pte = pmd_pgtable(*pmd);
+		pmd_clear(pmd);
+		pte_free(mm, pte);
 no_pmd:
-	pgd_clear(pgd);
-	pmd_free(mm, pmd);
+		pgd_clear(pgd);
+		pmd_free(mm, pmd);
+	}
 no_pgd:
-	free_pages((unsigned long) pgd_base, 2);
+#ifdef CONFIG_ARM_LPAE
+	/*
+	 * Free modules/pkmap or identity pmd tables.
+	 */
+	for (pgd = pgd_base; pgd < pgd_base + PTRS_PER_PGD; pgd++) {
+		if (pgd_none_or_clear_bad(pgd))
+			continue;
+		if (pgd_val(*pgd) & L_PGD_SWAPPER)
+			continue;
+		pmd = pmd_offset(pgd, 0);
+		pgd_clear(pgd);
+		pmd_free(mm, pmd);
+	}
+#endif
+	__pgd_free(pgd_base);
 }
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index bb0faa1..9be03a5 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -130,6 +130,13 @@ ENDPROC(cpu_v7_switch_mm)
  */
 ENTRY(cpu_v7_set_pte_ext)
 #ifdef CONFIG_MMU
+#ifdef CONFIG_ARM_LPAE
+	tst	r2, #L_PTE_PRESENT
+	beq	1f
+	tst	r3, #1 << (55 - 32)		@ L_PTE_DIRTY
+	orreq	r2, #L_PTE_RDONLY
+1:	strd	r2, r3, [r0]
+#else	/* !CONFIG_ARM_LPAE */
 	str	r1, [r0]			@ linux version
 
 	bic	r3, r1, #0x000003f0
@@ -162,6 +169,7 @@ ENTRY(cpu_v7_set_pte_ext)
  ARM(	str	r3, [r0, #2048]! )
  THUMB(	add	r0, r0, #2048 )
  THUMB(	str	r3, [r0] )
+#endif	/* CONFIG_ARM_LPAE */
 	mcr	p15, 0, r0, c7, c10, 1		@ flush_pte
 #endif
 	mov	pc, lr


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 10/19] ARM: LPAE: MMU setup for the 3-level page table format
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (8 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 11/19] ARM: LPAE: Add fault handling support Catalin Marinas
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch adds the MMU initialisation for the LPAE page table format.
The swapper_pg_dir size with LPAE is 5 rather than 4 pages. The
__v7_setup function configures the TTBRx split based on the PAGE_OFFSET
and sets the corresponding TTB control and MAIRx bits (similar to
PRRR/NMRR for TEX remapping). The 36-bit mappings (supersections) and
a few other memory types in mmu.c are conditionally compiled.
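
For illustration (a sketch with assumed values, not part of the patch): the
TTBCR.T1SZ setting mentioned above follows directly from PAGE_OFFSET, since
T1SZ selects how much of the 4GB input range is translated through TTBR1.

#include <stdint.h>
#include <stdio.h>

/* T1SZ = (PAGE_OFFSET >> 30) - 1: 0x80000000 -> 1, 0xc0000000 -> 2 */
static unsigned int t1sz_for(uint32_t page_offset)
{
	return (page_offset >> 30) - 1;
}

int main(void)
{
	printf("VMSPLIT_2G: T1SZ=%u\n", t1sz_for(0x80000000u));	/* TTBR1 covers the top 2GB */
	printf("VMSPLIT_3G: T1SZ=%u\n", t1sz_for(0xc0000000u));	/* TTBR1 covers the top 1GB */
	return 0;
}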

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/kernel/head.S    |  115 +++++++++++++++++++++++++++++++--------------
 arch/arm/mm/mmu.c         |   32 ++++++++++++-
 arch/arm/mm/proc-macros.S |    5 +-
 arch/arm/mm/proc-v7.S     |  104 ++++++++++++++++++++++++++++++++++++----
 4 files changed, 207 insertions(+), 49 deletions(-)

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index f17d9a0..d96986c 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -21,6 +21,7 @@
 #include <asm/memory.h>
 #include <asm/thread_info.h>
 #include <asm/system.h>
+#include <asm/pgtable.h>
 
 #ifdef CONFIG_DEBUG_LL
 #include <mach/debug-macro.S>
@@ -45,11 +46,20 @@
 #error KERNEL_RAM_VADDR must start at 0xXXXX8000
 #endif
 
+#ifdef CONFIG_ARM_LPAE
+	/* LPAE requires an additional page for the PGD */
+#define PG_DIR_SIZE	0x5000
+#define PMD_ORDER	3
+#else
+#define PG_DIR_SIZE	0x4000
+#define PMD_ORDER	2
+#endif
+
 	.globl	swapper_pg_dir
-	.equ	swapper_pg_dir, KERNEL_RAM_VADDR - 0x4000
+	.equ	swapper_pg_dir, KERNEL_RAM_VADDR - PG_DIR_SIZE
 
 	.macro	pgtbl, rd
-	ldr	\rd, =(KERNEL_RAM_PADDR - 0x4000)
+	ldr	\rd, =(KERNEL_RAM_PADDR - PG_DIR_SIZE)
 	.endm
 
 #ifdef CONFIG_XIP_KERNEL
@@ -136,11 +146,11 @@ __create_page_tables:
 	pgtbl	r4				@ page table address
 
 	/*
-	 * Clear the 16K level 1 swapper page table
+	 * Clear the swapper page table
 	 */
 	mov	r0, r4
 	mov	r3, #0
-	add	r6, r0, #0x4000
+	add	r6, r0, #PG_DIR_SIZE
 1:	str	r3, [r0], #4
 	str	r3, [r0], #4
 	str	r3, [r0], #4
@@ -148,6 +158,24 @@ __create_page_tables:
 	teq	r0, r6
 	bne	1b
 
+#ifdef CONFIG_ARM_LPAE
+	/*
+	 * Build the PGD table (first level) to point to the PMD table. A PGD
+	 * entry is 64-bit wide and the top 32 bits are 0.
+	 */
+	mov	r0, r4
+	add	r3, r4, #0x1000			@ first PMD table address
+	orr	r3, r3, #3			@ PGD block type
+	mov	r6, #4				@ PTRS_PER_PGD
+	mov	r7, #1 << (55 - 32)		@ L_PGD_SWAPPER
+1:	strd	r3, r7, [r0], #8		@ set PGD entry
+	add	r3, r3, #0x1000			@ next PMD table
+	subs	r6, r6, #1
+	bne	1b
+
+	add	r4, r4, #0x1000			@ point to the PMD tables
+#endif
+
 	ldr	r7, [r10, #PROCINFO_MM_MMUFLAGS] @ mm_mmuflags
 
 	/*
@@ -159,30 +187,30 @@ __create_page_tables:
 	sub	r0, r0, r3			@ virt->phys offset
 	add	r5, r5, r0			@ phys __enable_mmu
 	add	r6, r6, r0			@ phys __enable_mmu_end
-	mov	r5, r5, lsr #20
-	mov	r6, r6, lsr #20
+	mov	r5, r5, lsr #SECTION_SHIFT
+	mov	r6, r6, lsr #SECTION_SHIFT
 
-1:	orr	r3, r7, r5, lsl #20		@ flags + kernel base
-	str	r3, [r4, r5, lsl #2]		@ identity mapping
-	teq	r5, r6
-	addne	r5, r5, #1			@ next section
-	bne	1b
+1:	orr	r3, r7, r5, lsl #SECTION_SHIFT	@ flags + kernel base
+	str	r3, [r4, r5, lsl #PMD_ORDER]	@ identity mapping
+	cmp	r5, r6
+	addlo	r5, r5, #SECTION_SHIFT >> 20	@ next section
+	blo	1b
 
 	/*
 	 * Now setup the pagetables for our kernel direct
 	 * mapped region.
 	 */
 	mov	r3, pc
-	mov	r3, r3, lsr #20
-	orr	r3, r7, r3, lsl #20
-	add	r0, r4,  #(KERNEL_START & 0xff000000) >> 18
-	str	r3, [r0, #(KERNEL_START & 0x00f00000) >> 18]!
+	mov	r3, r3, lsr #SECTION_SHIFT
+	orr	r3, r7, r3, lsl #SECTION_SHIFT
+	add	r0, r4,  #(KERNEL_START & 0xff000000) >> (SECTION_SHIFT - PMD_ORDER)
+	str	r3, [r0, #(KERNEL_START & 0x00e00000) >> (SECTION_SHIFT - PMD_ORDER)]!
 	ldr	r6, =(KERNEL_END - 1)
-	add	r0, r0, #4
-	add	r6, r4, r6, lsr #18
+	add	r0, r0, #1 << PMD_ORDER
+	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ORDER)
 1:	cmp	r0, r6
-	add	r3, r3, #1 << 20
-	strls	r3, [r0], #4
+	add	r3, r3, #1 << SECTION_SHIFT
+	strls	r3, [r0], #1 << PMD_ORDER
 	bls	1b
 
 #ifdef CONFIG_XIP_KERNEL
@@ -193,11 +221,11 @@ __create_page_tables:
 	.if	(KERNEL_RAM_PADDR & 0x00f00000)
 	orr	r3, r3, #(KERNEL_RAM_PADDR & 0x00f00000)
 	.endif
-	add	r0, r4,  #(KERNEL_RAM_VADDR & 0xff000000) >> 18
-	str	r3, [r0, #(KERNEL_RAM_VADDR & 0x00f00000) >> 18]!
+	add	r0, r4,  #(KERNEL_RAM_VADDR & 0xff000000) >> (SECTION_SHIFT - PMD_ORDER)
+	str	r3, [r0, #(KERNEL_RAM_VADDR & 0x00f00000) >> (SECTION_SHIFT - PMD_ORDER)]!
 	ldr	r6, =(_end - 1)
 	add	r0, r0, #4
-	add	r6, r4, r6, lsr #18
+	add	r6, r4, r6, lsr #(SECTION_SHIFT - PMD_ORDER)
 1:	cmp	r0, r6
 	add	r3, r3, #1 << 20
 	strls	r3, [r0], #4
@@ -205,12 +233,13 @@ __create_page_tables:
 #endif
 
 	/*
-	 * Then map first 1MB of ram in case it contains our boot params.
+	 * Then map first section of RAM in case it contains our boot params.
+	 * It assumes that PAGE_OFFSET is 2MB-aligned.
 	 */
-	add	r0, r4, #PAGE_OFFSET >> 18
+	add	r0, r4, #PAGE_OFFSET >> (SECTION_SHIFT - PMD_ORDER)
 	orr	r6, r7, #(PHYS_OFFSET & 0xff000000)
-	.if	(PHYS_OFFSET & 0x00f00000)
-	orr	r6, r6, #(PHYS_OFFSET & 0x00f00000)
+	.if	(PHYS_OFFSET & 0x00e00000)
+	orr	r6, r6, #(PHYS_OFFSET & 0x00e00000)
 	.endif
 	str	r6, [r0]
 
@@ -223,21 +252,27 @@ __create_page_tables:
 	 */
 	addruart r7, r3
 
-	mov	r3, r3, lsr #20
-	mov	r3, r3, lsl #2
+	mov	r3, r3, lsr #SECTION_SHIFT
+	mov	r3, r3, lsl #PMD_ORDER
 
 	add	r0, r4, r3
 	rsb	r3, r3, #0x4000			@ PTRS_PER_PGD*sizeof(long)
 	cmp	r3, #0x0800			@ limit to 512MB
 	movhi	r3, #0x0800
 	add	r6, r0, r3
-	mov	r3, r7, lsr #20
+	mov	r3, r7, lsr #SECTION_SHIFT
 	ldr	r7, [r10, #PROCINFO_IO_MMUFLAGS] @ io_mmuflags
-	orr	r3, r7, r3, lsl #20
+	orr	r3, r7, r3, lsl #SECTION_SHIFT
+#ifdef CONFIG_ARM_LPAE
+	mov	r7, #1 << (54 - 32)		@ XN
+#endif
 1:	str	r3, [r0], #4
-	add	r3, r3, #1 << 20
-	teq	r0, r6
-	bne	1b
+#ifdef CONFIG_ARM_LPAE
+	str	r7, [r0], #4
+#endif
+	add	r3, r3, #1 << SECTION_SHIFT
+	cmp	r0, r6
+	blo	1b
 
 #else /* CONFIG_DEBUG_ICEDCC */
 	/* we don't need any serial debugging mappings for ICEDCC */
@@ -249,7 +284,7 @@ __create_page_tables:
 	 * If we're using the NetWinder or CATS, we also need to map
 	 * in the 16550-type serial port for the debug messages
 	 */
-	add	r0, r4, #0xff000000 >> 18
+	add	r0, r4, #0xff000000 >> (SECTION_SHIFT - PMD_ORDER)
 	orr	r3, r7, #0x7c000000
 	str	r3, [r0]
 #endif
@@ -259,13 +294,16 @@ __create_page_tables:
 	 * Similar reasons here - for debug.  This is
 	 * only for Acorn RiscPC architectures.
 	 */
-	add	r0, r4, #0x02000000 >> 18
+	add	r0, r4, #0x02000000 >> (SECTION_SHIFT - PMD_ORDER)
 	orr	r3, r7, #0x02000000
 	str	r3, [r0]
-	add	r0, r4, #0xd8000000 >> 18
+	add	r0, r4, #0xd8000000 >> (SECTION_SHIFT - PMD_ORDER)
 	str	r3, [r0]
 #endif
 #endif
+#ifdef CONFIG_ARM_LPAE
+	sub	r4, r4, #0x1000		@ point to the PGD table
+#endif
 	mov	pc, lr
 ENDPROC(__create_page_tables)
 	.ltorg
@@ -355,12 +393,17 @@ __enable_mmu:
 #ifdef CONFIG_CPU_ICACHE_DISABLE
 	bic	r0, r0, #CR_I
 #endif
+#ifdef CONFIG_ARM_LPAE
+	mov	r5, #0
+	mcrr	p15, 0, r4, r5, c2		@ load TTBR0
+#else
 	mov	r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \
 		      domain_val(DOMAIN_IO, DOMAIN_CLIENT))
 	mcr	p15, 0, r5, c3, c0, 0		@ load domain access register
 	mcr	p15, 0, r4, c2, c0, 0		@ load page table pointer
+#endif
 	b	__turn_mmu_on
 ENDPROC(__enable_mmu)
 
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 57f9688..251056a 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -152,6 +152,7 @@ static int __init early_nowrite(char *__unused)
 }
 early_param("nowb", early_nowrite);
 
+#ifndef CONFIG_ARM_LPAE
 static int __init early_ecc(char *p)
 {
 	if (memcmp(p, "on", 2) == 0)
@@ -161,6 +162,7 @@ static int __init early_ecc(char *p)
 	return 0;
 }
 early_param("ecc", early_ecc);
+#endif
 
 static int __init noalign_setup(char *__unused)
 {
@@ -230,10 +232,12 @@ static struct mem_type mem_types[] = {
 		.prot_sect = PMD_TYPE_SECT | PMD_SECT_XN,
 		.domain    = DOMAIN_KERNEL,
 	},
+#ifndef CONFIG_ARM_LPAE
 	[MT_MINICLEAN] = {
 		.prot_sect = PMD_TYPE_SECT | PMD_SECT_XN | PMD_SECT_MINICACHE,
 		.domain    = DOMAIN_KERNEL,
 	},
+#endif
 	[MT_LOW_VECTORS] = {
 		.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
 				L_PTE_RDONLY,
@@ -423,6 +427,7 @@ static void __init build_mem_type_table(void)
 	 * ARMv6 and above have extended page tables.
 	 */
 	if (cpu_arch >= CPU_ARCH_ARMv6 && (cr & CR_XP)) {
+#ifndef CONFIG_ARM_LPAE
 		/*
 		 * Mark cache clean areas and XIP ROM read only
 		 * from SVC mode and no access from userspace.
@@ -430,6 +435,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_ROM].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
 		mem_types[MT_MINICLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
 		mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE;
+#endif
 
 		if (is_smp()) {
 			/*
@@ -468,6 +474,18 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_BUFFERABLE;
 	}
 
+#ifdef CONFIG_ARM_LPAE
+	/*
+	 * Do not generate access flag faults for the kernel mappings.
+	 */
+	for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
+		mem_types[i].prot_pte |= PTE_EXT_AF;
+		mem_types[i].prot_sect |= PMD_SECT_AF;
+	}
+	kern_pgprot |= PTE_EXT_AF;
+	vecs_pgprot |= PTE_EXT_AF;
+#endif
+
 	for (i = 0; i < 16; i++) {
 		unsigned long v = pgprot_val(protection_map[i]);
 		protection_map[i] = __pgprot(v | user_pgprot);
@@ -584,6 +602,7 @@ static void __init alloc_init_section(pgd_t *pgd, unsigned long addr,
 	}
 }
 
+#ifndef CONFIG_ARM_LPAE
 static void __init create_36bit_mapping(struct map_desc *md,
 					const struct mem_type *type)
 {
@@ -642,6 +661,7 @@ static void __init create_36bit_mapping(struct map_desc *md,
 		pgd += SUPERSECTION_SIZE >> PGDIR_SHIFT;
 	} while (addr != end);
 }
+#endif	/* !CONFIG_ARM_LPAE */
 
 /*
  * Create the page directory entries and any necessary
@@ -672,6 +692,7 @@ static void __init create_mapping(struct map_desc *md)
 
 	type = &mem_types[md->type];
 
+#ifndef CONFIG_ARM_LPAE
 	/*
 	 * Catch 36-bit addresses
 	 */
@@ -679,6 +700,7 @@ static void __init create_mapping(struct map_desc *md)
 		create_36bit_mapping(md, type);
 		return;
 	}
+#endif
 
 	addr = md->virtual & PAGE_MASK;
 	phys = (unsigned long)__pfn_to_phys(md->pfn);
@@ -883,6 +905,14 @@ static inline void prepare_page_table(void)
 		pmd_clear(pmd_off_k(addr));
 }
 
+#ifdef CONFIG_ARM_LPAE
+/* the first page is reserved for pgd */
+#define SWAPPER_PG_DIR_SIZE	(PAGE_SIZE + \
+				 PTRS_PER_PGD * PTRS_PER_PMD * sizeof(pmd_t))
+#else
+#define SWAPPER_PG_DIR_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
+#endif
+
 /*
  * Reserve the special regions of memory
  */
@@ -892,7 +922,7 @@ void __init arm_mm_memblock_reserve(void)
 	 * Reserve the page tables.  These are already in use,
 	 * and can only be in node 0.
 	 */
-	memblock_reserve(__pa(swapper_pg_dir), PTRS_PER_PGD * sizeof(pgd_t));
+	memblock_reserve(__pa(swapper_pg_dir), SWAPPER_PG_DIR_SIZE);
 
 #ifdef CONFIG_SA1111
 	/*
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index e32fa49..90c680a 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -91,8 +91,9 @@
 #if L_PTE_SHARED != PTE_EXT_SHARED
 #error PTE shared bit mismatch
 #endif
-#if (L_PTE_XN+L_PTE_USER+L_PTE_RDONLY+L_PTE_DIRTY+L_PTE_YOUNG+\
-     L_PTE_FILE+L_PTE_PRESENT) > L_PTE_SHARED
+#if !defined (CONFIG_ARM_LPAE) && \
+	(L_PTE_XN+L_PTE_USER+L_PTE_RDONLY+L_PTE_DIRTY+L_PTE_YOUNG+\
+	 L_PTE_FILE+L_PTE_PRESENT) > L_PTE_SHARED
 #error Invalid Linux PTE bit settings
 #endif
 #endif	/* CONFIG_MMU */
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index 9be03a5..a22b89f 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -19,6 +19,19 @@
 
 #include "proc-macros.S"
 
+#ifdef CONFIG_ARM_LPAE
+#define TTB_IRGN_NC	(0 << 8)
+#define TTB_IRGN_WBWA	(1 << 8)
+#define TTB_IRGN_WT	(2 << 8)
+#define TTB_IRGN_WB	(3 << 8)
+#define TTB_RGN_NC	(0 << 10)
+#define TTB_RGN_OC_WBWA	(1 << 10)
+#define TTB_RGN_OC_WT	(2 << 10)
+#define TTB_RGN_OC_WB	(3 << 10)
+#define TTB_S		(3 << 12)
+#define TTB_NOS		(0)
+#define TTB_EAE		(1 << 31)
+#else
 #define TTB_S		(1 << 1)
 #define TTB_RGN_NC	(0 << 3)
 #define TTB_RGN_OC_WBWA	(1 << 3)
@@ -29,14 +42,15 @@
 #define TTB_IRGN_WBWA	((0 << 0) | (1 << 6))
 #define TTB_IRGN_WT	((1 << 0) | (0 << 6))
 #define TTB_IRGN_WB	((1 << 0) | (1 << 6))
+#endif
 
 /* PTWs cacheable, inner WB not shareable, outer WB not shareable */
-#define TTB_FLAGS_UP	TTB_IRGN_WB|TTB_RGN_OC_WB
-#define PMD_FLAGS_UP	PMD_SECT_WB
+#define TTB_FLAGS_UP	(TTB_IRGN_WB|TTB_RGN_OC_WB)
+#define PMD_FLAGS_UP	(PMD_SECT_WB)
 
 /* PTWs cacheable, inner WBWA shareable, outer WBWA not shareable */
-#define TTB_FLAGS_SMP	TTB_IRGN_WBWA|TTB_S|TTB_NOS|TTB_RGN_OC_WBWA
-#define PMD_FLAGS_SMP	PMD_SECT_WBWA|PMD_SECT_S
+#define TTB_FLAGS_SMP	(TTB_IRGN_WBWA|TTB_S|TTB_NOS|TTB_RGN_OC_WBWA)
+#define PMD_FLAGS_SMP	(PMD_SECT_WBWA|PMD_SECT_S)
 
 ENTRY(cpu_v7_proc_init)
 	mov	pc, lr
@@ -282,10 +296,46 @@ __v7_setup:
 	dsb
 #ifdef CONFIG_MMU
 	mcr	p15, 0, r10, c8, c7, 0		@ invalidate I + D TLBs
+#ifdef CONFIG_ARM_LPAE
+	mov	r5, #TTB_EAE
+	ALT_SMP(orr	r5, r5, #TTB_FLAGS_SMP)
+	ALT_SMP(orr	r5, r5, #TTB_FLAGS_SMP << 16)
+	ALT_UP(orr	r5, r5, #TTB_FLAGS_UP)
+	ALT_UP(orr	r5, r5, #TTB_FLAGS_UP << 16)
+	mrc	p15, 0, r10, c2, c0, 2
+	orr	r10, r10, r5
+#if PHYS_OFFSET <= PAGE_OFFSET
+	/*
+	 * TTBR0/TTBR1 split (PAGE_OFFSET):
+	 *   0x40000000: T0SZ = 2, T1SZ = 0 (not used)
+	 *   0x80000000: T0SZ = 0, T1SZ = 1
+	 *   0xc0000000: T0SZ = 0, T1SZ = 2
+	 *
+	 * Only use this feature if PHYS_OFFSET <= PAGE_OFFSET, otherwise
+	 * booting secondary CPUs would end up using TTBR1 for the identity
+	 * mapping set up in TTBR0.
+	 */
+	orr	r10, r10, #(((PAGE_OFFSET >> 30) - 1) << 16)	@ TTBCR.T1SZ
+#endif
+#endif
 	mcr	p15, 0, r10, c2, c0, 2		@ TTB control register
+#ifdef CONFIG_ARM_LPAE
+	mov	r5, #0
+#if defined CONFIG_VMSPLIT_2G
+	/* PAGE_OFFSET == 0x80000000, T1SZ == 1 */
+	add	r6, r4, #1 << 4			@ skip two L1 entries
+#elif defined CONFIG_VMSPLIT_3G
+	/* PAGE_OFFSET == 0xc0000000, T1SZ == 2 */
+	add	r6, r4, #4096 * (1 + 3)		@ only L2 used, skip pgd+3*pmd
+#else
+	mov	r6, r4
+#endif
+	mcrr	p15, 1, r6, r5, c2		@ load TTBR1
+#else	/* !CONFIG_ARM_LPAE */
 	ALT_SMP(orr	r4, r4, #TTB_FLAGS_SMP)
 	ALT_UP(orr	r4, r4, #TTB_FLAGS_UP)
 	mcr	p15, 0, r4, c2, c0, 1		@ load TTB1
+#endif	/* CONFIG_ARM_LPAE */
 	/*
 	 * Memory region attributes with SCTLR.TRE=1
 	 *
@@ -313,11 +363,33 @@ __v7_setup:
 	 *   NS0 = PRRR[18] = 0		- normal shareable property
 	 *   NS1 = PRRR[19] = 1		- normal shareable property
 	 *   NOS = PRRR[24+n] = 1	- not outer shareable
+	 *
+	 * Memory region attributes for LPAE (defined in pgtable-3level.h):
+	 *
+	 *   n = AttrIndx[2:0]
+	 *
+	 *			n	MAIR
+	 *   UNCACHED		000	00000000
+	 *   BUFFERABLE		001	01000100
+	 *   DEV_WC		001	01000100
+	 *   WRITETHROUGH	010	10101010
+	 *   WRITEBACK		011	11101110
+	 *   DEV_CACHED		011	11101110
+	 *   DEV_SHARED		100	00000100
+	 *   DEV_NONSHARED	100	00000100
+	 *   unused		101
+	 *   unused		110
+	 *   WRITEALLOC		111	11111111
 	 */
+#ifdef CONFIG_ARM_LPAE
+	ldr	r5, =0xeeaa4400			@ MAIR0
+	ldr	r6, =0xff000004			@ MAIR1
+#else
 	ldr	r5, =0xff0a81a8			@ PRRR
 	ldr	r6, =0x40e040e0			@ NMRR
-	mcr	p15, 0, r5, c10, c2, 0		@ write PRRR
-	mcr	p15, 0, r6, c10, c2, 1		@ write NMRR
+#endif
+	mcr	p15, 0, r5, c10, c2, 0		@ write PRRR/MAIR0
+	mcr	p15, 0, r6, c10, c2, 1		@ write NMRR/MAIR1
 #endif
 	adr	r5, v7_crval
 	ldmia	r5, {r5, r6}
@@ -336,14 +408,19 @@ __v7_setup:
 ENDPROC(__v7_setup)
 
 	/*   AT
-	 *  TFR   EV X F   I D LR    S
-	 * .EEE ..EE PUI. .T.T 4RVI ZWRS BLDP WCAM
+	 *  TFR   EV X F   IHD LR    S
+	 * .EEE ..EE PUI. .TAT 4RVI ZWRS BLDP WCAM
 	 * rxxx rrxx xxx0 0101 xxxx xxxx x111 xxxx < forced
 	 *    1    0 110       0011 1100 .111 1101 < we want
+	 *   11    0 110    1  0011 1100 .111 1101 < we want (LPAE)
 	 */
 	.type	v7_crval, #object
 v7_crval:
+#ifdef CONFIG_ARM_LPAE
+	crval	clear=0x0120c302, mmuset=0x30c23c7d, ucset=0x00c01c7c
+#else
 	crval	clear=0x0120c302, mmuset=0x10c03c7d, ucset=0x00c01c7c
+#endif
 
 __v7_setup_stack:
 	.space	4 * 11				@ 11 registers
@@ -415,17 +492,20 @@ __v7_ca15mp_proc_info:
 		PMD_TYPE_SECT | \
 		PMD_SECT_AP_WRITE | \
 		PMD_SECT_AP_READ | \
+		PMD_SECT_AF | \
 		PMD_FLAGS_SMP)
 	ALT_UP(.long \
 		PMD_TYPE_SECT | \
 		PMD_SECT_AP_WRITE | \
 		PMD_SECT_AP_READ | \
+		PMD_SECT_AF | \
 		PMD_FLAGS_UP)
 		/* PMD_SECT_XN is set explicitly in head.S for LPAE */
 	.long   PMD_TYPE_SECT | \
 		PMD_SECT_XN | \
 		PMD_SECT_AP_WRITE | \
-		PMD_SECT_AP_READ
+		PMD_SECT_AP_READ | \
+		PMD_SECT_AF
 	b	__v7_ca15mp_setup
 	.long	cpu_arch_name
 	.long	cpu_elf_name
@@ -448,16 +528,20 @@ __v7_proc_info:
 		PMD_TYPE_SECT | \
 		PMD_SECT_AP_WRITE | \
 		PMD_SECT_AP_READ | \
+		PMD_SECT_AF | \
 		PMD_FLAGS_SMP)
 	ALT_UP(.long \
 		PMD_TYPE_SECT | \
 		PMD_SECT_AP_WRITE | \
 		PMD_SECT_AP_READ | \
+		PMD_SECT_AF | \
 		PMD_FLAGS_UP)
+		/* PMD_SECT_XN is set explicitly in head.S for LPAE */
 	.long   PMD_TYPE_SECT | \
 		PMD_SECT_XN | \
 		PMD_SECT_AP_WRITE | \
-		PMD_SECT_AP_READ
+		PMD_SECT_AP_READ | \
+		PMD_SECT_AF
 	W(b)	__v7_setup
 	.long	cpu_arch_name
 	.long	cpu_elf_name


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 11/19] ARM: LPAE: Add fault handling support
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (9 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 10/19] ARM: LPAE: MMU setup for the 3-level page table format Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 12/19] ARM: LPAE: Add context switching support Catalin Marinas
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

The DFSR and IFSR register format is different when LPAE is enabled. In
addition, DFSR and IFSR have similar definitions for the fault type.
This patch modifies the fault code to correctly handle the new
format.
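
A sketch of the difference in fault-status extraction (example values
assumed): the classic short-descriptor format splits the status across
FS[4] and FS[3:0], while the LPAE long-descriptor format keeps a contiguous
6-bit field.

#include <stdint.h>

/* Classic short-descriptor DFSR: status is FS[4] (bit 10) plus FS[3:0]. */
static unsigned int fsr_fs_classic(uint32_t fsr)
{
	return (fsr & 0xf) | ((fsr & (1 << 10)) >> 6);
}

/* LPAE long-descriptor DFSR: status is a single 6-bit field, FS[5:0]. */
static unsigned int fsr_fs_lpae(uint32_t fsr)
{
	return fsr & 0x3f;
}

int main(void)
{
	/* e.g. status 7 indexes "level 3 translation fault" in the LPAE fsr_info table */
	return (fsr_fs_lpae(0x7) == 7 && fsr_fs_classic(0x5) == 5) ? 0 : 1;
}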

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/mm/alignment.c |    8 ++++-
 arch/arm/mm/fault.c     |   80 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 87 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index 724ba3b..bc98a6e 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -906,6 +906,12 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 	return 0;
 }
 
+#ifdef CONFIG_ARM_LPAE
+#define ALIGNMENT_FAULT		33
+#else
+#define ALIGNMENT_FAULT		1
+#endif
+
 /*
  * This needs to be done after sysctl_init, otherwise sys/ will be
  * overwritten.  Actually, this shouldn't be in sys/ at all since
@@ -939,7 +945,7 @@ static int __init alignment_init(void)
 		ai_usermode = UM_FIXUP;
 	}
 
-	hook_fault_code(1, do_alignment, SIGBUS, BUS_ADRALN,
+	hook_fault_code(ALIGNMENT_FAULT, do_alignment, SIGBUS, BUS_ADRALN,
 			"alignment exception");
 
 	/*
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index ef0e24f..350eb0a 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -33,10 +33,15 @@
 #define FSR_WRITE		(1 << 11)
 #define FSR_FS4			(1 << 10)
 #define FSR_FS3_0		(15)
+#define FSR_FS5_0		(0x3f)
 
 static inline int fsr_fs(unsigned int fsr)
 {
+#ifdef CONFIG_ARM_LPAE
+	return fsr & FSR_FS5_0;
+#else
 	return (fsr & FSR_FS3_0) | (fsr & FSR_FS4) >> 6;
+#endif
 }
 
 #ifdef CONFIG_MMU
@@ -109,8 +114,10 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
 
 		pte = pte_offset_map(pmd, addr);
 		printk(", *pte=%08llx", (long long)pte_val(*pte));
+#ifndef CONFIG_ARM_LPAE
 		printk(", *ppte=%08llx",
 		       (long long)pte_val(pte[PTE_HWTABLE_PTRS]));
+#endif
 		pte_unmap(pte);
 	} while(0);
 
@@ -469,6 +476,72 @@ static struct fsr_info {
 	int	code;
 	const char *name;
 } fsr_info[] = {
+#ifdef CONFIG_ARM_LPAE
+	{ do_bad,		SIGBUS,  0,		"unknown 0"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 1"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 2"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 3"			},
+	{ do_bad,		SIGBUS,  0,		"reserved translation fault"	},
+	{ do_translation_fault,	SIGSEGV, SEGV_MAPERR,	"level 1 translation fault"	},
+	{ do_translation_fault,	SIGSEGV, SEGV_MAPERR,	"level 2 translation fault"	},
+	{ do_page_fault,	SIGSEGV, SEGV_MAPERR,	"level 3 translation fault"	},
+	{ do_bad,		SIGBUS,  0,		"reserved access flag fault"	},
+	{ do_bad,		SIGSEGV, SEGV_ACCERR,	"level 1 access flag fault"	},
+	{ do_bad,		SIGSEGV, SEGV_ACCERR,	"level 2 access flag fault"	},
+	{ do_page_fault,	SIGSEGV, SEGV_ACCERR,	"level 3 access flag fault"	},
+	{ do_bad,		SIGBUS,  0,		"reserved permission fault"	},
+	{ do_bad,		SIGSEGV, SEGV_ACCERR,	"level 1 permission fault"	},
+	{ do_sect_fault,	SIGSEGV, SEGV_ACCERR,	"level 2 permission fault"	},
+	{ do_page_fault,	SIGSEGV, SEGV_ACCERR,	"level 3 permission fault"	},
+	{ do_bad,		SIGBUS,  0,		"synchronous external abort"	},
+	{ do_bad,		SIGBUS,  0,		"asynchronous external abort"	},
+	{ do_bad,		SIGBUS,  0,		"unknown 18"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 19"			},
+	{ do_bad,		SIGBUS,  0,		"synchronous abort (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous abort (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous abort (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous abort (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous parity error"	},
+	{ do_bad,		SIGBUS,  0,		"asynchronous parity error"	},
+	{ do_bad,		SIGBUS,  0,		"unknown 26"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 27"			},
+	{ do_bad,		SIGBUS,  0,		"synchronous parity error (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous parity error (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous parity error (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"synchronous parity error (translation table walk)" },
+	{ do_bad,		SIGBUS,  0,		"unknown 32"			},
+	{ do_bad,		SIGBUS,  BUS_ADRALN,	"alignment fault"		},
+	{ do_bad,		SIGBUS,  0,		"debug event"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 35"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 36"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 37"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 38"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 39"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 40"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 41"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 42"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 43"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 44"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 45"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 46"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 47"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 48"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 49"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 50"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 51"			},
+	{ do_bad,		SIGBUS,  0,		"implementation fault (lockdown abort)" },
+	{ do_bad,		SIGBUS,  0,		"unknown 53"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 54"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 55"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 56"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 57"			},
+	{ do_bad,		SIGBUS,  0,		"implementation fault (coprocessor abort)" },
+	{ do_bad,		SIGBUS,  0,		"unknown 59"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 60"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 61"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 62"			},
+	{ do_bad,		SIGBUS,  0,		"unknown 63"			},
+#else	/* !CONFIG_ARM_LPAE */
 	/*
 	 * The following are the standard ARMv3 and ARMv4 aborts.  ARMv5
 	 * defines these to be "precise" aborts.
@@ -510,6 +583,7 @@ static struct fsr_info {
 	{ do_bad,		SIGBUS,  0,		"unknown 29"			   },
 	{ do_bad,		SIGBUS,  0,		"unknown 30"			   },
 	{ do_bad,		SIGBUS,  0,		"unknown 31"			   }
+#endif	/* CONFIG_ARM_LPAE */
 };
 
 void __init
@@ -548,6 +622,9 @@ do_DataAbort(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 }
 
 
+#ifdef CONFIG_ARM_LPAE
+#define ifsr_info	fsr_info
+#else	/* !CONFIG_ARM_LPAE */
 static struct fsr_info ifsr_info[] = {
 	{ do_bad,		SIGBUS,  0,		"unknown 0"			   },
 	{ do_bad,		SIGBUS,  0,		"unknown 1"			   },
@@ -582,6 +659,7 @@ static struct fsr_info ifsr_info[] = {
 	{ do_bad,		SIGBUS,  0,		"unknown 30"			   },
 	{ do_bad,		SIGBUS,  0,		"unknown 31"			   },
 };
+#endif	/* CONFIG_ARM_LPAE */
 
 void __init
 hook_ifault_code(int nr, int (*fn)(unsigned long, unsigned int, struct pt_regs *),
@@ -617,6 +695,7 @@ do_PrefetchAbort(unsigned long addr, unsigned int ifsr, struct pt_regs *regs)
 
 static int __init exceptions_init(void)
 {
+#ifndef CONFIG_ARM_LPAE
 	if (cpu_architecture() >= CPU_ARCH_ARMv6) {
 		hook_fault_code(4, do_translation_fault, SIGSEGV, SEGV_MAPERR,
 				"I-cache maintenance fault");
@@ -632,6 +711,7 @@ static int __init exceptions_init(void)
 		hook_fault_code(6, do_bad, SIGSEGV, SEGV_MAPERR,
 				"section access flag fault");
 	}
+#endif
 
 	return 0;
 }


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 12/19] ARM: LPAE: Add context switching support
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (10 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 11/19] ARM: LPAE: Add fault handling support Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12 10:44   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 13/19] ARM: LPAE: Add identity mapping support for the 3-level page table format Catalin Marinas
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

With LPAE, TTBRx registers are 64-bit. The ASID is stored in TTBR0
rather than a separate Context ID register. This patch makes the
necessary changes to handle context switching on LPAE.
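
Purely illustrative (assumed layout and helper, not the patch): with LPAE
the 64-bit TTBR0 carries both the page table base and the ASID in bits
[55:48], so a context switch writes a single register pair instead of a
separate Context ID register.

#include <stdint.h>

#define TTBR_ASID_SHIFT	48

/* Compose a 64-bit TTBR0 value: table base in the low bits, ASID up top. */
static uint64_t make_ttbr0(uint64_t pgd_phys, uint8_t asid)
{
	return pgd_phys | ((uint64_t)asid << TTBR_ASID_SHIFT);
}

int main(void)
{
	uint64_t ttbr0 = make_ttbr0(0x80004000ULL, 0x2a);

	return ((ttbr0 >> TTBR_ASID_SHIFT) & 0xff) == 0x2a ? 0 : 1;
}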

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/mm/context.c |   18 ++++++++++++++++--
 arch/arm/mm/proc-v7.S |    8 +++++++-
 2 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index b0ee9ba..d40d3fa 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -22,6 +22,20 @@ unsigned int cpu_last_asid = ASID_FIRST_VERSION;
 DEFINE_PER_CPU(struct mm_struct *, current_mm);
 #endif
 
+#ifdef CONFIG_ARM_LPAE
+#define cpu_set_asid(asid) {						\
+	unsigned long ttbl, ttbh;					\
+	asm("	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n"	\
+	    "	mov	%1, %1, lsl #(48 - 32)		@ set ASID\n"	\
+	    "	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n"	\
+	    : "=r" (ttbl), "=r" (ttbh)					\
+	    : "r" (asid & ~ASID_MASK));					\
+}
+#else
+#define cpu_set_asid(asid) \
+	asm("	mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (asid))
+#endif
+
 /*
  * We fork()ed a process, and we need a new context for the child
  * to run in.  We reserve version 0 for initial tasks so we will
@@ -37,7 +51,7 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 static void flush_context(void)
 {
 	/* set the reserved ASID before flushing the TLB */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (0));
+	cpu_set_asid(0);
 	isb();
 	local_flush_tlb_all();
 	if (icache_is_vivt_asid_tagged()) {
@@ -99,7 +113,7 @@ static void reset_context(void *info)
 	set_mm_context(mm, asid);
 
 	/* set the new ASID */
-	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (mm->context.id));
+	cpu_set_asid(mm->context.id);
 	isb();
 }
 
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index a22b89f..ed4f3cb 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -117,6 +117,11 @@ ENTRY(cpu_v7_switch_mm)
 #ifdef CONFIG_MMU
 	mov	r2, #0
 	ldr	r1, [r1, #MM_CONTEXT_ID]	@ get mm->context.id
+#ifdef CONFIG_ARM_LPAE
+	and	r3, r1, #0xff
+	mov	r3, r3, lsl #(48 - 32)		@ ASID
+	mcrr	p15, 0, r0, r3, c2		@ set TTB 0
+#else	/* !CONFIG_ARM_LPAE */
 	ALT_SMP(orr	r0, r0, #TTB_FLAGS_SMP)
 	ALT_UP(orr	r0, r0, #TTB_FLAGS_UP)
 #ifdef CONFIG_ARM_ERRATA_430973
@@ -124,9 +129,10 @@ ENTRY(cpu_v7_switch_mm)
 #endif
 	mcr	p15, 0, r2, c13, c0, 1		@ set reserved context ID
 	isb
-1:	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
+	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
 	isb
 	mcr	p15, 0, r1, c13, c0, 1		@ set context ID
+#endif	/* CONFIG_ARM_LPAE */
 	isb
 #endif
 	mov	pc, lr


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 13/19] ARM: LPAE: Add identity mapping support for the 3-level page table format
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (11 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 12/19] ARM: LPAE: Add context switching support Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 14/19] ARM: LPAE: Add SMP " Catalin Marinas
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

With LPAE, the pgd is a separate page table with entries pointing to the
pmd. The identity_mapping_add() function needs to ensure that the pgd is
populated before populating the pmd level. The do..while blocks now loop
over the pmd in order to have the same implementation for the two page
table formats. The pmd_addr_end() definition has been removed and the
generic one used instead. The pmd clean-up is done in the pgd_free()
function.
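
A simplified sketch of the walk order the patch establishes for the identity
map (toy types and names, assumptions only): make sure a PMD table exists
behind the PGD slot before writing section entries into it, stepping 2MB at
a time.

#include <stdint.h>
#include <stdlib.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))
#define PTRS_PER_PMD	512

/* Toy page-table model: each PGD slot points at an array of PMD entries. */
typedef struct { uint64_t *pmd_table; } pgd_slot;

static void idmap_add_range(pgd_slot *pgd, unsigned long addr, unsigned long end,
			    uint64_t prot)
{
	for (; addr < end; addr = (addr + PMD_SIZE) & PMD_MASK) {
		pgd_slot *slot = &pgd[addr >> 30];	/* 1GB per PGD entry */

		if (!slot->pmd_table)			/* populate the PGD level first */
			slot->pmd_table = calloc(PTRS_PER_PMD, sizeof(uint64_t));
		if (!slot->pmd_table)
			return;
		/* identity mapping: output address == input address */
		slot->pmd_table[(addr >> PMD_SHIFT) & 0x1ff] = (addr & PMD_MASK) | prot;
	}
}

int main(void)
{
	pgd_slot pgd[4] = { { NULL } };

	idmap_add_range(pgd, 0x60000000UL, 0x60400000UL, 0x1 /* assumed section type bits */);
	return 0;
}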

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/pgtable.h |    4 ----
 arch/arm/mm/idmap.c            |   39 ++++++++++++++++++++++++++++-----------
 2 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index a833701..40b21c1 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -257,10 +257,6 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 
 #define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
 
-/* we don't need complex calculations here as the pmd is folded into the pgd */
-#define pmd_addr_end(addr,end)	(end)
-
-
 #ifndef CONFIG_HIGHPTE
 #define __pte_map(pmd)		pmd_page_vaddr(*(pmd))
 #define __pte_unmap(pte)	do { } while (0)
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index 5729944..dffc7a2 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -1,3 +1,5 @@
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/kernel.h>
 
 #include <asm/cputype.h>
@@ -7,12 +9,25 @@
 static void idmap_add_pmd(pgd_t *pgd, unsigned long addr, unsigned long end,
 	unsigned long prot)
 {
-	pmd_t *pmd = pmd_offset(pgd, addr);
+	pmd_t *pmd;
+
+	if (pgd_none_or_clear_bad(pgd) || (pgd_val(*pgd) & L_PGD_SWAPPER)) {
+		pmd = pmd_alloc_one(NULL, addr);
+		if (!pmd) {
+			pr_warning("Failed to allocate identity pmd.\n");
+			return;
+		}
+		pgd_populate(NULL, pgd, pmd);
+		pmd += pmd_index(addr);
+	} else
+		pmd = pmd_offset(pgd, addr);
 
 	addr = (addr & PMD_MASK) | prot;
 	pmd[0] = __pmd(addr);
+#ifndef CONFIG_ARM_LPAE
 	addr += SECTION_SIZE;
 	pmd[1] = __pmd(addr);
+#endif
 	flush_pmd_entry(pmd);
 }
 
@@ -20,21 +35,24 @@ void identity_mapping_add(pgd_t *pgd, unsigned long addr, unsigned long end)
 {
 	unsigned long prot, next;
 
-	prot = PMD_TYPE_SECT | PMD_SECT_AP_WRITE;
+	prot = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AF;
 	if (cpu_architecture() <= CPU_ARCH_ARMv5TEJ && !cpu_is_xscale())
 		prot |= PMD_BIT4;
 
-	pgd += pgd_index(addr);
 	do {
-		next = pgd_addr_end(addr, end);
-		idmap_add_pmd(pgd, addr, next, prot);
-	} while (pgd++, addr = next, addr != end);
+		next = pmd_addr_end(addr, end);
+		idmap_add_pmd(pgd + pgd_index(addr), addr, next, prot);
+	} while (addr = next, addr < end);
 }
 
 #ifdef CONFIG_SMP
 static void idmap_del_pmd(pgd_t *pgd, unsigned long addr, unsigned long end)
 {
-	pmd_t *pmd = pmd_offset(pgd, addr);
+	pmd_t *pmd;
+
+	if (pgd_none_or_clear_bad(pgd))
+		return;
+	pmd = pmd_offset(pgd, addr);
 	pmd_clear(pmd);
 }
 
@@ -42,11 +60,10 @@ void identity_mapping_del(pgd_t *pgd, unsigned long addr, unsigned long end)
 {
 	unsigned long next;
 
-	pgd += pgd_index(addr);
 	do {
-		next = pgd_addr_end(addr, end);
-		idmap_del_pmd(pgd, addr, next);
-	} while (pgd++, addr = next, addr != end);
+		next = pmd_addr_end(addr, end);
+		idmap_del_pmd(pgd + pgd_index(addr), addr, next);
+	} while (addr = next, addr < end);
 }
 #endif
 


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 14/19] ARM: LPAE: Add SMP support for the 3-level page table format
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (12 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 13/19] ARM: LPAE: Add identity mapping support for the 3-level page table format Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-01-24 17:55 ` [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses Catalin Marinas
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

With 3-level page tables, starting secondary CPUs requires allocating
the pgd as well. Since LPAE Linux uses TTBR1 for the kernel page tables,
this patch reorders the CPU setup call in head.S so that the
swapper_pg_dir is used. TTBR0 is set to the value generated by the
primary CPU.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---

There was a comment that the secondary_startup hunk below should not
reorder the code but instead pass two registers for TTBR0 and TTBR1. I
still find this approach simpler; __v7_setup never programs TTBR0, so
it would ignore one of the registers.

 arch/arm/kernel/head.S |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index d96986c..bade113 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -331,6 +331,10 @@ ENTRY(secondary_startup)
  THUMB( it	eq )		@ force fixup-able long branch encoding
 	beq	__error_p
 
+	pgtbl	r4
+	add	r12, r10, #BSYM(PROCINFO_INITFUNC)
+	blx	r12				@ initialise processor
+						@ (return control reg)
 	/*
 	 * Use the page tables supplied from  __cpu_up.
 	 */
@@ -338,12 +342,8 @@ ENTRY(secondary_startup)
 	ldmia	r4, {r5, r7, r12}		@ address to jump to after
 	sub	r4, r4, r5			@ mmu has been enabled
 	ldr	r4, [r7, r4]			@ get secondary_data.pgdir
-	adr	lr, BSYM(__enable_mmu)		@ return address
 	mov	r13, r12			@ __secondary_switched address
- ARM(	add	pc, r10, #PROCINFO_INITFUNC	) @ initialise processor
-						  @ (return control reg)
- THUMB(	add	r12, r10, #PROCINFO_INITFUNC	)
- THUMB(	mov	pc, r12				)
+	b	__enable_mmu
 ENDPROC(secondary_startup)
 
 	/*


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (13 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 14/19] ARM: LPAE: Add SMP " Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12 10:28   ` Russell King - ARM Linux
  2011-02-19 18:26   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition Catalin Marinas
                   ` (3 subsequent siblings)
  18 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Will Deacon

From: Will Deacon <will.deacon@arm.com>

The unsigned long datatype is not sufficient for mapping physical addresses
>= 4GB.

This patch ensures that the phys_addr_t datatype is used to represent
physical addresses which may be beyond the range of an unsigned long.
The virt <-> phys macros are updated accordingly to ensure that virtual
addresses can remain as they are.
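
For illustration only (the values are made up, not from this patch),
the difference matters as soon as a memory bank sits at or above the
4GB boundary:

	phys_addr_t phys = 0x100000000ULL;	/* representable with LPAE */
	unsigned long bad = 0x100000000ULL;	/* silently truncated to 0
						 * (gcc warns) on a 32-bit
						 * unsigned long */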

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/memory.h     |   17 +++++++++--------
 arch/arm/include/asm/outercache.h |   14 ++++++++------
 arch/arm/include/asm/pgtable.h    |    2 +-
 arch/arm/include/asm/setup.h      |    2 +-
 arch/arm/kernel/setup.c           |    5 +++--
 arch/arm/mm/init.c                |    6 +++---
 arch/arm/mm/mmu.c                 |    7 ++++---
 7 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index d0ee74b..44ea5cd 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -15,6 +15,7 @@
 
 #include <linux/compiler.h>
 #include <linux/const.h>
+#include <linux/types.h>
 #include <mach/memory.h>
 #include <asm/sizes.h>
 
@@ -138,15 +139,15 @@
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
 #ifndef __virt_to_phys
-#define __virt_to_phys(x)	((x) - PAGE_OFFSET + PHYS_OFFSET)
-#define __phys_to_virt(x)	((x) - PHYS_OFFSET + PAGE_OFFSET)
+#define __virt_to_phys(x)	(((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
+#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
 #endif
 
 /*
  * Convert a physical address to a Page Frame Number and back
  */
-#define	__phys_to_pfn(paddr)	((paddr) >> PAGE_SHIFT)
-#define	__pfn_to_phys(pfn)	((pfn) << PAGE_SHIFT)
+#define	__phys_to_pfn(paddr)	((unsigned long)((paddr) >> PAGE_SHIFT))
+#define	__pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)
 
 /*
  * Convert a page to/from a physical address
@@ -188,21 +189,21 @@
  * translation for translating DMA addresses.  Use the driver
  * DMA support - see dma-mapping.h.
  */
-static inline unsigned long virt_to_phys(const volatile void *x)
+static inline phys_addr_t virt_to_phys(const volatile void *x)
 {
 	return __virt_to_phys((unsigned long)(x));
 }
 
-static inline void *phys_to_virt(unsigned long x)
+static inline void *phys_to_virt(phys_addr_t x)
 {
-	return (void *)(__phys_to_virt((unsigned long)(x)));
+	return (void *)(__phys_to_virt(x));
 }
 
 /*
  * Drivers should NOT use these either.
  */
 #define __pa(x)			__virt_to_phys((unsigned long)(x))
-#define __va(x)			((void *)__phys_to_virt((unsigned long)(x)))
+#define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
 
 /*
diff --git a/arch/arm/include/asm/outercache.h b/arch/arm/include/asm/outercache.h
index fc19009..88ad892 100644
--- a/arch/arm/include/asm/outercache.h
+++ b/arch/arm/include/asm/outercache.h
@@ -21,6 +21,8 @@
 #ifndef __ASM_OUTERCACHE_H
 #define __ASM_OUTERCACHE_H
 
+#include <linux/types.h>
+
 struct outer_cache_fns {
 	void (*inv_range)(unsigned long, unsigned long);
 	void (*clean_range)(unsigned long, unsigned long);
@@ -37,17 +39,17 @@ struct outer_cache_fns {
 
 extern struct outer_cache_fns outer_cache;
 
-static inline void outer_inv_range(unsigned long start, unsigned long end)
+static inline void outer_inv_range(phys_addr_t start, phys_addr_t end)
 {
 	if (outer_cache.inv_range)
 		outer_cache.inv_range(start, end);
 }
-static inline void outer_clean_range(unsigned long start, unsigned long end)
+static inline void outer_clean_range(phys_addr_t start, phys_addr_t end)
 {
 	if (outer_cache.clean_range)
 		outer_cache.clean_range(start, end);
 }
-static inline void outer_flush_range(unsigned long start, unsigned long end)
+static inline void outer_flush_range(phys_addr_t start, phys_addr_t end)
 {
 	if (outer_cache.flush_range)
 		outer_cache.flush_range(start, end);
@@ -73,11 +75,11 @@ static inline void outer_disable(void)
 
 #else
 
-static inline void outer_inv_range(unsigned long start, unsigned long end)
+static inline void outer_inv_range(phys_addr_t start, phys_addr_t end)
 { }
-static inline void outer_clean_range(unsigned long start, unsigned long end)
+static inline void outer_clean_range(phys_addr_t start, phys_addr_t end)
 { }
-static inline void outer_flush_range(unsigned long start, unsigned long end)
+static inline void outer_flush_range(phys_addr_t start, phys_addr_t end)
 { }
 static inline void outer_flush_all(void) { }
 static inline void outer_inv_all(void) { }
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 40b21c1..110f6f4 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -273,7 +273,7 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 #define pte_unmap(pte)			__pte_unmap(pte)
 
 #define pte_pfn(pte)		((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
-#define pfn_pte(pfn,prot)	__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pte(pfn,prot)	__pte(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define pte_page(pte)		pfn_to_page(pte_pfn(pte))
 #define mk_pte(page,prot)	pfn_pte(page_to_pfn(page), prot)
diff --git a/arch/arm/include/asm/setup.h b/arch/arm/include/asm/setup.h
index f1e5a9b..5092118 100644
--- a/arch/arm/include/asm/setup.h
+++ b/arch/arm/include/asm/setup.h
@@ -199,7 +199,7 @@ static struct tagtable __tagtable_##fn __tag = { tag, fn }
 #endif
 
 struct membank {
-	unsigned long start;
+	phys_addr_t start;
 	unsigned long size;
 	unsigned int highmem;
 };
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 3d23f0f..fe951e4 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -443,7 +443,7 @@ static struct machine_desc * __init setup_machine(unsigned int nr)
 	return list;
 }
 
-static int __init arm_add_memory(unsigned long start, unsigned long size)
+static int __init arm_add_memory(phys_addr_t start, unsigned long size)
 {
 	struct membank *bank = &meminfo.bank[meminfo.nr_banks];
 
@@ -479,7 +479,8 @@ static int __init arm_add_memory(unsigned long start, unsigned long size)
 static int __init early_mem(char *p)
 {
 	static int usermem __initdata = 0;
-	unsigned long size, start;
+	unsigned long size;
+	phys_addr_t start;
 	char *endp;
 
 	/*
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 5164069..14a00a1 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -344,7 +344,7 @@ void __init bootmem_init(void)
 	 */
 	arm_bootmem_free(min, max_low, max_high);
 
-	high_memory = __va((max_low << PAGE_SHIFT) - 1) + 1;
+	high_memory = __va(((phys_addr_t)max_low << PAGE_SHIFT) - 1) + 1;
 
 	/*
 	 * This doesn't seem to be used by the Linux memory manager any
@@ -392,8 +392,8 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn)
 	 * Convert to physical addresses, and
 	 * round start upwards and end downwards.
 	 */
-	pg = PAGE_ALIGN(__pa(start_pg));
-	pgend = __pa(end_pg) & PAGE_MASK;
+	pg = (unsigned long)PAGE_ALIGN(__pa(start_pg));
+	pgend = (unsigned long)__pa(end_pg) & PAGE_MASK;
 
 	/*
 	 * If there are free pages between these,
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 251056a..a1d8a07 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -611,7 +611,7 @@ static void __init create_36bit_mapping(struct map_desc *md,
 	pgd_t *pgd;
 
 	addr = md->virtual;
-	phys = (unsigned long)__pfn_to_phys(md->pfn);
+	phys = __pfn_to_phys(md->pfn);
 	length = PAGE_ALIGN(md->length);
 
 	if (!(cpu_architecture() >= CPU_ARCH_ARMv6 || cpu_is_xsc3())) {
@@ -672,7 +672,8 @@ static void __init create_36bit_mapping(struct map_desc *md,
  */
 static void __init create_mapping(struct map_desc *md)
 {
-	unsigned long phys, addr, length, end;
+	unsigned long addr, length, end;
+	phys_addr_t phys;
 	const struct mem_type *type;
 	pgd_t *pgd;
 
@@ -703,7 +704,7 @@ static void __init create_mapping(struct map_desc *md)
 #endif
 
 	addr = md->virtual & PAGE_MASK;
-	phys = (unsigned long)__pfn_to_phys(md->pfn);
+	phys = __pfn_to_phys(md->pfn);
 	length = PAGE_ALIGN(md->length + (md->virtual & ~PAGE_MASK));
 
 	if (type->prot_l1 == 0 && ((addr | phys | length) & ~SECTION_MASK)) {


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (14 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12 10:34   ` Russell King - ARM Linux
  2011-01-24 17:55 ` [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem Catalin Marinas
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Will Deacon

From: Will Deacon <will.deacon@arm.com>

This patch uses the types.h implementation in asm-generic to define the
dma_addr_t type as the same width as phys_addr_t.

NOTE: this is a temporary patch until the corresponding patches unifying
the dma_addr_t and removing the dma64_addr_t are merged into mainline.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/types.h |   20 +-------------------
 1 files changed, 1 insertions(+), 19 deletions(-)

diff --git a/arch/arm/include/asm/types.h b/arch/arm/include/asm/types.h
index 345df01..dc1bdbb 100644
--- a/arch/arm/include/asm/types.h
+++ b/arch/arm/include/asm/types.h
@@ -1,30 +1,12 @@
 #ifndef __ASM_ARM_TYPES_H
 #define __ASM_ARM_TYPES_H
 
-#include <asm-generic/int-ll64.h>
+#include <asm-generic/types.h>
 
-#ifndef __ASSEMBLY__
-
-typedef unsigned short umode_t;
-
-#endif /* __ASSEMBLY__ */
-
-/*
- * These aren't exported outside the kernel to avoid name space clashes
- */
 #ifdef __KERNEL__
 
 #define BITS_PER_LONG 32
 
-#ifndef __ASSEMBLY__
-
-/* Dma addresses are 32-bits wide.  */
-
-typedef u32 dma_addr_t;
-typedef u32 dma64_addr_t;
-
-#endif /* __ASSEMBLY__ */
-
 #endif /* __KERNEL__ */
 
 #endif


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (15 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition Catalin Marinas
@ 2011-01-24 17:55 ` Catalin Marinas
  2011-02-12 10:36   ` Russell King - ARM Linux
  2011-01-24 17:56 ` [PATCH v4 18/19] ARM: LPAE: add support for ATAG_MEM64 Catalin Marinas
  2011-01-24 17:56 ` [PATCH v4 19/19] ARM: LPAE: Add the Kconfig entries Catalin Marinas
  18 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:55 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Will Deacon

From: Will Deacon <will.deacon@arm.com>

Memory banks living outside of the 32-bit physical address
space do not have a 1:1 pa <-> va mapping and therefore the
__va macro may wrap.

This patch ensures that such banks are marked as highmem so
that the kernel doesn't try to split them up when it sees that
the wrapped virtual address overlaps the vmalloc space.
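
A worked example with made-up numbers (PAGE_OFFSET = 0xc0000000,
PHYS_OFFSET = 0x80000000) shows the wrap:

	__va(0x100000000ULL) = 0x100000000 - 0x80000000 + 0xc0000000
	                     = 0x140000000, truncated to 0x40000000

i.e. the resulting "virtual address" lands below PAGE_OFFSET, so the
vmalloc overlap checks give meaningless answers; hence such banks are
forced to highmem.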

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/mm/mmu.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index a1d8a07..8a55be4 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -782,7 +782,8 @@ static void __init sanity_check_meminfo(void)
 
 #ifdef CONFIG_HIGHMEM
 		if (__va(bank->start) > vmalloc_min ||
-		    __va(bank->start) < (void *)PAGE_OFFSET)
+		    __va(bank->start) < (void *)PAGE_OFFSET ||
+		    bank->start > ULONG_MAX)
 			highmem = 1;
 
 		bank->highmem = highmem;
@@ -791,7 +792,7 @@ static void __init sanity_check_meminfo(void)
 		 * Split those memory banks which are partially overlapping
 		 * the vmalloc area greatly simplifying things later.
 		 */
-		if (__va(bank->start) < vmalloc_min &&
+		if (!highmem && __va(bank->start) < vmalloc_min &&
 		    bank->size > vmalloc_min - __va(bank->start)) {
 			if (meminfo.nr_banks >= NR_BANKS) {
 				printk(KERN_CRIT "NR_BANKS too low, "


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 18/19] ARM: LPAE: add support for ATAG_MEM64
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (16 preceding siblings ...)
  2011-01-24 17:55 ` [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem Catalin Marinas
@ 2011-01-24 17:56 ` Catalin Marinas
  2011-01-24 17:56 ` [PATCH v4 19/19] ARM: LPAE: Add the Kconfig entries Catalin Marinas
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:56 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel; +Cc: Will Deacon

From: Will Deacon <will.deacon@arm.com>

LPAE provides support for memory banks with physical addresses of up
to 40 bits.

This patch adds a new atag, ATAG_MEM64, so that the kernel can be
informed about memory that exists above the 4GB boundary.
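
A sketch of how a boot loader might emit the new tag (illustrative
only; the start/size values are made up, and tag_next()/tag_size() are
the existing helpers from asm/setup.h):

	tag = tag_next(tag);
	tag->hdr.tag = ATAG_MEM64;
	tag->hdr.size = tag_size(tag_mem64);
	tag->u.mem64.start = 0x880000000ULL;	/* example bank above 4GB */
	tag->u.mem64.size = SZ_512M;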

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/include/asm/setup.h |   10 +++++++++-
 arch/arm/kernel/compat.c     |    4 ++--
 arch/arm/kernel/setup.c      |   12 +++++++++++-
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/setup.h b/arch/arm/include/asm/setup.h
index 5092118..fab849f 100644
--- a/arch/arm/include/asm/setup.h
+++ b/arch/arm/include/asm/setup.h
@@ -43,6 +43,13 @@ struct tag_mem32 {
 	__u32	start;	/* physical start address */
 };
 
+#define ATAG_MEM64	0x54420002
+
+struct tag_mem64 {
+	__u64	size;
+	__u64	start;	/* physical start address */
+};
+
 /* VGA text type displays */
 #define ATAG_VIDEOTEXT	0x54410003
 
@@ -147,7 +154,8 @@ struct tag {
 	struct tag_header hdr;
 	union {
 		struct tag_core		core;
-		struct tag_mem32	mem;
+		struct tag_mem32	mem32;
+		struct tag_mem64	mem64;
 		struct tag_videotext	videotext;
 		struct tag_ramdisk	ramdisk;
 		struct tag_initrd	initrd;
diff --git a/arch/arm/kernel/compat.c b/arch/arm/kernel/compat.c
index 9256523..f224d95 100644
--- a/arch/arm/kernel/compat.c
+++ b/arch/arm/kernel/compat.c
@@ -86,8 +86,8 @@ static struct tag * __init memtag(struct tag *tag, unsigned long start, unsigned
 	tag = tag_next(tag);
 	tag->hdr.tag = ATAG_MEM;
 	tag->hdr.size = tag_size(tag_mem32);
-	tag->u.mem.size = size;
-	tag->u.mem.start = start;
+	tag->u.mem32.size = size;
+	tag->u.mem32.start = start;
 
 	return tag;
 }
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index fe951e4..420a4e1 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -588,11 +588,21 @@ __tagtable(ATAG_CORE, parse_tag_core);
 
 static int __init parse_tag_mem32(const struct tag *tag)
 {
-	return arm_add_memory(tag->u.mem.start, tag->u.mem.size);
+	return arm_add_memory(tag->u.mem32.start, tag->u.mem32.size);
 }
 
 __tagtable(ATAG_MEM, parse_tag_mem32);
 
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+static int __init parse_tag_mem64(const struct tag *tag)
+{
+	/* We only use 32-bits for the size. */
+	return arm_add_memory(tag->u.mem64.start, (unsigned long)tag->u.mem64.size);
+}
+
+__tagtable(ATAG_MEM64, parse_tag_mem64);
+#endif /* CONFIG_PHYS_ADDR_T_64BIT */
+
 #if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE)
 struct screen_info screen_info = {
  .orig_video_lines	= 30,


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH v4 19/19] ARM: LPAE: Add the Kconfig entries
  2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
                   ` (17 preceding siblings ...)
  2011-01-24 17:56 ` [PATCH v4 18/19] ARM: LPAE: add support for ATAG_MEM64 Catalin Marinas
@ 2011-01-24 17:56 ` Catalin Marinas
  18 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-24 17:56 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

This patch adds the ARM_LPAE and ARCH_PHYS_ADDR_T_64BIT Kconfig entries
allowing LPAE support to be compiled into the kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm/Kconfig    |    2 +-
 arch/arm/mm/Kconfig |   13 +++++++++++++
 2 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 35f32e1..8c82454 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1656,7 +1656,7 @@ config CMDLINE_FORCE
 
 config XIP_KERNEL
 	bool "Kernel Execute-In-Place from ROM"
-	depends on !ZBOOT_ROM
+	depends on !ZBOOT_ROM && !ARM_LPAE
 	help
 	  Execute-In-Place allows the kernel to run from non-volatile storage
 	  directly addressable by the CPU, such as NOR flash. This saves RAM
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 9d30c6f..2ec3951 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -621,6 +621,19 @@ config IO_36
 
 comment "Processor Features"
 
+config ARM_LPAE
+	bool "Support for the Large Physical Address Extension"
+	depends on MMU && CPU_V7
+	help
+	  Say Y if you have an ARMv7 processor supporting the LPAE page table
+	  format and you would like to access memory beyond the 4GB limit.
+
+config ARCH_PHYS_ADDR_T_64BIT
+	def_bool ARM_LPAE
+
+config ARCH_DMA_ADDR_T_64BIT
+	def_bool ARM_LPAE
+
 config ARM_THUMB
 	bool "Support Thumb user binaries"
 	depends on CPU_ARM720T || CPU_ARM740T || CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM940T || CPU_ARM946E || CPU_ARM1020 || CPU_ARM1020E || CPU_ARM1022 || CPU_ARM1026 || CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_V6 || CPU_V7 || CPU_FEROCEON


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-24 17:55 ` [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile" Catalin Marinas
@ 2011-01-24 19:19   ` Stephen Boyd
  2011-01-24 23:38     ` Russell King - ARM Linux
  2011-01-25 10:00     ` Arnd Bergmann
  0 siblings, 2 replies; 59+ messages in thread
From: Stephen Boyd @ 2011-01-24 19:19 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Arnd Bergmann

On 01/24/2011 09:55 AM, Catalin Marinas wrote:
> Changing the virt_to_phys() argument to "const volatile void *" avoids
> compiler warnings in some situations where this function is used.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Stephen Boyd <sboyd@codeaurora.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---

Acked-by: Stephen Boyd <sboyd@codeaurora.org>

Any chance we can get this one patch into 2.6.38? It fixes a warning for
MSM.

-- 
Sent by an employee of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 17:55 ` [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions Catalin Marinas
@ 2011-01-24 21:26   ` Nick Piggin
  2011-01-24 21:42     ` Russell King - ARM Linux
  2011-02-03 17:11   ` Catalin Marinas
  1 sibling, 1 reply; 59+ messages in thread
From: Nick Piggin @ 2011-01-24 21:26 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel

Too bad about the PAE thing; my condolences.


On Tue, Jan 25, 2011 at 4:55 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> This patch introduces the pgtable-3level*.h files with definitions
> specific to the LPAE page table format (3 levels of page tables).

Seeing as you're shaking up these definitions, what do you think about
switching from 4level-fixup.h to pgtable-nopud.h / pgtable-nopmd.h headers?
One day eventually it would be nice to get rid of the fixup mode.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 21:26   ` Nick Piggin
@ 2011-01-24 21:42     ` Russell King - ARM Linux
  2011-01-25 10:04       ` Catalin Marinas
  2011-03-21 12:36       ` Catalin Marinas
  0 siblings, 2 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-01-24 21:42 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Catalin Marinas, linux-kernel, linux-arm-kernel

On Tue, Jan 25, 2011 at 08:26:42AM +1100, Nick Piggin wrote:
> Too bad about the PAE thing; my condolences.
> 
> 
> On Tue, Jan 25, 2011 at 4:55 AM, Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > This patch introduces the pgtable-3level*.h files with definitions
> > specific to the LPAE page table format (3 levels of page tables).
> 
> Seeing as you're shaking up these definitions, what do you think about
> switching from 4level-fixup.h to pgtable-nopud.h / pgtable-nopmd.h headers?
> One day eventually it would be nice to get rid of the fixup mode.

I have patches to do this which I de-queued for the last merge window.
It's not entirely trivial, nor without its problems.  You can find the
patches
in linux-next now.

I was waiting for the new set of patches from Catalin before going back
and working out the solutions to some of those problems.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-24 19:19   ` Stephen Boyd
@ 2011-01-24 23:38     ` Russell King - ARM Linux
  2011-01-25 10:00     ` Arnd Bergmann
  1 sibling, 0 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-01-24 23:38 UTC (permalink / raw)
  To: Stephen Boyd, Catalin Marinas
  Cc: Arnd Bergmann, linux-kernel, linux-arm-kernel

On Mon, Jan 24, 2011 at 11:19:52AM -0800, Stephen Boyd wrote:
> On 01/24/2011 09:55 AM, Catalin Marinas wrote:
> > Changing the virt_to_phys() argument to "const volatile void *" avoids
> > compiler warnings in some situations where this function is used.
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Stephen Boyd <sboyd@codeaurora.org>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> 
> Acked-by: Stephen Boyd <sboyd@codeaurora.org>
> 
> Any chance we can get this one patch into 2.6.38? It fixes a warning for
> MSM.

I don't see any reason why this can't.  Catalin, can you put it in the
patch system with Stephen's ack please?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-24 19:19   ` Stephen Boyd
  2011-01-24 23:38     ` Russell King - ARM Linux
@ 2011-01-25 10:00     ` Arnd Bergmann
  2011-01-25 10:29       ` Russell King - ARM Linux
  1 sibling, 1 reply; 59+ messages in thread
From: Arnd Bergmann @ 2011-01-25 10:00 UTC (permalink / raw)
  To: Stephen Boyd; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Monday 24 January 2011, Stephen Boyd wrote:
> On 01/24/2011 09:55 AM, Catalin Marinas wrote:
> > Changing the virt_to_phys() argument to "const volatile void *" avoids
> > compiler warnings in some situations where this function is used.
> >
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Stephen Boyd <sboyd@codeaurora.org>
> > Cc: Arnd Bergmann <arnd@arndb.de>

Acked-by: Arnd Bergmann <arnd@arndb.de>

> Acked-by: Stephen Boyd <sboyd@codeaurora.org>
> 
> Any chance we can get this one patch into 2.6.38? It fixes a warning for
> MSM.

Stephen, you might want to have a look at why the warning even appears
on MSM. Most uses of 'volatile' are misguided, and there could be an
actual bug in there.

	Arnd

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 21:42     ` Russell King - ARM Linux
@ 2011-01-25 10:04       ` Catalin Marinas
  2011-03-21 12:36       ` Catalin Marinas
  1 sibling, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-01-25 10:04 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Nick Piggin, linux-kernel, linux-arm-kernel

On Mon, 2011-01-24 at 21:42 +0000, Russell King - ARM Linux wrote:
> On Tue, Jan 25, 2011 at 08:26:42AM +1100, Nick Piggin wrote:
> > On Tue, Jan 25, 2011 at 4:55 AM, Catalin Marinas
> > <catalin.marinas@arm.com> wrote:
> > > This patch introduces the pgtable-3level*.h files with definitions
> > > specific to the LPAE page table format (3 levels of page tables).
> >
> > Seeing as you're shaking up these definitions, what do you think about
> > switching from 4level-fixup.h to pgtable-nopud.h / pgtable-nopmd.h headers?
> > One day eventually it would be nice to get rid of the fixup mode.
> 
> I have patches to do this which I de-queued for the last merge window.
> It's not entirely trivial and without problem.  You can find the patches
> in linux-next now.
> 
> I was waiting for the new set of patches from Catalin before going back
> and working out the solutions to some of those problems.

I'm fine with going this route but I haven't had time to review your
patches yet (have they been posted to the list? I've been busy lately
and haven't followed everything).

Thanks.

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-25 10:00     ` Arnd Bergmann
@ 2011-01-25 10:29       ` Russell King - ARM Linux
  2011-01-25 14:14         ` Arnd Bergmann
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-01-25 10:29 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Stephen Boyd, Catalin Marinas, linux-kernel, linux-arm-kernel

On Tue, Jan 25, 2011 at 11:00:10AM +0100, Arnd Bergmann wrote:
> On Monday 24 January 2011, Stephen Boyd wrote:
> > On 01/24/2011 09:55 AM, Catalin Marinas wrote:
> > > Changing the virt_to_phys() argument to "const volatile void *" avoids
> > > compiler warnings in some situations where this function is used.
> > >
> > > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Stephen Boyd <sboyd@codeaurora.org>
> > > Cc: Arnd Bergmann <arnd@arndb.de>
> 
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> 
> > Acked-by: Stephen Boyd <sboyd@codeaurora.org>
> > 
> > Any chance we can get this one patch into 2.6.38? It fixes a warning for
> > MSM.
> 
> Stephen, you might want to have a look at why the warning even appears
> on MSM. Most uses of 'volatile' are misguided, and there could be an
> actual bug in there.

It's actually the right thing - look at x86's definition:

static inline phys_addr_t virt_to_phys(volatile void *address)

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile"
  2011-01-25 10:29       ` Russell King - ARM Linux
@ 2011-01-25 14:14         ` Arnd Bergmann
  0 siblings, 0 replies; 59+ messages in thread
From: Arnd Bergmann @ 2011-01-25 14:14 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Stephen Boyd, Catalin Marinas, linux-kernel, linux-arm-kernel

On Tuesday 25 January 2011, Russell King - ARM Linux wrote:
> > Stephen, you might want to have a look at why the warning even appears
> > on MSM. Most uses of 'volatile' are misguided, and there could be an
> > actual bug in there.
> 
> It's actually the right thing - look at x86's definition:
> 
> static inline phys_addr_t virt_to_phys(volatile void *address)

Yes, the definition of virt_to_phys using a volatile pointer makes sense
because it allows you to pass volatile pointers, even if it doesn't
make any volatile accesses itself, hence my Acked-by.

However, marking variables as volatile needs to be done very carefully,
and the particular use in arch/arm/mach-msm/smd.c looks suspicious.
I don't think it can cause any actual harm to add volatile to the
smd_half_channel variables, but it disables some optimizations that
gcc can otherwise make, and it's not a replacement for locking or
atomic accesses.
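
An illustrative caller (hypothetical, not the actual MSM code):

	static volatile u32 shared_flag;	/* hypothetical shared variable */

	unsigned long phys = virt_to_phys(&shared_flag);

With a plain "void *" argument gcc warns about discarding the volatile
qualifier here; with "const volatile void *" it compiles cleanly.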

	Arnd

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format
  2011-01-24 17:55 ` [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format Catalin Marinas
@ 2011-02-03 17:09   ` Catalin Marinas
  2011-02-03 17:56   ` Russell King - ARM Linux
  1 sibling, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-03 17:09 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

On Mon, 2011-01-24 at 17:55 +0000, Catalin Marinas wrote:
> --- a/arch/arm/mm/pgd.c
> +++ b/arch/arm/mm/pgd.c
> @@ -80,20 +98,36 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
>         if (!pgd_base)
>                 return;
> 
> -       pgd = pgd_base + pgd_index(0);
> -       if (pgd_none_or_clear_bad(pgd))
> -               goto no_pgd;
> +       if (!vectors_high()) {
> +               pgd = pgd_base + pgd_index(0);
> +               if (pgd_none_or_clear_bad(pgd))
> +                       goto no_pgd;
> 
> -       pmd = pmd_offset(pgd, 0);
> -       if (pmd_none_or_clear_bad(pmd))
> -               goto no_pmd;
> +               pmd = pmd_offset(pgd, 0);
> +               if (pmd_none_or_clear_bad(pmd))
> +                       goto no_pmd;
> 
> -       pte = pmd_pgtable(*pmd);
> -       pmd_clear(pmd);
> -       pte_free(mm, pte);
> +               pte = pmd_pgtable(*pmd);
> +               pmd_clear(pmd);
> +               pte_free(mm, pte);
>  no_pmd:
> -       pgd_clear(pgd);
> -       pmd_free(mm, pmd);
> +               pgd_clear(pgd);
> +               pmd_free(mm, pmd);
> +       }
>  no_pgd:

I pushed some fixups to the arm-lpae branch mentioned in the cover
letter.

The hunk above doesn't need to be applied since FIRST_USER_ADDRESS is
non-zero on ARM and free_pgtables() misses the first PMD when cleaning
up user page tables (hence leaking some memory).

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 17:55 ` [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions Catalin Marinas
  2011-01-24 21:26   ` Nick Piggin
@ 2011-02-03 17:11   ` Catalin Marinas
  1 sibling, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-03 17:11 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

On Mon, 2011-01-24 at 17:55 +0000, Catalin Marinas wrote:
> --- /dev/null
> +++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
> @@ -0,0 +1,81 @@
[...]
> +#define PMD_SECT_UNCACHED      (_AT(pteval_t, 0) << 2) /* strongly ordered */
> +#define PMD_SECT_BUFFERED      (_AT(pteval_t, 1) << 2) /* normal non-cacheable */
> +#define PMD_SECT_WT            (_AT(pteval_t, 2) << 2) /* normal inner write-through */
> +#define PMD_SECT_WB            (_AT(pteval_t, 3) << 2) /* normal inner write-back */
> +#define PMD_SECT_WBWA          (_AT(pteval_t, 7) << 2) /* normal inner write-alloc */

The above definitions should use pmdval_t rather than pteval_t (fixup
pushed to the arm-lpae branch).

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32
  2011-01-24 17:55 ` [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32 Catalin Marinas
@ 2011-02-03 17:13   ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-03 17:13 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel

On Mon, 2011-01-24 at 17:55 +0000, Catalin Marinas wrote:
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -290,7 +290,7 @@ static void __init build_mem_type_table(void)
>  {
>         struct cachepolicy *cp;
>         unsigned int cr = get_cr();
> -       unsigned int user_pgprot, kern_pgprot, vecs_pgprot;
> +       pgprotval_t user_pgprot, kern_pgprot, vecs_pgprot;
>         int cpu_arch = cpu_architecture();
>         int i;

I have an additional hunk for this file as pmds are 64 bits wide with
LPAE (fixup pushed to the arm-lpae branch):

@@ -62,7 +62,7 @@ EXPORT_SYMBOL(pgprot_kernel);
 struct cachepolicy {
 	const char	policy[16];
 	unsigned int	cr_mask;
-	unsigned int	pmd;
+	pmdval_t	pmd;
 	pteval_t	pte;
 }; 

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format
  2011-01-24 17:55 ` [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format Catalin Marinas
  2011-02-03 17:09   ` Catalin Marinas
@ 2011-02-03 17:56   ` Russell King - ARM Linux
  2011-02-03 22:00     ` Catalin Marinas
  1 sibling, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-03 17:56 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel

On Mon, Jan 24, 2011 at 05:55:51PM +0000, Catalin Marinas wrote:
> The patch also introduces the L_PGD_SWAPPER flag to mark pgd entries
> pointing to pmd tables pre-allocated in the swapper_pg_dir and avoid
> trying to free them at run-time. This flag is 0 with the classic page
> table format.

This shouldn't be necessary.

> diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
> index 709244c..003587d 100644
> --- a/arch/arm/mm/pgd.c
> +++ b/arch/arm/mm/pgd.c
> @@ -10,6 +10,7 @@
>  #include <linux/mm.h>
>  #include <linux/gfp.h>
>  #include <linux/highmem.h>
> +#include <linux/slab.h>
>  
>  #include <asm/pgalloc.h>
>  #include <asm/page.h>
> @@ -17,6 +18,14 @@
>  
>  #include "mm.h"
>  
> +#ifdef CONFIG_ARM_LPAE
> +#define __pgd_alloc()	kmalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL)
> +#define __pgd_free(pgd)	kfree(pgd)
> +#else
> +#define __pgd_alloc()	(pgd_t *)__get_free_pages(GFP_KERNEL, 2)
> +#define __pgd_free(pgd)	free_pages((unsigned long)pgd, 2)
> +#endif
> +
>  /*
>   * need to get a 16k page for level 1
>   */
> @@ -26,7 +35,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>  	pmd_t *new_pmd, *init_pmd;
>  	pte_t *new_pte, *init_pte;
>  
> -	new_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 2);
> +	new_pgd = __pgd_alloc();
>  	if (!new_pgd)
>  		goto no_pgd;
>  
> @@ -41,12 +50,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>  
>  	clean_dcache_area(new_pgd, PTRS_PER_PGD * sizeof(pgd_t));
>  
> +#ifdef CONFIG_ARM_LPAE
> +	/*
> +	 * Allocate PMD table for modules and pkmap mappings.
> +	 */
> +	new_pmd = pmd_alloc(mm, new_pgd + pgd_index(MODULES_VADDR), 0);
> +	if (!new_pmd)
> +		goto no_pmd;

This should be a copy of the same page tables found in swapper_pg_dir -
that's what the memcpy() above is doing.

> +#endif
> +
>  	if (!vectors_high()) {
>  		/*
>  		 * On ARM, first page must always be allocated since it
>  		 * contains the machine vectors.
>  		 */
> -		new_pmd = pmd_alloc(mm, new_pgd, 0);
> +		new_pmd = pmd_alloc(mm, new_pgd + pgd_index(0), 0);

However, the first pmd table, and the first pte table only need to be
present for the reason stated in the comment, and these need to be
allocated.

>  		if (!new_pmd)
>  			goto no_pmd;
>  
> @@ -66,7 +84,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>  no_pte:
>  	pmd_free(mm, new_pmd);
>  no_pmd:
> -	free_pages((unsigned long)new_pgd, 2);
> +	__pgd_free(new_pgd);
>  no_pgd:
>  	return NULL;
>  }
> @@ -80,20 +98,36 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
>  	if (!pgd_base)
>  		return;
>  
> -	pgd = pgd_base + pgd_index(0);
> -	if (pgd_none_or_clear_bad(pgd))
> -		goto no_pgd;
> +	if (!vectors_high()) {

No, that's wrong.  As FIRST_USER_ADDRESS is nonzero, the first pmd and
pte table will remain allocated in spite of free_pgtables(), so this
results in a memory leak.

> +		pgd = pgd_base + pgd_index(0);
> +		if (pgd_none_or_clear_bad(pgd))
> +			goto no_pgd;
>  
> -	pmd = pmd_offset(pgd, 0);
> -	if (pmd_none_or_clear_bad(pmd))
> -		goto no_pmd;
> +		pmd = pmd_offset(pgd, 0);
> +		if (pmd_none_or_clear_bad(pmd))
> +			goto no_pmd;
>  
> -	pte = pmd_pgtable(*pmd);
> -	pmd_clear(pmd);
> -	pte_free(mm, pte);
> +		pte = pmd_pgtable(*pmd);
> +		pmd_clear(pmd);
> +		pte_free(mm, pte);
>  no_pmd:
> -	pgd_clear(pgd);
> -	pmd_free(mm, pmd);
> +		pgd_clear(pgd);
> +		pmd_free(mm, pmd);
> +	}
>  no_pgd:
> -	free_pages((unsigned long) pgd_base, 2);
> +#ifdef CONFIG_ARM_LPAE
> +	/*
> +	 * Free modules/pkmap or identity pmd tables.
> +	 */
> +	for (pgd = pgd_base; pgd < pgd_base + PTRS_PER_PGD; pgd++) {
> +		if (pgd_none_or_clear_bad(pgd))
> +			continue;
> +		if (pgd_val(*pgd) & L_PGD_SWAPPER)
> +			continue;
> +		pmd = pmd_offset(pgd, 0);
> +		pgd_clear(pgd);
> +		pmd_free(mm, pmd);
> +	}
> +#endif

And as kernel mappings in the pgd above TASK_SIZE are supposed to be
identical across all page tables, this shouldn't be necessary.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format
  2011-02-03 17:56   ` Russell King - ARM Linux
@ 2011-02-03 22:00     ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-03 22:00 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: linux-arm-kernel, linux-kernel

On 3 February 2011 17:56, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Mon, Jan 24, 2011 at 05:55:51PM +0000, Catalin Marinas wrote:
>> The patch also introduces the L_PGD_SWAPPER flag to mark pgd entries
>> pointing to pmd tables pre-allocated in the swapper_pg_dir and avoid
>> trying to free them at run-time. This flag is 0 with the classic page
>> table format.
>
> This shouldn't be necessary.

I tried hard to find a simple way around this but couldn't, so any
suggestion is welcomed. Basically we have two situations where
pgd_alloc/pgd_free are called: (1) new user mm and (2) identity
mapping. As long as we allocate a PMD for the modules/pkmap mappings,
we need to make sure it is freed (more on why this allocation is
needed below).

For (1), we can (safely?) assume that we always have a vma in the same
1GB range as MODULES_VADDR. I suspect the stack always ends up at the
top of TASK_SIZE.

For (2), there is no guarantee that this PMD is freed, so we need
explicit freeing in pgd_free().

But we can't simply try to free the previously allocated PMD
corresponding to MODULES_VADDR. There is a situation when the user
page tables had been cleared and we get an abort for modules/pkmap. We
then copy (safely, as it's only used temporarily) the corresponding
pgd_k entry (1GB) into the soon-to-be-freed pgd. At this point
pgd_free() would try to free the PMD from swapper_pg_dir and that's
not possible.

The L_PGD_SWAPPER also comes in handy when setting up identity
mappings. Since the top PGD entries (starting with PAGE_OFFSET >>
PGDIR_SHIFT) are copied by pgd_alloc from swapper_pg_dir, we don't
want the init pgd being corrupted when PHYS_OFFSET > PAGE_OFFSET.
Hence we check L_PGD_SWAPPER and allocate another PMD if necessary.
But at some point we need to free such PMD and can't blindly try to
free the swapper_pg_dir pages.

>> diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
>> index 709244c..003587d 100644
>> --- a/arch/arm/mm/pgd.c
>> +++ b/arch/arm/mm/pgd.c
>> @@ -10,6 +10,7 @@
>>  #include <linux/mm.h>
>>  #include <linux/gfp.h>
>>  #include <linux/highmem.h>
>> +#include <linux/slab.h>
>>
>>  #include <asm/pgalloc.h>
>>  #include <asm/page.h>
>> @@ -17,6 +18,14 @@
>>
>>  #include "mm.h"
>>
>> +#ifdef CONFIG_ARM_LPAE
>> +#define __pgd_alloc()        kmalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL)
>> +#define __pgd_free(pgd)      kfree(pgd)
>> +#else
>> +#define __pgd_alloc()        (pgd_t *)__get_free_pages(GFP_KERNEL, 2)
>> +#define __pgd_free(pgd)      free_pages((unsigned long)pgd, 2)
>> +#endif
>> +
>>  /*
>>   * need to get a 16k page for level 1
>>   */
>> @@ -26,7 +35,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>>       pmd_t *new_pmd, *init_pmd;
>>       pte_t *new_pte, *init_pte;
>>
>> -     new_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL, 2);
>> +     new_pgd = __pgd_alloc();
>>       if (!new_pgd)
>>               goto no_pgd;
>>
>> @@ -41,12 +50,21 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>>
>>       clean_dcache_area(new_pgd, PTRS_PER_PGD * sizeof(pgd_t));
>>
>> +#ifdef CONFIG_ARM_LPAE
>> +     /*
>> +      * Allocate PMD table for modules and pkmap mappings.
>> +      */
>> +     new_pmd = pmd_alloc(mm, new_pgd + pgd_index(MODULES_VADDR), 0);
>> +     if (!new_pmd)
>> +             goto no_pmd;
>
> This should be a copy of the same page tables found in swapper_pg_dir -
> that's what the memcpy() above is doing.

The memcpy() above only copies between 1 and 3 entries in the pgd_k
(corresponding to 1 to 3GB of kernel space). It doesn't copy the entry
corresponding to 1GB below PAGE_OFFSET that would be used by modules.
We need to allocate a new PMD for that.

The problem with the current memory map is that one PGD entry covers
1GB and the one corresponding to MODULES_VADDR is shared between user
and kernel. An alternative would be to move the kernel a bit higher
(and allow MODULES_VADDR to start at a 1GB boundary). The PAGE_OFFSET
would be something like 3GB + 16M, though I'm not sure what other
implications this would have.

Yet another alternative which I don't like at all is to pretend that
we only have 2 levels of page tables and always allocate 4 PMD pages +
1 PGD.

>> +#endif
>> +
>>       if (!vectors_high()) {
>>               /*
>>                * On ARM, first page must always be allocated since it
>>                * contains the machine vectors.
>>                */
>> -             new_pmd = pmd_alloc(mm, new_pgd, 0);
>> +             new_pmd = pmd_alloc(mm, new_pgd + pgd_index(0), 0);
>
> However, the first pmd table, and the first pte table only need to be
> present for the reason stated in the comment, and these need to be
> allocated.

The above change is harmless; I just added it for correctness.

>>               if (!new_pmd)
>>                       goto no_pmd;
>>
>> @@ -66,7 +84,7 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>>  no_pte:
>>       pmd_free(mm, new_pmd);
>>  no_pmd:
>> -     free_pages((unsigned long)new_pgd, 2);
>> +     __pgd_free(new_pgd);
>>  no_pgd:
>>       return NULL;
>>  }
>> @@ -80,20 +98,36 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd_base)
>>       if (!pgd_base)
>>               return;
>>
>> -     pgd = pgd_base + pgd_index(0);
>> -     if (pgd_none_or_clear_bad(pgd))
>> -             goto no_pgd;
>> +     if (!vectors_high()) {
>
> No, that's wrong.  As FIRST_USER_ADDRESS is nonzero, the first pmd and
> pte table will remain allocated in spite of free_pgtables(), so this
> results in a memory leak.

I agree (and I replied to my own post earlier today); we found the
leak in testing. It is safe to remove this hunk (I had a thought that
it may trigger a bad pmd because of the identity mapping, but that's
cleared already via identity_mapping_del()).

>> +             pgd = pgd_base + pgd_index(0);
>> +             if (pgd_none_or_clear_bad(pgd))
>> +                     goto no_pgd;
>>
>> -     pmd = pmd_offset(pgd, 0);
>> -     if (pmd_none_or_clear_bad(pmd))
>> -             goto no_pmd;
>> +             pmd = pmd_offset(pgd, 0);
>> +             if (pmd_none_or_clear_bad(pmd))
>> +                     goto no_pmd;
>>
>> -     pte = pmd_pgtable(*pmd);
>> -     pmd_clear(pmd);
>> -     pte_free(mm, pte);
>> +             pte = pmd_pgtable(*pmd);
>> +             pmd_clear(pmd);
>> +             pte_free(mm, pte);
>>  no_pmd:
>> -     pgd_clear(pgd);
>> -     pmd_free(mm, pmd);
>> +             pgd_clear(pgd);
>> +             pmd_free(mm, pmd);
>> +     }
>>  no_pgd:
>> -     free_pages((unsigned long) pgd_base, 2);
>> +#ifdef CONFIG_ARM_LPAE
>> +     /*
>> +      * Free modules/pkmap or identity pmd tables.
>> +      */
>> +     for (pgd = pgd_base; pgd < pgd_base + PTRS_PER_PGD; pgd++) {
>> +             if (pgd_none_or_clear_bad(pgd))
>> +                     continue;
>> +             if (pgd_val(*pgd) & L_PGD_SWAPPER)
>> +                     continue;
>> +             pmd = pmd_offset(pgd, 0);
>> +             pgd_clear(pgd);
>> +             pmd_free(mm, pmd);
>> +     }
>> +#endif
>
> And as kernel mappings in the pgd above TASK_SIZE are supposed to be
> identical across all page tables, this shouldn't be necessary.

For tasks yes, but what about the identity mapping allocations? We
could change the name of pgd_alloc() and add another parameter to
distinguish between these two scenarios.

-- 
Catalin

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE
  2011-01-24 17:55 ` [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE Catalin Marinas
@ 2011-02-12  9:56   ` Russell King - ARM Linux
  0 siblings, 0 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12  9:56 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel

On Mon, Jan 24, 2011 at 05:55:44PM +0000, Catalin Marinas wrote:
> With LPAE we no longer have software bits in a separate Linux PTE and
> the early_pte_alloc() function should pass PTE_HWTABLE_OFF +
> PTE_HWTABLE_SIZE to early_alloc() to avoid allocating extra memory.

This one can also go to the patch system too.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes
  2011-01-24 17:55 ` [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes Catalin Marinas
@ 2011-02-12  9:59   ` Russell King - ARM Linux
  0 siblings, 0 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12  9:59 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Jan 24, 2011 at 05:55:45PM +0000, Catalin Marinas wrote:
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index 5ea4fb7..3d23f0f 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -449,7 +449,7 @@ static int __init arm_add_memory(unsigned long start, unsigned long size)
>  
>  	if (meminfo.nr_banks >= NR_BANKS) {
>  		printk(KERN_CRIT "NR_BANKS too low, "
> -			"ignoring memory at %#lx\n", start);
> +			"ignoring memory at %#08llx\n", (long long)start);

This is not equivalent.  %#lx produces '0x0'.  %#08llx produces '0x000000'
not '0x00000000' - the '0x' is included in the field width.  So you want
'%#010llx' or '0x%08llx' - there's no real advantage to either.  Or just
convert '%#lx' to '%#llx'.
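
For a non-zero value, say 0x1000 (illustrative):

	%#08llx   -> "0x001000"     (the "0x" counts towards the width)
	%#010llx  -> "0x00001000"
	0x%08llx  -> "0x00001000"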

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-01-24 17:55 ` [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses Catalin Marinas
@ 2011-02-12 10:28   ` Russell King - ARM Linux
  2011-02-15 11:52     ` Will Deacon
  2011-02-19 18:26   ` Russell King - ARM Linux
  1 sibling, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12 10:28 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Jan 24, 2011 at 05:55:57PM +0000, Catalin Marinas wrote:
>  arch/arm/include/asm/memory.h     |   17 +++++++++--------
>  arch/arm/include/asm/outercache.h |   14 ++++++++------
>  arch/arm/include/asm/pgtable.h    |    2 +-
>  arch/arm/include/asm/setup.h      |    2 +-
>  arch/arm/kernel/setup.c           |    5 +++--
>  arch/arm/mm/init.c                |    6 +++---
>  arch/arm/mm/mmu.c                 |    7 ++++---
>  7 files changed, 29 insertions(+), 24 deletions(-)

If this is split up into four separate patches, we can probably sort out
merging this for the upcoming window.

asm/memory.h will conflict non-trivially with p2v patch set, but I think
we can merge the changes to everything but __virt_to_phys/__phys_to_virt.

asm/outercache.h changes are stand-alone.

asm/pgtable.h looks like it could use __pfn_to_phys(pfn) rather than
adding the cast, and can be combined with mm/init.c and mm/mmu.c.

asm/setup.h and arch/arm/kernel/setup.c form another logical group.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition
  2011-01-24 17:55 ` [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition Catalin Marinas
@ 2011-02-12 10:34   ` Russell King - ARM Linux
  2011-02-14 13:01     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12 10:34 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Jan 24, 2011 at 05:55:58PM +0000, Catalin Marinas wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> This patch uses the types.h implementation in asm-generic to define the
> dma_addr_t type as the same width as phys_addr_t.
> 
> NOTE: this is a temporary patch until the corresponding patches unifying
> the dma_addr_t and removing the dma64_addr_t are merged into mainline.

I'm not too sure about this patch.  All of the DMA devices we have only
take 32-bit addresses for their DMA, so making dma_addr_t 64-bit seems
wrong as we'll implicitly truncate these addresses.

As ARM platforms don't (sanely) support DMA, I think dropping this patch
for the time being would be a good idea, and stick with 32-bit dma_addr_t,
especially as we need to first do a sweep for dma_addr_t usage in device
driver structures (such as dma engine scatter lists.)  These really should
use __le32/__be32/u32 depending on whether they're little endian, big
endian or native endian.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem
  2011-01-24 17:55 ` [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem Catalin Marinas
@ 2011-02-12 10:36   ` Russell King - ARM Linux
  0 siblings, 0 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12 10:36 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Jan 24, 2011 at 05:55:59PM +0000, Catalin Marinas wrote:
> @@ -782,7 +782,8 @@ static void __init sanity_check_meminfo(void)
>  
>  #ifdef CONFIG_HIGHMEM
>  		if (__va(bank->start) > vmalloc_min ||
> -		    __va(bank->start) < (void *)PAGE_OFFSET)
> +		    __va(bank->start) < (void *)PAGE_OFFSET ||
> +		    bank->start > ULONG_MAX)

I think this check should be first, so that we don't try to evaluate __va()
on phys addresses > ULONG_MAX, possibly resulting in truncation.
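
As a sketch of the reordering (not the actual follow-up patch), with ||
short-circuiting left to right so banks above 4GB never reach __va():

#ifdef CONFIG_HIGHMEM
		if (bank->start > ULONG_MAX ||
		    __va(bank->start) > vmalloc_min ||
		    __va(bank->start) < (void *)PAGE_OFFSET)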

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support
  2011-01-24 17:55 ` [PATCH v4 12/19] ARM: LPAE: Add context switching support Catalin Marinas
@ 2011-02-12 10:44   ` Russell King - ARM Linux
  2011-02-14 13:24     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12 10:44 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel

On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> +#ifdef CONFIG_ARM_LPAE
> +#define cpu_set_asid(asid) {						\
> +	unsigned long ttbl, ttbh;					\
> +	asm("	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n"	\
> +	    "	mov	%1, %1, lsl #(48 - 32)		@ set ASID\n"	\
> +	    "	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n"	\
> +	    : "=r" (ttbl), "=r" (ttbh)					\
> +	    : "r" (asid & ~ASID_MASK));					\

This is wrong:
1. It does nothing with %2 (the new asid)
2. It shifts the high address bits of TTBR0 left 16 places each time it's
   called.

> +}
> +#else
> +#define cpu_set_asid(asid) \
> +	asm("	mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (asid))
> +#endif
> +
>  /*
>   * We fork()ed a process, and we need a new context for the child
>   * to run in.  We reserve version 0 for initial tasks so we will
> @@ -37,7 +51,7 @@ void __init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>  static void flush_context(void)
>  {
>  	/* set the reserved ASID before flushing the TLB */
> -	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (0));
> +	cpu_set_asid(0);
>  	isb();
>  	local_flush_tlb_all();
>  	if (icache_is_vivt_asid_tagged()) {
> @@ -99,7 +113,7 @@ static void reset_context(void *info)
>  	set_mm_context(mm, asid);
>  
>  	/* set the new ASID */
> -	asm("mcr	p15, 0, %0, c13, c0, 1\n" : : "r" (mm->context.id));
> +	cpu_set_asid(mm->context.id);
>  	isb();
>  }
>  
> diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
> index a22b89f..ed4f3cb 100644
> --- a/arch/arm/mm/proc-v7.S
> +++ b/arch/arm/mm/proc-v7.S
> @@ -117,6 +117,11 @@ ENTRY(cpu_v7_switch_mm)
>  #ifdef CONFIG_MMU
>  	mov	r2, #0
>  	ldr	r1, [r1, #MM_CONTEXT_ID]	@ get mm->context.id

How about swapping the order here to avoid r1 being referenced in the very
next instruction?

> +#ifdef CONFIG_ARM_LPAE
> +	and	r3, r1, #0xff
> +	mov	r3, r3, lsl #(48 - 32)		@ ASID
> +	mcrr	p15, 0, r0, r3, c2		@ set TTB 0
> +#else	/* !CONFIG_ARM_LPAE */
>  	ALT_SMP(orr	r0, r0, #TTB_FLAGS_SMP)
>  	ALT_UP(orr	r0, r0, #TTB_FLAGS_UP)
>  #ifdef CONFIG_ARM_ERRATA_430973
> @@ -124,9 +129,10 @@ ENTRY(cpu_v7_switch_mm)
>  #endif
>  	mcr	p15, 0, r2, c13, c0, 1		@ set reserved context ID
>  	isb
> -1:	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
> +	mcr	p15, 0, r0, c2, c0, 0		@ set TTB 0
>  	isb
>  	mcr	p15, 0, r1, c13, c0, 1		@ set context ID
> +#endif	/* CONFIG_ARM_LPAE */
>  	isb
>  #endif
>  	mov	pc, lr
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition
  2011-02-12 10:34   ` Russell King - ARM Linux
@ 2011-02-14 13:01     ` Catalin Marinas
  2011-02-15 14:27       ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-02-14 13:01 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Sat, 2011-02-12 at 10:34 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 24, 2011 at 05:55:58PM +0000, Catalin Marinas wrote:
> > From: Will Deacon <will.deacon@arm.com>
> >
> > This patch uses the types.h implementation in asm-generic to define the
> > dma_addr_t type as the same width as phys_addr_t.
> >
> > NOTE: this is a temporary patch until the corresponding patches unifying
> > the dma_addr_t and removing the dma64_addr_t are merged into mainline.
> 
> I'm not too sure about this patch.  All of the DMA devices we have only
> take 32-bit addresses for their DMA, so making dma_addr_t 64-bit seems
> wrong as we'll implicitly truncate these addresses.

If we don't enable LPAE, dma_addr_t stays 32-bit, so existing platforms
are not affected. With Cortex-A15, new platforms may have PCIe and be
able to access memory beyond 32 bits (and if they didn't support >32-bit
DMA for at least some critical devices, I'm not sure why they would use
an A15).

For things like hard drives it becomes problematic, as pages are allocated
by the VFS layer from highmem and passed to the driver for DMA. If we keep
dma_addr_t at 32-bit, you would need to use DMA bouncing even when the PCIe
device supports >32-bit physical addresses.

> As ARM platforms don't (sanely) support DMA, I think dropping this patch
> for the time being would be a good idea, and stick with 32-bit dma_addr_t,
> especially as we need to first do a sweep for dma_addr_t usage in device
> driver structures (such as dma engine scatter lists.)  These really should
> use __le32/__be32/u32 depending on whether they're little endian, big
> endian or native endian.

Maybe we could make the dma_addr_t size configurable (with the 64-bit
option disabled by default), since I expect there'll be platforms capable
of >32-bit DMA.

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support
  2011-02-12 10:44   ` Russell King - ARM Linux
@ 2011-02-14 13:24     ` Catalin Marinas
  2011-02-19 18:30       ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-02-14 13:24 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: linux-arm-kernel, linux-kernel

On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> > +#ifdef CONFIG_ARM_LPAE
> > +#define cpu_set_asid(asid) {                                         \
> > +     unsigned long ttbl, ttbh;                                       \
> > +     asm("   mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
> > +         "   mov     %1, %1, lsl #(48 - 32)          @ set ASID\n"   \
> > +         "   mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
> > +         : "=r" (ttbl), "=r" (ttbh)                                  \
> > +         : "r" (asid & ~ASID_MASK));                                 \
> 
> This is wrong:
> 1. It does nothing with %2 (the new asid)
> 2. it shifts the high address bits of TTBR0 left 16 places each time its
>    called.

It was actually worse: because it had output operands but wasn't marked
volatile, the asm wasn't even emitted. Some early-clobber is also needed.
What about this:

#define cpu_set_asid(asid) {						\
	unsigned long ttbl, ttbh;					\
	asm volatile(							\
	"	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n"	\
	"	mov	%1, %2, lsl #(48 - 32)		@ set ASID\n"	\
	"	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n"	\
	: "=&r" (ttbl), "=&r" (ttbh)					\
	: "r" (asid & ~ASID_MASK));					\
}

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-12 10:28   ` Russell King - ARM Linux
@ 2011-02-15 11:52     ` Will Deacon
  2011-02-15 12:35       ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Will Deacon @ 2011-02-15 11:52 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

Hi Russell,

On Sat, 2011-02-12 at 10:28 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 24, 2011 at 05:55:57PM +0000, Catalin Marinas wrote:
> >  arch/arm/include/asm/memory.h     |   17 +++++++++--------
> >  arch/arm/include/asm/outercache.h |   14 ++++++++------
> >  arch/arm/include/asm/pgtable.h    |    2 +-
> >  arch/arm/include/asm/setup.h      |    2 +-
> >  arch/arm/kernel/setup.c           |    5 +++--
> >  arch/arm/mm/init.c                |    6 +++---
> >  arch/arm/mm/mmu.c                 |    7 ++++---
> >  7 files changed, 29 insertions(+), 24 deletions(-)
> 
> If this is split up into four separate patches, we can probably sort out
> merging this for the upcoming window.
> 
Excellent! I've split the patch up into four distinct parts, as per your
suggestions. I've submitted these to your patch system (6670/1-6673/1)
alongside a fixed version of the printf format patch (6669/1) because
without that, you get a bunch of compiler warnings.

Will



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 11:52     ` Will Deacon
@ 2011-02-15 12:35       ` Russell King - ARM Linux
  2011-02-15 12:39         ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-15 12:35 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Tue, Feb 15, 2011 at 11:52:22AM +0000, Will Deacon wrote:
> Excellent! I've split the patch up into four distinct parts, as per your
> suggestions. I've submitted these to your patch system (6670/1-6673/1)
> alongside a fixed version of the printf format patch (6669/1) because
> without that, you get a bunch of compiler warnings.

Except 6669/1 still suffers from "%#08llx".  For a value of one, that prints:

	0x000001

five zeros following the 0x rather than seven.  The width in the format
string includes the 0x prefix.
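
A quick user-space illustration of the width behaviour (plain C, just to
show the two forms side by side):

#include <stdio.h>

int main(void)
{
	unsigned long long v = 1;

	printf("%#08llx\n", v);		/* 0x000001 - the '0x' counts towards the width of 8 */
	printf("0x%08llx\n", v);	/* 0x00000001 - full 8 hex digits */
	return 0;
}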

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 12:35       ` Russell King - ARM Linux
@ 2011-02-15 12:39         ` Catalin Marinas
  2011-02-15 13:37           ` Will Deacon
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-02-15 12:39 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Will Deacon, linux-arm-kernel, linux-kernel

On Tue, 2011-02-15 at 12:35 +0000, Russell King - ARM Linux wrote:
> On Tue, Feb 15, 2011 at 11:52:22AM +0000, Will Deacon wrote:
> > Excellent! I've split the patch up into four distinct parts, as per your
> > suggestions. I've submitted these to your patch system (6670/1-6673/1)
> > alongside a fixed version of the printf format patch (6669/1) because
> > without that, you get a bunch of compiler warnings.
> 
> Except 6669/1 still suffers from "%#08llx".  For a value of one, that prints:
> 
>         0x000001
> 
> five zeros following the 0x rather than seven.  The width in the format
> string includes the 0x prefix.

Ah, sorry, I only fixed one case and forgot about the rest (and misled
Will).

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 12:39         ` Catalin Marinas
@ 2011-02-15 13:37           ` Will Deacon
  2011-02-15 14:23             ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Will Deacon @ 2011-02-15 13:37 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Russell King - ARM Linux, linux-arm-kernel, linux-kernel

On Tue, 2011-02-15 at 12:39 +0000, Catalin Marinas wrote:
> On Tue, 2011-02-15 at 12:35 +0000, Russell King - ARM Linux wrote:
> > On Tue, Feb 15, 2011 at 11:52:22AM +0000, Will Deacon wrote:
> > > Excellent! I've split the patch up into four distinct parts, as per your
> > > suggestions. I've submitted these to your patch system (6670/1-6673/1)
> > > alongside a fixed version of the printf format patch (6669/1) because
> > > without that, you get a bunch of compiler warnings.
> >
> > Except 6669/1 still suffers from "%#08llx".  For a value of one, that prints:
> >
> >         0x000001
> >
> > five zeros following the 0x rather than seven.  The width in the format
> > string includes the 0x prefix.
> 
> Ah, sorry, I only fixed one case and forgot about the rest (and
> misleading Will).
> 

I should've spotted this either way. I've superseded the old patch with
6674/1.

Apologies for the confusion,

Will




^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 13:37           ` Will Deacon
@ 2011-02-15 14:23             ` Russell King - ARM Linux
  2011-02-15 15:26               ` Will Deacon
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-15 14:23 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Tue, Feb 15, 2011 at 01:37:07PM +0000, Will Deacon wrote:
> I should've spotted this either way. I've superseded the old patch with
> 6674/1.

One additional thing that I think has been lost.  I said in the original
reply to Catalin:
| asm/memory.h will conflict non-trivially with p2v patch set, but I think
| we can merge the changes to everything but __virt_to_phys/__phys_to_virt.

So 6670/1 which I'm intending to apply to the p2v branch can't be merged
as-is.  The ideal solution would be a version of 6670/1 to apply on top
of the existing p2v branch.

Thanks.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition
  2011-02-14 13:01     ` Catalin Marinas
@ 2011-02-15 14:27       ` Russell King - ARM Linux
  2011-02-15 15:24         ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-15 14:27 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Feb 14, 2011 at 01:01:30PM +0000, Catalin Marinas wrote:
> Maybe we could make the dma_addr_t size configurable (and disabled by
> default) since I expect there'll be platforms capable of >32-bit DMA.

It would be far better to fix the dma_addr_t abuses.  I've already fixed
those in the pl08x driver:

struct lli {
        dma_addr_t src;
        dma_addr_t dst;
        dma_addr_t next;
        u32 cctl;
};

became:

struct pl08x_lli {
        u32 src;
        u32 dst;
        u32 lli;
        u32 cctl;
};

and similar needs to be done elsewhere in ARM-specific drivers.
dma_addr_t has no business being in structures that describe data which
the hardware accesses.
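
As a sketch of that point (the descriptor below is made up, not from any
real driver): hardware-visible fields get an explicit fixed-width layout
type and a conversion at the assignment, which also keeps the truncation
from dma_addr_t in one obvious place:

#include <linux/types.h>
#include <asm/byteorder.h>

struct hw_lli {			/* little-endian, as the hardware reads it */
	__le32 src;
	__le32 dst;
	__le32 next;
	__le32 cctl;
};

static void hw_lli_fill(struct hw_lli *lli, dma_addr_t src, dma_addr_t dst)
{
	lli->src = cpu_to_le32(src);	/* explicit 32-bit truncation point */
	lli->dst = cpu_to_le32(dst);
}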

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition
  2011-02-15 14:27       ` Russell King - ARM Linux
@ 2011-02-15 15:24         ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-15 15:24 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Tue, 2011-02-15 at 14:27 +0000, Russell King - ARM Linux wrote:
> On Mon, Feb 14, 2011 at 01:01:30PM +0000, Catalin Marinas wrote:
> > Maybe we could make the dma_addr_t size configurable (and disabled by
> > default) since I expect there'll be platforms capable of >32-bit DMA.
> 
> It would be far better to fix the dma_addr_t abuses.

That's not a simple task; I have no idea how many drivers get used on
ARM systems. We can defer this until people start using Cortex-A15 in
real hardware and fix only the drivers they need.

BTW, can sparse help here (I haven't used it much)?

-- 
Catalin



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 14:23             ` Russell King - ARM Linux
@ 2011-02-15 15:26               ` Will Deacon
  2011-02-15 15:48                 ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Will Deacon @ 2011-02-15 15:26 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

Hi Russell,

On Tue, 2011-02-15 at 14:23 +0000, Russell King - ARM Linux wrote:
> On Tue, Feb 15, 2011 at 01:37:07PM +0000, Will Deacon wrote:
> > I should've spotted this either way. I've superseded the old patch with
> > 6674/1.
> 
> One additional thing that I think has been lost.  I said in the original
> reply to Catalin:
> | asm/memory.h will conflict non-trivially with p2v patch set, but I think
> | we can merge the changes to everything but __virt_to_phys/__phys_to_virt.
> 
> So 6670/1 which I'm intending to apply to the p2v branch can't be merged
> as-is.  The ideal solution would be a version of 6670/1 to apply on top
> of the existing p2v branch.
> 

The conflict with the p2v branch is fairly hefty, but something like
this should do (if you're happy I'll submit it to replace 6670/1):

Note that because the v2p macros only work for lowmem, I've not bothered
to add casts for the __v2p macros (rather, I've just changed the types
of the static inline functions). This simplifies the code and means we
can stay clear of the runtime fixup stuff.


diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 015fd5e..791cb3e 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -15,6 +15,7 @@
 
 #include <linux/compiler.h>
 #include <linux/const.h>
+#include <linux/types.h>
 #include <mach/memory.h>
 #include <asm/sizes.h>
 
@@ -135,8 +136,8 @@
 /*
  * Convert a physical address to a Page Frame Number and back
  */
-#define        __phys_to_pfn(paddr)    ((paddr) >> PAGE_SHIFT)
-#define        __pfn_to_phys(pfn)      ((pfn) << PAGE_SHIFT)
+#define        __phys_to_pfn(paddr)    ((unsigned long)((paddr) >> PAGE_SHIFT))
+#define        __pfn_to_phys(pfn)      ((phys_addr_t)(pfn) << PAGE_SHIFT)
 
 /*
  * Convert a page to/from a physical address
@@ -234,12 +235,12 @@ static inline unsigned long __phys_to_virt(unsigned long x)
  * translation for translating DMA addresses.  Use the driver
  * DMA support - see dma-mapping.h.
  */
-static inline unsigned long virt_to_phys(void *x)
+static inline phys_addr_t virt_to_phys(void *x)
 {
        return __virt_to_phys((unsigned long)(x));
 }
 
-static inline void *phys_to_virt(unsigned long x)
+static inline void *phys_to_virt(phys_addr_t x)
 {
        return (void *)(__phys_to_virt((unsigned long)(x)));
 }
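
The casts above matter because the shift has to happen in the wider type;
a user-space sketch of the failure mode (the PAGE_SHIFT and phys_addr_t
stand-ins here are assumptions, not kernel code):

typedef unsigned long long phys_addr_t;		/* LPAE-style 64-bit stand-in */
#define PAGE_SHIFT	12

#define bad_pfn_to_phys(pfn)	((pfn) << PAGE_SHIFT)			/* shifts in the pfn's own type */
#define __pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)	/* widened before the shift */

/* For pfn = 0x100000 (the 4GB boundary) with a 32-bit unsigned long:
 *   bad_pfn_to_phys(0x100000UL) -> 0x00000000 (truncated)
 *   __pfn_to_phys(0x100000UL)   -> 0x100000000
 */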


Cheers,

Will



^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-15 15:26               ` Will Deacon
@ 2011-02-15 15:48                 ` Russell King - ARM Linux
  0 siblings, 0 replies; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-15 15:48 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Tue, Feb 15, 2011 at 03:26:49PM +0000, Will Deacon wrote:
> The conflict with the p2v branch is fairly hefty, but something like
> this should do (if you're happy I'll submit it to replace 6670/1):

I was thinking that was the case - which is why I wanted this split up
so this can be tackled separately.

> Note that because the v2p macros only work for lowmem, I've not bothered
> to add casts for the __v2p macros (rather, I've just changed the types
> of the static inline functions). This simplifies the code and means we
> can stay clear of the runtime fixup stuff.

This patch looks good enough, thanks.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-01-24 17:55 ` [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses Catalin Marinas
  2011-02-12 10:28   ` Russell King - ARM Linux
@ 2011-02-19 18:26   ` Russell King - ARM Linux
  2011-02-21 14:36     ` Will Deacon
  1 sibling, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-19 18:26 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel, Will Deacon

On Mon, Jan 24, 2011 at 05:55:57PM +0000, Catalin Marinas wrote:
> From: Will Deacon <will.deacon@arm.com>
> 
> The unsigned long datatype is not sufficient for mapping physical addresses
> >= 4GB.
> 
> This patch ensures that the phys_addr_t datatype is used to represent
> physical addresses which may be beyond the range of an unsigned long.
> The virt <-> phys macros are updated accordingly to ensure that virtual
> addresses can remain as they are.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

This patch needs some more things fixed to prevent warnings:

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index a81355d..6cf76b3 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -809,9 +809,10 @@ static void __init sanity_check_meminfo(void)
 		 */
 		if (__va(bank->start) >= vmalloc_min ||
 		    __va(bank->start) < (void *)PAGE_OFFSET) {
-			printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
+			printk(KERN_NOTICE "Ignoring RAM at %.8llx-%.8llx "
 			       "(vmalloc region overlap).\n",
-			       bank->start, bank->start + bank->size - 1);
+			       (unsigned long long)bank->start,
+			       (unsigned long long)bank->start + bank->size - 1);
 			continue;
 		}
 
@@ -822,10 +823,11 @@ static void __init sanity_check_meminfo(void)
 		if (__va(bank->start + bank->size) > vmalloc_min ||
 		    __va(bank->start + bank->size) < __va(bank->start)) {
 			unsigned long newsize = vmalloc_min - __va(bank->start);
-			printk(KERN_NOTICE "Truncating RAM at %.8lx-%.8lx "
-			       "to -%.8lx (vmalloc region overlap).\n",
-			       bank->start, bank->start + bank->size - 1,
-			       bank->start + newsize - 1);
+			printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx "
+			       "to -%.8llx (vmalloc region overlap).\n",
+			       (unsigned long long)bank->start,
+			       (unsigned long long)bank->start + bank->size - 1,
+			       (unsigned long long)bank->start + newsize - 1);
 			bank->size = newsize;
 		}
 #endif

^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support
  2011-02-14 13:24     ` Catalin Marinas
@ 2011-02-19 18:30       ` Russell King - ARM Linux
  2011-02-19 23:16         ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-19 18:30 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-arm-kernel, linux-kernel

On Mon, Feb 14, 2011 at 01:24:06PM +0000, Catalin Marinas wrote:
> On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
> > On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
> > > +#ifdef CONFIG_ARM_LPAE
> > > +#define cpu_set_asid(asid) {                                         \
> > > +     unsigned long ttbl, ttbh;                                       \
> > > +     asm("   mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
> > > +         "   mov     %1, %1, lsl #(48 - 32)          @ set ASID\n"   \
> > > +         "   mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
> > > +         : "=r" (ttbl), "=r" (ttbh)                                  \
> > > +         : "r" (asid & ~ASID_MASK));                                 \
> > 
> > This is wrong:
> > 1. It does nothing with %2 (the new asid)
> > 2. it shifts the high address bits of TTBR0 left 16 places each time its
> >    called.
> 
> It was worse actually, not even compiled in because it had output
> arguments but it wasn't volatile. Some early clobber is also needed.
> What about this:
> 
> #define cpu_set_asid(asid) {						\
> 	unsigned long ttbl, ttbh;					\
> 	asm volatile(							\
> 	"	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n"	\
> 	"	mov	%1, %2, lsl #(48 - 32)		@ set ASID\n"	\
> 	"	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n"	\
> 	: "=&r" (ttbl), "=&r" (ttbh)					\
> 	: "r" (asid & ~ASID_MASK));					\
> }

So we don't care about the low 16 bits of ttbh which can be simply zeroed?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 12/19] ARM: LPAE: Add context switching support
  2011-02-19 18:30       ` Russell King - ARM Linux
@ 2011-02-19 23:16         ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-02-19 23:16 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel


On Saturday, 19 February 2011, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Mon, Feb 14, 2011 at 01:24:06PM +0000, Catalin Marinas wrote:
>> On Sat, 2011-02-12 at 10:44 +0000, Russell King - ARM Linux wrote:
>> > On Mon, Jan 24, 2011 at 05:55:54PM +0000, Catalin Marinas wrote:
>> > > +#ifdef CONFIG_ARM_LPAE
>> > > +#define cpu_set_asid(asid) {                                         \
>> > > +     unsigned long ttbl, ttbh;                                       \
>> > > +     asm("   mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
>> > > +         "   mov     %1, %1, lsl #(48 - 32)          @ set ASID\n"   \
>> > > +         "   mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
>> > > +         : "=r" (ttbl), "=r" (ttbh)                                  \
>> > > +         : "r" (asid & ~ASID_MASK));                                 \
>> >
>> > This is wrong:
>> > 1. It does nothing with %2 (the new asid)
>> > 2. it shifts the high address bits of TTBR0 left 16 places each time its
>> >    called.
>>
>> It was worse actually, not even compiled in because it had output
>> arguments but it wasn't volatile. Some early clobber is also needed.
>> What about this:
>>
>> #define cpu_set_asid(asid) {                                          \
>>       unsigned long ttbl, ttbh;                                       \
>>       asm volatile(                                                   \
>>       "       mrrc    p15, 0, %0, %1, c2              @ read TTBR0\n" \
>>       "       mov     %1, %2, lsl #(48 - 32)          @ set ASID\n"   \
>>       "       mcrr    p15, 0, %0, %1, c2              @ set TTBR0\n"  \
>>       : "=&r" (ttbl), "=&r" (ttbh)                                    \
>>       : "r" (asid & ~ASID_MASK));                                     \
>> }
>
> So we don't care about the low 16 bits of ttbh which can be simply zeroed?

Since the pgd is always allocated from lowmem, it sits within the 32-bit
physical address range, so we can safely zero the low bits of ttbh. I
could add a comment here to that effect.
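
Something like this, roughly (a sketch of the macro already quoted above
with the rationale spelled out, not the final submitted version):

/*
 * The pgd is always allocated from lowmem, so TTBR0 bits [39:32]
 * (the low bits of the high word) are zero; writing just the ASID
 * into bits [55:48] via the high word is sufficient.
 */
#define cpu_set_asid(asid) {						\
	unsigned long ttbl, ttbh;					\
	asm volatile(							\
	"	mrrc	p15, 0, %0, %1, c2		@ read TTBR0\n"	\
	"	mov	%1, %2, lsl #(48 - 32)		@ set ASID\n"	\
	"	mcrr	p15, 0, %0, %1, c2		@ set TTBR0\n"	\
	: "=&r" (ttbl), "=&r" (ttbh)					\
	: "r" (asid & ~ASID_MASK));					\
}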

Catalin

-- 
Catalin

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-19 18:26   ` Russell King - ARM Linux
@ 2011-02-21 14:36     ` Will Deacon
  2011-02-21 14:58       ` Russell King - ARM Linux
  0 siblings, 1 reply; 59+ messages in thread
From: Will Deacon @ 2011-02-21 14:36 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

Hi Russell,

On Sat, 2011-02-19 at 18:26 +0000, Russell King - ARM Linux wrote:
> On Mon, Jan 24, 2011 at 05:55:57PM +0000, Catalin Marinas wrote:
> > From: Will Deacon <will.deacon@arm.com>
> >
> > The unsigned long datatype is not sufficient for mapping physical addresses
> > >= 4GB.
> >
> > This patch ensures that the phys_addr_t datatype is used to represent
> > physical addresses which may be beyond the range of an unsigned long.
> > The virt <-> phys macros are updated accordingly to ensure that virtual
> > addresses can remain as they are.
> >
> > Signed-off-by: Will Deacon <will.deacon@arm.com>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> This patch needs some more things fixed to prevent warnings:

Ah yes, this is for the non-HIGHMEM case, which I hadn't considered for
LPAE. It's a perfectly reasonable configuration, I suppose, so this needs
fixing.

> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index a81355d..6cf76b3 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -809,9 +809,10 @@ static void __init sanity_check_meminfo(void)
>                  */
>                 if (__va(bank->start) >= vmalloc_min ||
>                     __va(bank->start) < (void *)PAGE_OFFSET) {
> -                       printk(KERN_NOTICE "Ignoring RAM at %.8lx-%.8lx "
> +                       printk(KERN_NOTICE "Ignoring RAM at %.8llx-%.8llx "
>                                "(vmalloc region overlap).\n",
> -                              bank->start, bank->start + bank->size - 1);
> +                              (unsigned long long)bank->start,
> +                              (unsigned long long)bank->start + bank->size - 1);
>                         continue;
>                 }
> 
> @@ -822,10 +823,11 @@ static void __init sanity_check_meminfo(void)
>                 if (__va(bank->start + bank->size) > vmalloc_min ||
>                     __va(bank->start + bank->size) < __va(bank->start)) {
>                         unsigned long newsize = vmalloc_min - __va(bank->start);
> -                       printk(KERN_NOTICE "Truncating RAM at %.8lx-%.8lx "
> -                              "to -%.8lx (vmalloc region overlap).\n",
> -                              bank->start, bank->start + bank->size - 1,
> -                              bank->start + newsize - 1);
> +                       printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx "
> +                              "to -%.8llx (vmalloc region overlap).\n",
> +                              (unsigned long long)bank->start,
> +                              (unsigned long long)bank->start + bank->size - 1,
> +                              (unsigned long long)bank->start + newsize - 1);
>                         bank->size = newsize;
>                 }
>  #endif


Would you like me to submit an additional patch or are you happy merging
this diff in with my ack?

Cheers,

Will



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-21 14:36     ` Will Deacon
@ 2011-02-21 14:58       ` Russell King - ARM Linux
  2011-02-21 15:01         ` Will Deacon
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-02-21 14:58 UTC (permalink / raw)
  To: Will Deacon; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Mon, Feb 21, 2011 at 02:36:47PM +0000, Will Deacon wrote:
> Would you like me to submit an additional patch or are you happy merging
> this diff in with my ack?

The latter - I've already merged it and if you give an ack I'll add that.

Thanks.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses
  2011-02-21 14:58       ` Russell King - ARM Linux
@ 2011-02-21 15:01         ` Will Deacon
  0 siblings, 0 replies; 59+ messages in thread
From: Will Deacon @ 2011-02-21 15:01 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Catalin Marinas, linux-arm-kernel, linux-kernel

On Mon, 2011-02-21 at 14:58 +0000, Russell King - ARM Linux wrote:
> On Mon, Feb 21, 2011 at 02:36:47PM +0000, Will Deacon wrote:
> > Would you like me to submit an additional patch or are you happy merging
> > this diff in with my ack?
> 
> The latter - I've already merged it and if you give an ack I'll add that.
> 
Acked-by: Will Deacon <will.deacon@arm.com>

Thanks,

Will



^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-01-24 21:42     ` Russell King - ARM Linux
  2011-01-25 10:04       ` Catalin Marinas
@ 2011-03-21 12:36       ` Catalin Marinas
  2011-03-21 12:56         ` Russell King - ARM Linux
  1 sibling, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2011-03-21 12:36 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: Nick Piggin, linux-kernel, linux-arm-kernel

Hi Russell,

On 24 January 2011 21:42, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Tue, Jan 25, 2011 at 08:26:42AM +1100, Nick Piggin wrote:
>> Too bad about the PAE thing; my condolences.
>>
>>
>> On Tue, Jan 25, 2011 at 4:55 AM, Catalin Marinas
>> <catalin.marinas@arm.com> wrote:
>> > This patch introduces the pgtable-3level*.h files with definitions
>> > specific to the LPAE page table format (3 levels of page tables).
>>
>> Seeing as you're shaking up these definitions, what do you think about
>> switching from 4level-fixup.h to pgtable-nopud.h / pgtable-nopmd.h headers?
>> One day eventually it would be nice to get rid of the fixup mode.
>
> I have patches to do this which I de-queued for the last merge window.
> It's not entirely trivial and without problem.  You can find the patches
> in linux-next now.
>
> I was waiting for the new set of patches from Catalin before going back
> and working out the solutions to some of those problems.

Any plans for the nopmd patches? I haven't seen them in -next or on the list.

Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-03-21 12:36       ` Catalin Marinas
@ 2011-03-21 12:56         ` Russell King - ARM Linux
  2011-03-21 13:19           ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Russell King - ARM Linux @ 2011-03-21 12:56 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Nick Piggin, linux-kernel, linux-arm-kernel

On Mon, Mar 21, 2011 at 12:36:55PM +0000, Catalin Marinas wrote:
> Any plans for the nopmd patches? I haven't seen them in -next or on the list.

I dropped them again because of those pesky warnings, so again I'm not
planning to push them this window, as I don't wish to be deluged with
people reporting the warnings.

They really need fixing once we know how the LPAE stuff interacts with
the change.  At the moment I've no idea whether the existing section
support ends up at pgd or pmd level with LPAE.

Obviously that matters as with LPAE, pgd and pmd are different hardware
levels, but without LPAE they're the same hardware level.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions
  2011-03-21 12:56         ` Russell King - ARM Linux
@ 2011-03-21 13:19           ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2011-03-21 13:19 UTC (permalink / raw)
  To: Russell King - ARM Linux; +Cc: linux-arm-kernel, linux-kernel, Nick Piggin

On 21 March 2011 12:56, Russell King - ARM Linux <linux@arm.linux.org.uk> wrote:
> On Mon, Mar 21, 2011 at 12:36:55PM +0000, Catalin Marinas wrote:
>> Any plans for the nopmd patches? I haven't seen them in -next or on the list.
>
> I dropped them again because of those pesky warnings, so again I'm not
> planning to push them this window as I don't wish to be deluged in
> people reporting the warnings.
>
> They really need fixing once we know how the LPAE stuff interacts with
> the change.  At the moment I've no idea whether the existing section
> support ends up at pgd or pmd level with LPAE.

With LPAE, the section support should end up at the pmd level (the 2nd
level page table), as PGDIR_SHIFT is 30. But if we standardise on using
the pmd in both cases, macros like pmd_val() would still expand to the
right level with the classic page tables (there it folds up to pgd_val()).

PMD_SHIFT also gets defined as PGDIR_SHIFT for the classic page tables;
one of my patches in the series converts the existing code from
PGDIR_SHIFT to PMD_SHIFT.
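
Roughly, the two layouts side by side (a sketch only; the LPAE value is
as stated above, the classic side assumes the usual ARM 2-level Linux
folding):

#ifdef CONFIG_ARM_LPAE
#define PGDIR_SHIFT	30		/* 1st level: 4 x 1GB entries */
#define PMD_SHIFT	21		/* 2nd level: 2MB blocks/sections */
#else
#define PGDIR_SHIFT	21		/* Linux pgd covers 2MB (two 1MB hw entries) */
#define PMD_SHIFT	PGDIR_SHIFT	/* pmd folded into the pgd level */
#endif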

I'm happy to give this a try if you have some existing patches for the
classic page tables (I can even start from scratch but I don't want to
duplicate work).

-- 
Catalin

^ permalink raw reply	[flat|nested] 59+ messages in thread

end of thread

Thread overview: 59+ messages
2011-01-24 17:55 [PATCH v4 00/19] ARM: Add support for the Large Physical Address Extensions Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 01/19] ARM: Make the argument to virt_to_phys() "const volatile" Catalin Marinas
2011-01-24 19:19   ` Stephen Boyd
2011-01-24 23:38     ` Russell King - ARM Linux
2011-01-25 10:00     ` Arnd Bergmann
2011-01-25 10:29       ` Russell King - ARM Linux
2011-01-25 14:14         ` Arnd Bergmann
2011-01-24 17:55 ` [PATCH v4 02/19] ARM: LPAE: Fix early_pte_alloc() assumption about the Linux PTE Catalin Marinas
2011-02-12  9:56   ` Russell King - ARM Linux
2011-01-24 17:55 ` [PATCH v4 03/19] ARM: LPAE: use long long format when printing physical addresses and ptes Catalin Marinas
2011-02-12  9:59   ` Russell King - ARM Linux
2011-01-24 17:55 ` [PATCH v4 04/19] ARM: LPAE: Use PMD_(SHIFT|SIZE|MASK) instead of PGDIR_* Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 05/19] ARM: LPAE: Factor out 2-level page table definitions into separate files Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 06/19] ARM: LPAE: Add (pte|pmd|pgd|pgprot)val_t type definitions as u32 Catalin Marinas
2011-02-03 17:13   ` Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 07/19] ARM: LPAE: Use a mask for physical addresses in page table entries Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 08/19] ARM: LPAE: Introduce the 3-level page table format definitions Catalin Marinas
2011-01-24 21:26   ` Nick Piggin
2011-01-24 21:42     ` Russell King - ARM Linux
2011-01-25 10:04       ` Catalin Marinas
2011-03-21 12:36       ` Catalin Marinas
2011-03-21 12:56         ` Russell King - ARM Linux
2011-03-21 13:19           ` Catalin Marinas
2011-02-03 17:11   ` Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 09/19] ARM: LPAE: Page table maintenance for the 3-level format Catalin Marinas
2011-02-03 17:09   ` Catalin Marinas
2011-02-03 17:56   ` Russell King - ARM Linux
2011-02-03 22:00     ` Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 10/19] ARM: LPAE: MMU setup for the 3-level page table format Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 11/19] ARM: LPAE: Add fault handling support Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 12/19] ARM: LPAE: Add context switching support Catalin Marinas
2011-02-12 10:44   ` Russell King - ARM Linux
2011-02-14 13:24     ` Catalin Marinas
2011-02-19 18:30       ` Russell King - ARM Linux
2011-02-19 23:16         ` Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 13/19] ARM: LPAE: Add identity mapping support for the 3-level page table format Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 14/19] ARM: LPAE: Add SMP " Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 15/19] ARM: LPAE: use phys_addr_t instead of unsigned long for physical addresses Catalin Marinas
2011-02-12 10:28   ` Russell King - ARM Linux
2011-02-15 11:52     ` Will Deacon
2011-02-15 12:35       ` Russell King - ARM Linux
2011-02-15 12:39         ` Catalin Marinas
2011-02-15 13:37           ` Will Deacon
2011-02-15 14:23             ` Russell King - ARM Linux
2011-02-15 15:26               ` Will Deacon
2011-02-15 15:48                 ` Russell King - ARM Linux
2011-02-19 18:26   ` Russell King - ARM Linux
2011-02-21 14:36     ` Will Deacon
2011-02-21 14:58       ` Russell King - ARM Linux
2011-02-21 15:01         ` Will Deacon
2011-01-24 17:55 ` [PATCH v4 16/19] ARM: LPAE: Use generic dma_addr_t type definition Catalin Marinas
2011-02-12 10:34   ` Russell King - ARM Linux
2011-02-14 13:01     ` Catalin Marinas
2011-02-15 14:27       ` Russell King - ARM Linux
2011-02-15 15:24         ` Catalin Marinas
2011-01-24 17:55 ` [PATCH v4 17/19] ARM: LPAE: mark memory banks with start > ULONG_MAX as highmem Catalin Marinas
2011-02-12 10:36   ` Russell King - ARM Linux
2011-01-24 17:56 ` [PATCH v4 18/19] ARM: LPAE: add support for ATAG_MEM64 Catalin Marinas
2011-01-24 17:56 ` [PATCH v4 19/19] ARM: LPAE: Add the Kconfig entries Catalin Marinas
