* [PATCH 0/5] Mapped kernel support for Broadcom XLP/XLPII
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Jayachandran C

This patchset adds support for loading an XLR/XLP kernel compiled with a
CKSEG2 load address.

The changes are to:
 - Move the existing MAPPED_KERNEL option from sgi-ip27 to the common
   config.
 - Add a plat_mem_fixup() function to arch_mem_init() which allows the
   platform to calculate the kernel's wired TLB entries and save them
   so that all CPUs can set them up at boot.
 - Update PAGE_OFFSET, MAP_BASE and MODULE_START when the mapped
   kernel is enabled.
 - Update the compressed kernel code to generate the final executable
   in KSEG0 and to map the load address of the embedded kernel before
   loading it.
 - Use wired entries of size 256M/1G/4G to map the available memory on
   XLP9xx and XLP2xx (see the address-space sketch below).
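
For reference, a minimal sketch of the 64-bit address layout this uses,
based on the constants in the spaces.h added by this series (the macro
names and the helper below are ours, for illustration only):

	/* CONFIG_MAPPED_KERNEL, 64-bit (values from spaces.h) */
	#define NLM_PAGE_OFFSET	0xc000000000000000UL	/* XKSEG RAM map */
	#define NLM_MAP_BASE	0xd000000000000000UL	/* vmalloc base */

	/* RAM is reached through the wired TLB entries, so a physical
	 * address maps to a virtual one by a simple offset */
	static inline unsigned long nlm_phys_to_virt(unsigned long pa)
	{
		return NLM_PAGE_OFFSET + pa;
	}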

Comments and suggestions welcome.

Ashok Kumar (1):
  MIPS: Netlogic: Mapped kernel support

Jayachandran C (4):
  MIPS: Make MAPPED_KERNEL config option common
  MIPS: Add platform function to fixup memory
  MIPS: Compress MAPPED kernels
  MIPS: Netlogic: Map kernel with 1G/4G pages on XLPII

 arch/mips/Kconfig                                  |   7 +
 arch/mips/boot/compressed/calc_vmlinuz_load_addr.c |   9 +
 arch/mips/boot/compressed/decompress.c             |   6 +-
 arch/mips/boot/compressed/head.S                   |   5 +
 .../include/asm/mach-netlogic/kernel-entry-init.h  |  50 +++++
 arch/mips/include/asm/mach-netlogic/spaces.h       |  25 +++
 arch/mips/include/asm/netlogic/common.h            |  15 ++
 arch/mips/include/asm/pgtable-64.h                 |   4 +
 arch/mips/kernel/setup.c                           |   4 +
 arch/mips/mm/tlb-r4k.c                             |   2 +
 arch/mips/netlogic/Platform                        |   5 +
 arch/mips/netlogic/common/Makefile                 |   1 +
 arch/mips/netlogic/common/memory.c                 | 247 +++++++++++++++++++++
 arch/mips/netlogic/common/reset.S                  |  40 ++++
 arch/mips/netlogic/common/smpboot.S                |  16 +-
 arch/mips/netlogic/xlp/setup.c                     |  14 --
 arch/mips/netlogic/xlr/setup.c                     |   3 +-
 arch/mips/netlogic/xlr/wakeup.c                    |   7 +-
 arch/mips/sgi-ip27/Kconfig                         |   8 -
 19 files changed, 440 insertions(+), 28 deletions(-)
 create mode 100644 arch/mips/include/asm/mach-netlogic/kernel-entry-init.h
 create mode 100644 arch/mips/include/asm/mach-netlogic/spaces.h
 create mode 100644 arch/mips/netlogic/common/memory.c

-- 
1.9.1

* [PATCH 1/5] MIPS: Make MAPPED_KERNEL config option common
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Jayachandran C

This will be needed for the mapped kernel patches for Netlogic XLP.

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/Kconfig          | 8 ++++++++
 arch/mips/sgi-ip27/Kconfig | 8 --------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 74a76da..c8c3fe1 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2124,6 +2124,14 @@ config SB1_PASS_2_1_WORKAROUNDS
 	depends on CPU_SB1 && CPU_SB1_PASS_2
 	default y
 
+config MAPPED_KERNEL
+	bool "Mapped kernel support"
+	depends on SGI_IP27
+	help
+	  Change the way a Linux kernel is loaded into memory on a MIPS64
+	  machine.  This is required in order to support text replication on
+	  NUMA.  If you need to understand it, read the source code.
+
 
 config ARCH_PHYS_ADDR_T_64BIT
        bool
diff --git a/arch/mips/sgi-ip27/Kconfig b/arch/mips/sgi-ip27/Kconfig
index 4d8705a..2c9dd3d 100644
--- a/arch/mips/sgi-ip27/Kconfig
+++ b/arch/mips/sgi-ip27/Kconfig
@@ -21,14 +21,6 @@ config SGI_SN_N_MODE
 
 endchoice
 
-config MAPPED_KERNEL
-	bool "Mapped kernel support"
-	depends on SGI_IP27
-	help
-	  Change the way a Linux kernel is loaded into memory on a MIPS64
-	  machine.  This is required in order to support text replication on
-	  NUMA.  If you need to understand it, read the source code.
-
 config REPLICATE_KTEXT
 	bool "Kernel text replication support"
 	depends on SGI_IP27
-- 
1.9.1

* [PATCH 2/5] MIPS: Add platform function to fixup memory
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Jayachandran C

Provide a function plat_mem_fixup() that is called after the memory
command line arguments are parsed. This can be used to fix up memory
regions, including those passed through the device tree and the
command line.

On XLR/XLP, use this to reduce the size of the memory segments so
that prefetch will not access illegal addresses.
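
(A worked example, with hypothetical numbers: a RAM segment of
{addr = 0, size = 0x10000000} is trimmed to size 0x0ffffe00, i.e.
0x10000000 - 512, so a hardware prefetch that runs up to 512 bytes
past a load near the end of the segment still lands in valid DRAM.)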

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/kernel/setup.c           |  4 +++
 arch/mips/netlogic/common/Makefile |  1 +
 arch/mips/netlogic/common/memory.c | 53 ++++++++++++++++++++++++++++++++++++++
 arch/mips/netlogic/xlp/setup.c     | 14 ----------
 arch/mips/netlogic/xlr/setup.c     |  3 +--
 5 files changed, 59 insertions(+), 16 deletions(-)
 create mode 100644 arch/mips/netlogic/common/memory.c

diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 0589290..d995904 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -614,6 +614,7 @@ static void __init arch_mem_init(char **cmdline_p)
 {
 	struct memblock_region *reg;
 	extern void plat_mem_setup(void);
+	extern void plat_mem_fixup(void);
 
 	/* call board setup routine */
 	plat_mem_setup();
@@ -657,6 +658,9 @@ static void __init arch_mem_init(char **cmdline_p)
 		pr_info("User-defined physical RAM map:\n");
 		print_memory_map();
 	}
+#if defined(CONFIG_CPU_XLP) || defined(CONFIG_CPU_XLR)
+	plat_mem_fixup();
+#endif
 
 	bootmem_init();
 #ifdef CONFIG_PROC_VMCORE
diff --git a/arch/mips/netlogic/common/Makefile b/arch/mips/netlogic/common/Makefile
index 362739d..44b0e7e 100644
--- a/arch/mips/netlogic/common/Makefile
+++ b/arch/mips/netlogic/common/Makefile
@@ -1,5 +1,6 @@
 obj-y				+= irq.o time.o
 obj-y				+= nlm-dma.o
 obj-y				+= reset.o
+obj-y				+= memory.o
 obj-$(CONFIG_SMP)		+= smp.o smpboot.o
 obj-$(CONFIG_EARLY_PRINTK)	+= earlycons.o
diff --git a/arch/mips/netlogic/common/memory.c b/arch/mips/netlogic/common/memory.c
new file mode 100644
index 00000000..980c102
--- /dev/null
+++ b/arch/mips/netlogic/common/memory.c
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2003-2014 Broadcom Corporation
+ * All Rights Reserved
+ *
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the Broadcom
+ * license below:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY BROADCOM ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL BROADCOM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+ * IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include <asm/bootinfo.h>
+#include <asm/types.h>
+
+static const int prefetch_backup = 512;
+
+void __init plat_mem_fixup(void)
+{
+	int i;
+
+	/* fixup entries for prefetch */
+	for (i = 0; i < boot_mem_map.nr_map; i++) {
+		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
+			continue;
+		boot_mem_map.map[i].size -= prefetch_backup;
+	}
+}
diff --git a/arch/mips/netlogic/xlp/setup.c b/arch/mips/netlogic/xlp/setup.c
index f743fd9..adc6390 100644
--- a/arch/mips/netlogic/xlp/setup.c
+++ b/arch/mips/netlogic/xlp/setup.c
@@ -64,18 +64,6 @@ static void nlm_linux_exit(void)
 		cpu_wait();
 }
 
-static void nlm_fixup_mem(void)
-{
-	const int pref_backup = 512;
-	int i;
-
-	for (i = 0; i < boot_mem_map.nr_map; i++) {
-		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
-			continue;
-		boot_mem_map.map[i].size -= pref_backup;
-	}
-}
-
 static void __init xlp_init_mem_from_bars(void)
 {
 	uint64_t map[16];
@@ -114,8 +102,6 @@ void __init plat_mem_setup(void)
 		pr_info("Using DRAM BARs for memory map.\n");
 		xlp_init_mem_from_bars();
 	}
-	/* Calculate and setup wired entries for mapped kernel */
-	nlm_fixup_mem();
 }
 
 const char *get_system_type(void)
diff --git a/arch/mips/netlogic/xlr/setup.c b/arch/mips/netlogic/xlr/setup.c
index d118b9a..714f6a3 100644
--- a/arch/mips/netlogic/xlr/setup.c
+++ b/arch/mips/netlogic/xlr/setup.c
@@ -144,7 +144,6 @@ static void prom_add_memory(void)
 {
 	struct nlm_boot_mem_map *bootm;
 	u64 start, size;
-	u64 pref_backup = 512;	/* avoid pref walking beyond end */
 	int i;
 
 	bootm = (void *)(long)nlm_prom_info.psb_mem_map;
@@ -158,7 +157,7 @@ static void prom_add_memory(void)
 		if (i == 0 && start == 0 && size == 0x0c000000)
 			size = 0x0ff00000;
 
-		add_memory_region(start, size - pref_backup, BOOT_MEM_RAM);
+		add_memory_region(start, size, BOOT_MEM_RAM);
 	}
 }
 
-- 
1.9.1

* [PATCH 3/5] MIPS: Netlogic: Mapped kernel support
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Ashok Kumar, Jayachandran C

From: Ashok Kumar <ashoks@broadcom.com>

Add support for loading a kernel compiled for CKSEG2. This is done by
adding wired TLB entries for CKSEG2 at boot and at slave CPU init.

In 64-bit, we map the whole of physical memory into a mapped area
starting at XKSEG and move MAP_BASE higher to accommodate this.
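
For reference, a standalone sketch of the EntryLo value used for these
wired entries, matching the PTE_MAPKERN macro and the comments in
kernel-entry-init.h below (the function name is ours, for illustration):

	#include <stdint.h>

	/* EntryLo for a wired kernel mapping: PFN (pa >> 12) starting
	 * at bit 6, low bits 0x2f = CCA 5, dirty, valid, global */
	static uint64_t nlm_entrylo(uint64_t pa)
	{
		return ((pa >> 12) << 6) | 0x2f;
	}

	/* e.g. nlm_entrylo(0x10000000) == 0x40002f */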

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/Kconfig                                  |   9 +-
 .../include/asm/mach-netlogic/kernel-entry-init.h  |  50 +++++++++
 arch/mips/include/asm/mach-netlogic/spaces.h       |  25 +++++
 arch/mips/include/asm/netlogic/common.h            |  15 +++
 arch/mips/include/asm/pgtable-64.h                 |   4 +
 arch/mips/mm/tlb-r4k.c                             |   2 +
 arch/mips/netlogic/Platform                        |   5 +
 arch/mips/netlogic/common/memory.c                 | 113 +++++++++++++++++++++
 arch/mips/netlogic/common/reset.S                  |  40 ++++++++
 arch/mips/netlogic/common/smpboot.S                |  16 ++-
 arch/mips/netlogic/xlr/wakeup.c                    |   7 +-
 11 files changed, 278 insertions(+), 8 deletions(-)
 create mode 100644 arch/mips/include/asm/mach-netlogic/kernel-entry-init.h
 create mode 100644 arch/mips/include/asm/mach-netlogic/spaces.h

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index c8c3fe1..45cbb09 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2126,12 +2126,11 @@ config SB1_PASS_2_1_WORKAROUNDS
 
 config MAPPED_KERNEL
 	bool "Mapped kernel support"
-	depends on SGI_IP27
+	depends on SGI_IP27 || CPU_XLP || CPU_XLR
 	help
-	  Change the way a Linux kernel is loaded into memory on a MIPS64
-	  machine.  This is required in order to support text replication on
-	  NUMA.  If you need to understand it, read the source code.
-
+	  Enable building and loading a Linux kernel compiled to run
+	  from KSEG2/CKSEG2. Wired TLB entries are used to map the
+	  available DRAM into either XKSEG or KSEG2.
 
 config ARCH_PHYS_ADDR_T_64BIT
        bool
diff --git a/arch/mips/include/asm/mach-netlogic/kernel-entry-init.h b/arch/mips/include/asm/mach-netlogic/kernel-entry-init.h
new file mode 100644
index 00000000..c1fd37a
--- /dev/null
+++ b/arch/mips/include/asm/mach-netlogic/kernel-entry-init.h
@@ -0,0 +1,50 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2013 Broadcom Corporation
+ *
+ * Based on arch/mips/include/asm/mach-generic/kernel-entry-init.h:
+ *
+ * Copyright (C) 2005 Embedded Alley Solutions, Inc
+ * Copyright (C) 2005 Ralf Baechle (ralf@linux-mips.org)
+ */
+#ifndef __ASM_MACH_NETLOGIC_KERNEL_ENTRY_H
+#define __ASM_MACH_NETLOGIC_KERNEL_ENTRY_H
+
+#ifdef CONFIG_CPU_MIPSR2
+#define JRHB		jr.hb
+#else
+#define JRHB		jr
+#endif
+
+.macro	kernel_entry_setup
+#ifdef CONFIG_MAPPED_KERNEL
+	PTR_LA	t0, 91f
+	move	t3, t0
+	li	t1, 0x1fffffff		/* VA to PA mask for CKSEG */
+	or	t0, t1
+	xor	t0, t1			/* clear PA bits to get mapping */
+	dmtc0	t0, CP0_ENTRYHI
+	li	t1, 0x2f		/* CCA 5, dirty, valid, global */
+	dmtc0	t1, CP0_ENTRYLO0
+	li	t0, 0x1			/* not valid, global */
+	dmtc0	t0, CP0_ENTRYLO1
+	li	t1, PM_256M		/* 256 MB */
+	mtc0	t1, CP0_PAGEMASK
+	mtc0	zero, CP0_INDEX		/* index 0 in TLB */
+	tlbwi
+	li	t0, 1
+	mtc0	t0, CP0_WIRED
+	_ehb
+	JRHB	t3			/* jump to 'real' mapped address */
+	nop
+91:
+#endif
+.endm
+
+.macro	smp_slave_setup
+.endm
+
+#endif /* __ASM_MACH_NETLOGIC_KERNEL_ENTRY_H */
diff --git a/arch/mips/include/asm/mach-netlogic/spaces.h b/arch/mips/include/asm/mach-netlogic/spaces.h
new file mode 100644
index 00000000..6f32f07
--- /dev/null
+++ b/arch/mips/include/asm/mach-netlogic/spaces.h
@@ -0,0 +1,25 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1994 - 1999, 2000, 03, 04 Ralf Baechle
+ * Copyright (C) 2000, 2002  Maciej W. Rozycki
+ * Copyright (C) 1990, 1999, 2000 Silicon Graphics, Inc.
+ */
+#ifndef _ASM_NETLOGIC_SPACES_H
+#define _ASM_NETLOGIC_SPACES_H
+
+#if defined(CONFIG_MAPPED_KERNEL)
+#ifdef CONFIG_32BIT
+#define PAGE_OFFSET		CKSEG2
+#define MAP_BASE		CKSEG3
+#else
+#define PAGE_OFFSET		_AC(0xc000000000000000, UL)
+#define MAP_BASE		_AC(0xd000000000000000, UL)
+#endif
+#endif /* CONFIG_MAPPED_KERNEL */
+
+#include <asm/mach-generic/spaces.h>
+
+#endif /* __ASM_NETLOGIC_SPACES_H */
diff --git a/arch/mips/include/asm/netlogic/common.h b/arch/mips/include/asm/netlogic/common.h
index 2a4c128..e9dd00e 100644
--- a/arch/mips/include/asm/netlogic/common.h
+++ b/arch/mips/include/asm/netlogic/common.h
@@ -47,6 +47,17 @@
 #define BOOT_NMI_LOCK		4
 #define BOOT_NMI_HANDLER	8
 
+/* TLB entry save area for mapped kernel */
+#define BOOT_NTLBS		64
+#define BOOT_TLBS_START		72
+#define BOOT_TLB_SIZE		32
+
+/* four u64 entries per TLB */
+#define BOOT_TLB_ENTRYHI	0
+#define BOOT_TLB_ENTRYLO0	1
+#define BOOT_TLB_ENTRYLO1	2
+#define BOOT_TLB_PAGEMASK	3
+
 /* CPU ready flags for each CPU */
 #define BOOT_CPU_READY		2048
 
@@ -90,6 +101,10 @@ extern char nlm_reset_entry[], nlm_reset_entry_end[];
 /* SWIOTLB */
 extern struct dma_map_ops nlm_swiotlb_dma_ops;
 
+/* mapped kernel */
+void nlm_fixup_mem(void);
+void nlm_setup_wired_tlbs(void);
+
 extern unsigned int nlm_threads_per_core;
 extern cpumask_t nlm_cpumask;
 
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index e1c49a9..21a204f 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -135,7 +135,11 @@
 #if defined(CONFIG_MODULES) && defined(KBUILD_64BIT_SYM32) && \
 	VMALLOC_START != CKSSEG
 /* Load modules into 32bit-compatible segment. */
+#ifdef CONFIG_MAPPED_KERNEL
+#define MODULE_START	CKSEG3	/* don't overlap with mapped kernel */
+#else
 #define MODULE_START	CKSSEG
+#endif
 #define MODULE_END	(FIXADDR_START-2*PAGE_SIZE)
 #endif
 
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index e90b2e8..7b402fc 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -474,7 +474,9 @@ static void r4k_tlb_configure(void)
 	 *     be set to fixed-size pages.
 	 */
 	write_c0_pagemask(PM_DEFAULT_MASK);
+#ifndef CONFIG_MAPPED_KERNEL
 	write_c0_wired(0);
+#endif
 	if (current_cpu_type() == CPU_R10000 ||
 	    current_cpu_type() == CPU_R12000 ||
 	    current_cpu_type() == CPU_R14000)
diff --git a/arch/mips/netlogic/Platform b/arch/mips/netlogic/Platform
index fb8eb4c..3124510 100644
--- a/arch/mips/netlogic/Platform
+++ b/arch/mips/netlogic/Platform
@@ -14,4 +14,9 @@ cflags-$(CONFIG_CPU_XLP)	+= $(call cc-option,-march=xlp,-march=mips64r2)
 # NETLOGIC processor support
 #
 platform-$(CONFIG_NLM_COMMON)	+= netlogic/
+
+ifdef CONFIG_MAPPED_KERNEL
+load-$(CONFIG_NLM_COMMON)	+= 0xffffffffc0100000
+else
 load-$(CONFIG_NLM_COMMON)	+= 0xffffffff80100000
+endif
diff --git a/arch/mips/netlogic/common/memory.c b/arch/mips/netlogic/common/memory.c
index 980c102..6d967ce 100644
--- a/arch/mips/netlogic/common/memory.c
+++ b/arch/mips/netlogic/common/memory.c
@@ -36,14 +36,127 @@
 #include <linux/types.h>
 
 #include <asm/bootinfo.h>
+#include <asm/pgtable.h>
 #include <asm/types.h>
+#include <asm/tlb.h>
+
+#include <asm/netlogic/common.h>
+
+#define TLBSZ		(256 * 1024 * 1024)
+#define PM_TLBSZ	PM_256M
+#define PTE_MAPKERN(pa)	(((pa >> 12) << 6) | 0x2f)
+#define TLB_MAXWIRED	28
 
 static const int prefetch_backup = 512;
 
+#if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
+static void nlm_tlb_align(struct boot_mem_map_entry *map)
+{
+	phys_t astart, aend, start, end;
+
+	start = map->addr;
+	end = start + map->size;
+
+	/* fudge first entry for now  */
+	if (start < 0x10000000) {
+		start = 0;
+		end = 0x10000000;
+	}
+	astart = round_up(start, TLBSZ);
+	aend = round_down(end, TLBSZ);
+	if (aend <= astart) {
+		pr_info("Boot mem map: discard seg %lx-%lx\n",
+				(unsigned long)start, (unsigned long)end);
+		map->size = 0;
+		return;
+	}
+	if (astart != start || aend != end) {
+		if (start != 0) {
+			map->addr = astart;
+			map->size = aend - astart;
+		}
+		pr_info("Boot mem map: %lx - %lx -> %lx-%lx\n",
+			(unsigned long)start, (unsigned long)end,
+			(unsigned long)astart, (unsigned long)aend);
+	} else
+		pr_info("Boot mem map: added %lx - %lx\n",
+			(unsigned long)astart, (unsigned long)aend);
+}
+
+static void nlm_calc_wired_tlbs(void)
+{
+	u64 *tlbarr;
+	u32 *tlbcount;
+	u64 lo0, lo1, vaddr;
+	phys_addr_t astart, aend, p;
+	unsigned long bootdata = CKSEG1ADDR(RESET_DATA_PHYS);
+	int i, pos;
+
+	tlbarr = (u64 *)(bootdata + BOOT_TLBS_START);
+	tlbcount = (u32 *)(bootdata + BOOT_NTLBS);
+
+	pos = 0;
+	for (i = 0; i < boot_mem_map.nr_map; i++) {
+		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
+			continue;
+		astart = boot_mem_map.map[i].addr;
+		aend =	astart + boot_mem_map.map[i].size;
+
+		/* fudge first entry for now  */
+		if (astart < 0x10000000) {
+			astart = 0;
+			aend = 0x10000000;
+		}
+		for (p = round_down(astart, 2 * TLBSZ);
+			p < round_up(aend, 2 * TLBSZ);) {
+				vaddr = PAGE_OFFSET + p;
+				lo0 = (p >= astart) ? PTE_MAPKERN(p) : 1;
+				p += TLBSZ;
+				lo1 = (p < aend) ? PTE_MAPKERN(p) : 1;
+				p += TLBSZ;
+
+				tlbarr[BOOT_TLB_ENTRYHI] = vaddr;
+				tlbarr[BOOT_TLB_ENTRYLO0] = lo0;
+				tlbarr[BOOT_TLB_ENTRYLO1] = lo1;
+				tlbarr[BOOT_TLB_PAGEMASK] = PM_TLBSZ;
+				tlbarr += (BOOT_TLB_SIZE / sizeof(tlbarr[0]));
+
+				if (++pos >= TLB_MAXWIRED) {
+					pr_err("Ran out of TLBs at %llx, ",
+							(unsigned long long)p);
+					pr_err("Discarding rest of memory!\n");
+					boot_mem_map.nr_map = i + 1;
+					boot_mem_map.map[i].size = p -
+						boot_mem_map.map[i].addr;
+					goto out;
+				}
+		}
+	}
+out:
+	*tlbcount = pos;
+	pr_info("%d TLB entries used for mapped kernel.\n", pos);
+}
+#endif
+
 void __init plat_mem_fixup(void)
 {
 	int i;
 
+#if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
+	/* trim memory regions to PM_TLBSZ boundaries */
+	for (i = 0; i < boot_mem_map.nr_map; i++) {
+		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
+			continue;
+		nlm_tlb_align(&boot_mem_map.map[i]);
+	}
+
+	/* calculate and save wired TLB entries */
+	nlm_calc_wired_tlbs();
+
+	/* set them up for boot cpu */
+	nlm_setup_wired_tlbs();
+#endif
+
 	/* fixup entries for prefetch */
 	for (i = 0; i < boot_mem_map.nr_map; i++) {
 		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
diff --git a/arch/mips/netlogic/common/reset.S b/arch/mips/netlogic/common/reset.S
index edbab9b..b095475 100644
--- a/arch/mips/netlogic/common/reset.S
+++ b/arch/mips/netlogic/common/reset.S
@@ -43,6 +43,7 @@
 #include <asm/asmmacro.h>
 #include <asm/addrspace.h>
 
+#include <kernel-entry-init.h>
 #include <asm/netlogic/common.h>
 
 #include <asm/netlogic/xlp-hal/iomap.h>
@@ -274,6 +275,10 @@ EXPORT(nlm_boot_siblings)
 	PTR_ADDU t1, v1
 	li	t2, 1
 	sw	t2, 0(t1)
+
+	/* map the first wired TLB entry for the mapped kernel */
+	kernel_entry_setup
+
 	/* Wait until NMI hits */
 3:	wait
 	b	3b
@@ -298,3 +303,38 @@ LEAF(nlm_init_boot_cpu)
 	jr	ra
 	nop
 END(nlm_init_boot_cpu)
+
+LEAF(nlm_setup_wired_tlbs)
+#ifdef CONFIG_MAPPED_KERNEL
+	li	t0, CKSEG1ADDR(RESET_DATA_PHYS)
+	lw	t3, BOOT_NTLBS(t0)
+	ADDIU	t0, t0, BOOT_TLBS_START
+	li	t1, 1		/* entry 0 is wired at boot, start at 1 */
+	addiu	t3, 1		/* final number of wired entries */
+
+1:	sltu	v1, t1, t3
+	beqz	v1, 2f
+	nop
+	ld	a0, (8 * BOOT_TLB_ENTRYLO0)(t0)
+	dmtc0	a0, CP0_ENTRYLO0
+	ld	a1, (8 * BOOT_TLB_ENTRYLO1)(t0)
+	dmtc0	a1, CP0_ENTRYLO1
+	ld	a2, (8 * BOOT_TLB_ENTRYHI)(t0)
+	dmtc0	a2, CP0_ENTRYHI
+	ld	a2, (8 * BOOT_TLB_PAGEMASK)(t0)
+	dmtc0	a2, CP0_PAGEMASK
+	mtc0	t1, CP0_INDEX
+	_ehb
+	tlbwi
+	ADDIU	t0, BOOT_TLB_SIZE
+	addiu	t1, 1
+	b	1b
+	nop
+2:
+	mtc0	t3, CP0_WIRED
+	JRHB	ra
+#else
+	jr	ra
+#endif
+	nop
+END(nlm_setup_wired_tlbs)
diff --git a/arch/mips/netlogic/common/smpboot.S b/arch/mips/netlogic/common/smpboot.S
index 805355b..c720492 100644
--- a/arch/mips/netlogic/common/smpboot.S
+++ b/arch/mips/netlogic/common/smpboot.S
@@ -41,6 +41,7 @@
 #include <asm/asmmacro.h>
 #include <asm/addrspace.h>
 
+#include <kernel-entry-init.h>
 #include <asm/netlogic/common.h>
 
 #include <asm/netlogic/xlp-hal/iomap.h>
@@ -80,6 +81,13 @@ NESTED(nlm_boot_secondary_cpus, 16, sp)
 	ori	t1, ST0_KX
 #endif
 	mtc0	t1, CP0_STATUS
+
+#ifdef CONFIG_64BIT
+	/* set the full wired TLBs needed for mapped kernel */
+	jal	nlm_setup_wired_tlbs
+	nop
+#endif
+
 	PTR_LA	t1, nlm_next_sp
 	PTR_L	sp, 0(t1)
 	PTR_LA	t1, nlm_next_gp
@@ -136,8 +144,12 @@ NESTED(nlm_rmiboot_preboot, 16, sp)
 	or	t1, t2, v1	/* put in new value */
 	mtcr	t1, t0		/* update core control */
 
+1:
+	/* map the first wired TLB entry for the mapped kernel */
+	kernel_entry_setup
+
 	/* wait for NMI to hit */
-1:	wait
-	b	1b
+2:	wait
+	b	2b
 	nop
 END(nlm_rmiboot_preboot)
diff --git a/arch/mips/netlogic/xlr/wakeup.c b/arch/mips/netlogic/xlr/wakeup.c
index d61cba1..4ed5204 100644
--- a/arch/mips/netlogic/xlr/wakeup.c
+++ b/arch/mips/netlogic/xlr/wakeup.c
@@ -53,6 +53,7 @@ int xlr_wakeup_secondary_cpus(void)
 	struct nlm_soc_info *nodep;
 	unsigned int i, j, boot_cpu;
 	volatile u32 *cpu_ready = nlm_get_boot_data(BOOT_CPU_READY);
+	void *handler;
 
 	/*
 	 *  In case of RMI boot, hit with NMI to get the cores
@@ -60,7 +61,11 @@ int xlr_wakeup_secondary_cpus(void)
 	 */
 	nodep = nlm_get_node(0);
 	boot_cpu = hard_smp_processor_id();
-	nlm_set_nmi_handler(nlm_rmiboot_preboot);
+	handler = nlm_rmiboot_preboot;
+#ifdef CONFIG_MAPPED_KERNEL
+	handler = (void *)CKSEG0ADDR((long)handler);
+#endif
+	nlm_set_nmi_handler(handler);
 	for (i = 0; i < NR_CPUS; i++) {
 		if (i == boot_cpu || !cpumask_test_cpu(i, &nlm_cpumask))
 			continue;
-- 
1.9.1

* [PATCH 4/5] MIPS: Compress MAPPED kernels
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Jayachandran C

Add code to build the decompression wrapper in KSEG0; the wrapper then
decompresses the embedded kernel and jumps to its entry point through
a KSEG0 alias of the mapped load address.
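
A minimal sketch of the 64-bit address fix-up this performs, matching
the calc_vmlinuz_load_addr.c hunk below (the helper name is ours, for
illustration):

	#include <stdint.h>

	/* fold a CKSEG2-compiled load address into the equivalent
	 * unmapped KSEG0 address by keeping only the low 29 bits */
	static uint64_t to_kseg0(uint64_t addr)
	{
		return 0xffffffff80000000ull | (addr & 0x1fffffff);
	}

	/* e.g. to_kseg0(0xffffffffc0100000) == 0xffffffff80100000 */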

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/boot/compressed/calc_vmlinuz_load_addr.c | 9 +++++++++
 arch/mips/boot/compressed/decompress.c             | 6 +++++-
 arch/mips/boot/compressed/head.S                   | 5 +++++
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
index 37fe58c..9791e39 100644
--- a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
+++ b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c
@@ -51,6 +51,15 @@ int main(int argc, char *argv[])
 
 	vmlinuz_load_addr += (16 - vmlinux_size % 16);
 
+	/* handle 32/64 bit mapped addresses */
+	if (vmlinuz_load_addr >= 0xffffffffc0000000ULL)
+		vmlinuz_load_addr = 0xffffffff80000000ull |
+					(vmlinuz_load_addr & 0x1fffffff);
+	if (vmlinuz_load_addr >= 0xc0000000ULL &&
+				vmlinuz_load_addr < 0xffffffffULL)
+		vmlinuz_load_addr = 0x80000000ull |
+					(vmlinuz_load_addr & 0x1fffffff);
+
 	printf("0x%llx\n", vmlinuz_load_addr);
 
 	return EXIT_SUCCESS;
diff --git a/arch/mips/boot/compressed/decompress.c b/arch/mips/boot/compressed/decompress.c
index 31903cf..f9a2ab9 100644
--- a/arch/mips/boot/compressed/decompress.c
+++ b/arch/mips/boot/compressed/decompress.c
@@ -83,6 +83,7 @@ void __stack_chk_fail(void)
 void decompress_kernel(unsigned long boot_heap_start)
 {
 	unsigned long zimage_start, zimage_size;
+	unsigned long long loadaddr = VMLINUX_LOAD_ADDRESS_ULL;
 
 	__stack_chk_guard_setup();
 
@@ -105,9 +106,12 @@ void decompress_kernel(unsigned long boot_heap_start)
 	puthex(VMLINUX_LOAD_ADDRESS_ULL);
 	puts("\n");
 
+#ifdef CONFIG_MAPPED_KERNEL
+	loadaddr = CKSEG0ADDR(loadaddr);
+#endif
 	/* Decompress the kernel with according algorithm */
 	decompress((char *)zimage_start, zimage_size, 0, 0,
-		   (void *)VMLINUX_LOAD_ADDRESS_ULL, 0, error);
+		   (void *)(unsigned long)loadaddr, 0, error);
 
 	/* FIXME: should we flush cache here? */
 	puts("Now, booting the kernel...\n");
diff --git a/arch/mips/boot/compressed/head.S b/arch/mips/boot/compressed/head.S
index 409cb48..47ab26f 100644
--- a/arch/mips/boot/compressed/head.S
+++ b/arch/mips/boot/compressed/head.S
@@ -14,6 +14,7 @@
 
 #include <asm/asm.h>
 #include <asm/regdef.h>
+#include <asm/addrspace.h>
 
 	.set noreorder
 	.cprestore
@@ -44,7 +45,11 @@ start:
 	move	a1, s1
 	move	a2, s2
 	move	a3, s3
+#ifdef CONFIG_MAPPED_KERNEL
+	PTR_LI	k0, CKSEG0ADDR(KERNEL_ENTRY)
+#else
 	PTR_LI	k0, KERNEL_ENTRY
+#endif
 	jr	k0
 	 nop
 3:
-- 
1.9.1

* [PATCH 5/5] MIPS: Netlogic: Map kernel with 1G/4G pages on XLPII
From: Jayachandran C @ 2015-01-16 12:38 UTC
  To: linux-mips, ralf; +Cc: Jayachandran C

XLP2XX and XLP9XX support 1G and 4G pages. Use these for mapping
physical memory when mapped kernel support is enabled.

This reduces the number of wired TLB entries needed on systems with
more RAM.
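
(Rough arithmetic: a wired entry maps an even/odd page pair, so with
256M pages one entry covers 512 MB and 16 GB of RAM would need 32
entries, more than TLB_MAXWIRED (28). With 4G pages one entry covers
8 GB, so the same 16 GB needs only 2 entries.)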

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/netlogic/common/memory.c | 237 +++++++++++++++++++++++++------------
 1 file changed, 159 insertions(+), 78 deletions(-)

diff --git a/arch/mips/netlogic/common/memory.c b/arch/mips/netlogic/common/memory.c
index 6d967ce..74fdcab 100644
--- a/arch/mips/netlogic/common/memory.c
+++ b/arch/mips/netlogic/common/memory.c
@@ -34,6 +34,7 @@
 
 #include <linux/kernel.h>
 #include <linux/types.h>
+#include <linux/sizes.h>
 
 #include <asm/bootinfo.h>
 #include <asm/pgtable.h>
@@ -41,100 +42,187 @@
 #include <asm/tlb.h>
 
 #include <asm/netlogic/common.h>
+#include <asm/netlogic/xlp-hal/xlp.h>
 
-#define TLBSZ		(256 * 1024 * 1024)
-#define PM_TLBSZ	PM_256M
-#define PTE_MAPKERN(pa)	(((pa >> 12) << 6) | 0x2f)
+#define SZ_4G		(4ull * 1024 * 1024 * 1024)
+#define PM_4G		0x1ffffe000
+#define MINPGSZ		SZ_256M
+
+#define PTE_MAPKERN(pa)	((((pa) >> 12) << 6) | 0x2f)
 #define TLB_MAXWIRED	28
 
 static const int prefetch_backup = 512;
 
 #if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
-static void nlm_tlb_align(struct boot_mem_map_entry *map)
+
+/* To track the allocated area of a boot_mem_map segment */
+struct alloc_entry {
+	/* Start and end of the va mapped */
+	unsigned long long start;
+	unsigned long long end;
+
+	/* When just one of lo0/lo1 is used, the valid area is half of above */
+	unsigned long long astart;
+	unsigned long long aend;
+} alloc_map[32];
+
+static inline int addtlb(phys_addr_t pa, u64 pgsz, u64 pmask,
+	unsigned int validmask, struct alloc_entry *ae)
 {
-	phys_t astart, aend, start, end;
+	phys_addr_t endpa;
+	u64 *t;
+	u32 *tlbcount;
+	int ntlb;
+
+	tlbcount = (u32 *)nlm_get_boot_data(BOOT_NTLBS);
+	ntlb = *tlbcount;
+	endpa = pa + 2 * pgsz;
 
-	start = map->addr;
-	end = start + map->size;
+	pr_debug("%2d - pa0 %llx pa1 %llx pgsz %llx valid %x\n",
+				ntlb, pa, endpa, pgsz, validmask);
+	if (ntlb == TLB_MAXWIRED) {
+		pr_err("Ran out of TLB entries pa %llx pgsz %llx\n", pa, pgsz);
+		return -1;
+	}
 
-	/* fudge first entry for now  */
-	if (start < 0x10000000) {
-		start = 0;
-		end = 0x10000000;
+	t = nlm_get_boot_data(BOOT_TLBS_START);
+	t += ntlb * (BOOT_TLB_SIZE / sizeof(t[0]));
+	t[BOOT_TLB_ENTRYHI] = pa + PAGE_OFFSET;
+	t[BOOT_TLB_ENTRYLO0] = (validmask & 0x1) ? PTE_MAPKERN(pa) : 1;
+	t[BOOT_TLB_ENTRYLO1] = (validmask & 0x2) ? PTE_MAPKERN(pa + pgsz) : 1;
+	t[BOOT_TLB_PAGEMASK] = pmask;
+
+	if (pa < ae->start) {
+		ae->astart = ae->start = pa;
+		if ((validmask & 0x1) == 0)
+			ae->astart += pgsz;
 	}
-	astart = round_up(start, TLBSZ);
-	aend = round_down(end, TLBSZ);
-	if (aend <= astart) {
-		pr_info("Boot mem map: discard seg %lx-%lx\n",
-				(unsigned long)start, (unsigned long)end);
-		map->size = 0;
-		return;
+	if (endpa > ae->end) {
+		ae->aend = ae->end = endpa;
+		if ((validmask & 0x2) == 0)
+			ae->aend -= pgsz;
 	}
-	if (astart != start || aend != end) {
-		if (start != 0) {
-			map->addr = astart;
-			map->size = aend - astart;
-		}
-		pr_info("Boot mem map: %lx - %lx -> %lx-%lx\n",
-			(unsigned long)start, (unsigned long)end,
-			(unsigned long)astart, (unsigned long)aend);
-	} else
-		pr_info("Boot mem map: added %lx - %lx\n",
-			(unsigned long)astart, (unsigned long)aend);
+	*tlbcount = ntlb + 1;
+	return 0;
 }
 
+/*
+ * Calculate the TLB entries needed to wire down the memory map
+ *
+ * Tries to use the largest page sizes possible and discards memory
+ * which cannot be mapped
+ */
 static void nlm_calc_wired_tlbs(void)
 {
-	u64 *tlbarr;
-	u32 *tlbcount;
-	u64 lo0, lo1, vaddr;
-	phys_addr_t astart, aend, p;
-	unsigned long bootdata = CKSEG1ADDR(RESET_DATA_PHYS);
-	int i, pos;
+	u64 pgsz, pgmask, p;
+	phys_addr_t astart, aend, pend, nstart;
+	phys_addr_t tstart, tend, mstart, mend;
+	struct boot_mem_map_entry *bmap;
+	int i, nr_map;
+
+	nr_map = boot_mem_map.nr_map;
+	bmap = boot_mem_map.map;
+
+	for (i = 0; i < nr_map; i++) {
+		alloc_map[i].start = alloc_map[i].astart = ~0ull;
+		alloc_map[i].end = alloc_map[i].aend = 0;
+	}
 
-	tlbarr = (u64 *)(bootdata + BOOT_TLBS_START);
-	tlbcount = (u32 *)(bootdata + BOOT_NTLBS);
+	/* force the first entry with one 256M lo0 page */
+	addtlb(0, 0x10000000, PM_256M, 0x1, &alloc_map[0]);
 
-	pos = 0;
-	for (i = 0; i < boot_mem_map.nr_map; i++) {
-		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
-			continue;
-		astart = boot_mem_map.map[i].addr;
-		aend =	astart + boot_mem_map.map[i].size;
+	/* starting page size and page mask */
+	if (cpu_is_xlpii()) {
+		pgsz = SZ_4G;
+		pgmask = PM_4G;
+	} else {
+		pgsz = SZ_256M;
+		pgmask = PM_256M;
+	}
 
-		/* fudge first entry for now  */
-		if (astart < 0x10000000) {
-			astart = 0;
-			aend = 0x10000000;
-		}
-		for (p = round_down(astart, 2 * TLBSZ);
-			p < round_up(aend, 2 * TLBSZ);) {
-				vaddr = PAGE_OFFSET + p;
-				lo0 = (p >= astart) ? PTE_MAPKERN(p) : 1;
-				p += TLBSZ;
-				lo1 = (p < aend) ? PTE_MAPKERN(p) : 1;
-				p += TLBSZ;
-
-				tlbarr[BOOT_TLB_ENTRYHI] = vaddr;
-				tlbarr[BOOT_TLB_ENTRYLO0] = lo0;
-				tlbarr[BOOT_TLB_ENTRYLO1] = lo1;
-				tlbarr[BOOT_TLB_PAGEMASK] = PM_TLBSZ;
-				tlbarr += (BOOT_TLB_SIZE / sizeof(tlbarr[0]));
-
-				if (++pos >= TLB_MAXWIRED) {
-					pr_err("Ran out of TLBs at %llx, ",
-							(unsigned long long)p);
-					pr_err("Discarding rest of memory!\n");
-					boot_mem_map.nr_map = i + 1;
-					boot_mem_map.map[i].size = p -
-						boot_mem_map.map[i].addr;
+	/* do multiple passes with successively smaller page sizes */
+	for (; pgsz >= MINPGSZ; pgsz /= 4, pgmask = (pgmask >> 2) ^ 0x1800) {
+		for (i = 0; i < nr_map; i++) {
+			if (bmap[i].type != BOOT_MEM_RAM)
+				continue;
+
+			/* previous mapping end and next mapping start */
+			pend = (i == 0) ? 0 : alloc_map[i - 1].end;
+			nstart = (i == nr_map - 1) ? ~0ull : bmap[i + 1].addr;
+
+			/* mem block start and end */
+			mstart = round_up(bmap[i].addr, MINPGSZ);
+			mend = round_down(bmap[i].addr + bmap[i].size, MINPGSZ);
+
+			/* allocated area in the memory block, start and end */
+			astart = alloc_map[i].start;
+			aend = alloc_map[i].end;
+
+			/* skip fully mapped blocks */
+			if (mstart >= astart && mend <= aend)
+				continue;
+
+			/* boundaries aligned to the current page size */
+			tstart = round_up(mstart, 2 * pgsz);
+			tend = round_down(mend, 2 * pgsz);
+			if (tstart > tend)
+				continue;
+
+			/* use LO1 of a TLB entry */
+			if (mstart + pgsz == tstart && pend <= mstart - pgsz)
+				if (addtlb(mstart - pgsz, pgsz,
+						pgmask, 0x2, &alloc_map[i]))
 					goto out;
+
+			for (p = tstart; p < tend;) {
+				if (astart < aend && p == astart) {
+					p = aend;
+					continue;
 				}
+				if (addtlb(p, pgsz, pgmask, 0x3, &alloc_map[i]))
+					goto out;
+				p += 2 * pgsz;
+			}
+
+			/* use LO0 of a TLB entry */
+			if (tend + pgsz == mend && nstart >= mend + pgsz)
+				if (addtlb(tend, pgsz,
+						pgmask, 0x1, &alloc_map[i]))
+					goto out;
 		}
 	}
 out:
-	*tlbcount = pos;
-	pr_info("%d TLB entires used for mapped kernel.\n", pos);
+	for (i = 0; i < nr_map; i++) {
+		mstart = bmap[i].addr;
+		mend = bmap[i].addr + bmap[i].size;
+		astart = alloc_map[i].astart;
+		aend = alloc_map[i].aend;
+
+		if (astart >= aend) {
+			bmap[i].size = 0;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)mstart,
+				(unsigned long long)mend);
+			continue;
+		}
+		if (bmap[i].addr < astart) {
+			bmap[i].addr = astart;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)bmap[i].addr,
+				(unsigned long long)astart);
+		}
+		if (mend > aend) {
+			bmap[i].size = aend - bmap[i].addr;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)aend,
+				(unsigned long long)mend);
+		}
+		pr_debug("%2d alloc: %10llx %10llx mem %10llx %10llx\n", i,
+						astart, aend, bmap[i].addr,
+						bmap[i].addr + bmap[i].size);
+	}
+	pr_info("%d TLB entires used for mapped kernel.\n",
+				*(u32 *)nlm_get_boot_data(BOOT_NTLBS));
 }
 #endif
 
@@ -143,13 +231,6 @@ void __init plat_mem_fixup(void)
 	int i;
 
 #if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
-	/* trim memory regions to PM_TLBSZ boundaries */
-	for (i = 0; i < boot_mem_map.nr_map; i++) {
-		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
-			continue;
-		nlm_tlb_align(&boot_mem_map.map[i]);
-	}
-
 	/* calculate and save wired TLB entries */
 	nlm_calc_wired_tlbs();
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread
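
The payoff of the larger page sizes is easy to put numbers on. Each
wired entry maps a pair of pages through EntryLo0/EntryLo1 (an unused
half is written as 1, i.e. global but not valid, and PTE_MAPKERN()
places the PFN at bit 6 above the global/valid/dirty/cache-attribute
bits), so one entry covers 2 * pgsz of physical memory. A small sketch
of that arithmetic, assuming contiguous RAM and ignoring the forced
256M entry for the first block (the 32 GB figure is just an
illustrative value):

	#include <stdio.h>
	#include <stdint.h>

	#define SZ_256M		(256ull << 20)
	#define SZ_1G		(1ull << 30)
	#define SZ_4G		(4ull << 30)
	#define TLB_MAXWIRED	28

	/* each wired entry covers two pages: EntryLo0 and EntryLo1 */
	static uint64_t entries_for(uint64_t ram, uint64_t pgsz)
	{
		return (ram + 2 * pgsz - 1) / (2 * pgsz);
	}

	int main(void)
	{
		uint64_t ram = 32 * SZ_1G;	/* hypothetical 32 GB system */

		printf("256M pages: %llu wired entries (limit %d)\n",
		       (unsigned long long)entries_for(ram, SZ_256M),
		       TLB_MAXWIRED);
		printf("4G pages:   %llu wired entries\n",
		       (unsigned long long)entries_for(ram, SZ_4G));
		return 0;
	}

With 256M pages the 32 GB example needs 64 entries, well over the
TLB_MAXWIRED limit of 28, while 4G page pairs map it in 4.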

* [PATCH 5/5] MIPS: Netlogic: Map kernel with 1G/4G pages on XLPII
@ 2014-05-09 16:09     ` Jayachandran C
  0 siblings, 0 replies; 14+ messages in thread
From: Jayachandran C @ 2014-05-09 16:09 UTC (permalink / raw)
  To: linux-mips; +Cc: Jayachandran C, ralf

XLP2XX and XLP9XX support 1G and 4G pages. Use these larger page
sizes for mapping physical memory when mapped kernel support is
enabled.

This reduces the number of wired TLB entries needed on systems with
large amounts of RAM.

Signed-off-by: Jayachandran C <jchandra@broadcom.com>
---
 arch/mips/netlogic/common/memory.c |  237 ++++++++++++++++++++++++------------
 1 file changed, 159 insertions(+), 78 deletions(-)

diff --git a/arch/mips/netlogic/common/memory.c b/arch/mips/netlogic/common/memory.c
index 6d967ce..83aeb7c 100644
--- a/arch/mips/netlogic/common/memory.c
+++ b/arch/mips/netlogic/common/memory.c
@@ -34,6 +34,7 @@
 
 #include <linux/kernel.h>
 #include <linux/types.h>
+#include <linux/sizes.h>
 
 #include <asm/bootinfo.h>
 #include <asm/pgtable.h>
@@ -41,100 +42,187 @@
 #include <asm/tlb.h>
 
 #include <asm/netlogic/common.h>
+#include <asm/netlogic/xlp-hal/xlp.h>
 
-#define TLBSZ		(256 * 1024 * 1024)
-#define PM_TLBSZ	PM_256M
-#define PTE_MAPKERN(pa)	(((pa >> 12) << 6) | 0x2f)
+#define SZ_4G		(4ull * 1024 * 1024 * 1024)
+#define PM_4G		0x1ffffe000
+#define MINPGSZ		SZ_256M
+
+#define PTE_MAPKERN(pa)	((((pa) >> 12) << 6) | 0x2f)
 #define TLB_MAXWIRED	28
 
 static const int prefetch_backup = 512;
 
 #if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
-static void nlm_tlb_align(struct boot_mem_map_entry *map)
+
+/* To track the allocated area of a boot_mem_map segment */
+struct alloc_entry {
+	/* Start and end of the va mapped */
+	unsigned long long start;
+	unsigned long long end;
+
+	/* When just one of lo0/lo1 is used, the valid area is half of above */
+	unsigned long long astart;
+	unsigned long long aend;
+} alloc_map[32];
+
+static inline int addtlb(phys_addr_t pa, u64 pgsz, u64 pmask,
+	unsigned int validmask, struct alloc_entry *ae)
 {
-	phys_t astart, aend, start, end;
+	phys_addr_t endpa;
+	u64 *t;
+	u32 *tlbcount;
+	int ntlb;
+
+	tlbcount = (u32 *)nlm_get_boot_data(BOOT_NTLBS);
+	ntlb = *tlbcount;
+	endpa = pa + 2 * pgsz;
 
-	start = map->addr;
-	end = start + map->size;
+	pr_debug("%2d - pa0 %llx pa1 %llx pgsz %llx valid %x\n",
+				ntlb, pa, endpa, pgsz, validmask);
+	if (ntlb == TLB_MAXWIRED) {
+		pr_err("Ran out of TLB entries pa %llx pgsz %llx\n", pa, pgsz);
+		return -1;
+	}
 
-	/* fudge first entry for now  */
-	if (start < 0x10000000) {
-		start = 0;
-		end = 0x10000000;
+	t = nlm_get_boot_data(BOOT_TLBS_START);
+	t += ntlb * (BOOT_TLB_SIZE / sizeof(t[0]));
+	t[BOOT_TLB_ENTRYHI] = pa + PAGE_OFFSET;
+	t[BOOT_TLB_ENTRYLO0] = (validmask & 0x1) ? PTE_MAPKERN(pa) : 1;
+	t[BOOT_TLB_ENTRYLO1] = (validmask & 0x2) ? PTE_MAPKERN(pa + pgsz) : 1;
+	t[BOOT_TLB_PAGEMASK] = pmask;
+
+	if (pa < ae->start) {
+		ae->astart = ae->start = pa;
+		if ((validmask & 0x1) == 0)
+			ae->astart += pgsz;
 	}
-	astart = round_up(start, TLBSZ);
-	aend = round_down(end, TLBSZ);
-	if (aend <= astart) {
-		pr_info("Boot mem map: discard seg %lx-%lx\n",
-				(unsigned long)start, (unsigned long)end);
-		map->size = 0;
-		return;
+	if (endpa > ae->end) {
+		ae->aend = ae->end = endpa;
+		if ((validmask & 0x2) == 0)
+			ae->aend -= pgsz;
 	}
-	if (astart != start || aend != end) {
-		if (start != 0) {
-			map->addr = astart;
-			map->size = aend - astart;
-		}
-		pr_info("Boot mem map: %lx - %lx -> %lx-%lx\n",
-			(unsigned long)start, (unsigned long)end,
-			(unsigned long)astart, (unsigned long)aend);
-	} else
-		pr_info("Boot mem map: added %lx - %lx\n",
-			(unsigned long)astart, (unsigned long)aend);
+	*tlbcount = ntlb + 1;
+	return 0;
 }
 
+/*
+ * Calculate the TLB entries needed to wire down the memory map
+ *
+ * Tries to use the largest page sizes possible and discards memory
+ * which cannot be mapped
+ */
 static void nlm_calc_wired_tlbs(void)
 {
-	u64 *tlbarr;
-	u32 *tlbcount;
-	u64 lo0, lo1, vaddr;
-	phys_addr_t astart, aend, p;
-	unsigned long bootdata = CKSEG1ADDR(RESET_DATA_PHYS);
-	int i, pos;
+	u64 pgsz, pgmask, p;
+	phys_addr_t astart, aend, pend, nstart;
+	phys_addr_t tstart, tend, mstart, mend;
+	struct boot_mem_map_entry *bmap;
+	int i, nr_map;
+
+	nr_map = boot_mem_map.nr_map;
+	bmap = boot_mem_map.map;
+
+	for (i = 0; i < nr_map; i++) {
+		alloc_map[i].start = alloc_map[i].astart = ~0ull;
+		alloc_map[i].end = alloc_map[i].aend = 0;
+	}
 
-	tlbarr = (u64 *)(bootdata + BOOT_TLBS_START);
-	tlbcount = (u32 *)(bootdata + BOOT_NTLBS);
+	/* force the first entry with one 256M lo0 page */
+	addtlb(0, 0x10000000, PM_256M, 0x1, &alloc_map[0]);
 
-	pos = 0;
-	for (i = 0; i < boot_mem_map.nr_map; i++) {
-		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
-			continue;
-		astart = boot_mem_map.map[i].addr;
-		aend =	astart + boot_mem_map.map[i].size;
+	/* starting page size and page mask */
+	if (cpu_is_xlpii()) {
+		pgsz = SZ_4G;
+		pgmask = PM_4G;
+	} else {
+		pgsz = SZ_256M;
+		pgmask = PM_256M;
+	}
 
-		/* fudge first entry for now  */
-		if (astart < 0x10000000) {
-			astart = 0;
-			aend = 0x10000000;
-		}
-		for (p = round_down(astart, 2 * TLBSZ);
-			p < round_up(aend, 2 * TLBSZ);) {
-				vaddr = PAGE_OFFSET + p;
-				lo0 = (p >= astart) ? PTE_MAPKERN(p) : 1;
-				p += TLBSZ;
-				lo1 = (p < aend) ? PTE_MAPKERN(p) : 1;
-				p += TLBSZ;
-
-				tlbarr[BOOT_TLB_ENTRYHI] = vaddr;
-				tlbarr[BOOT_TLB_ENTRYLO0] = lo0;
-				tlbarr[BOOT_TLB_ENTRYLO1] = lo1;
-				tlbarr[BOOT_TLB_PAGEMASK] = PM_TLBSZ;
-				tlbarr += (BOOT_TLB_SIZE / sizeof(tlbarr[0]));
-
-				if (++pos >= TLB_MAXWIRED) {
-					pr_err("Ran out of TLBs at %llx, ",
-							(unsigned long long)p);
-					pr_err("Discarding rest of memory!\n");
-					boot_mem_map.nr_map = i + 1;
-					boot_mem_map.map[i].size = p -
-						boot_mem_map.map[i].addr;
+	/* do multiple passes with successively smaller page sizes */
+	for (; pgsz >= MINPGSZ; pgsz /= 4, pgmask = (pgmask >> 2) ^ 0x1800) {
+		for (i = 0; i < nr_map; i++) {
+			if (bmap[i].type != BOOT_MEM_RAM)
+				continue;
+
+			/* previous mapping end and next mapping start */
+			pend = (i == 0) ? 0 : alloc_map[i - 1].end;
+			nstart = (i == nr_map - 1) ? ~0ull : bmap[i + 1].addr;
+
+			/* mem block start and end */
+			mstart = round_up(bmap[i].addr, MINPGSZ);
+			mend = round_down(bmap[i].addr + bmap[i].size, MINPGSZ);
+
+			/* allocated area in the memory block, start and end */
+			astart = alloc_map[i].start;
+			aend = alloc_map[i].end;
+
+			/* skip fully mapped blocks */
+			if (mstart >= astart && mend <= aend)
+				continue;
+
+			/* boundaries aligned to the current page size */
+			tstart = round_up(mstart, 2 * pgsz);
+			tend = round_down(mend, 2 * pgsz);
+			if (tstart > tend)
+				continue;
+
+			/* use LO1 of a TLB entry */
+			if (mstart + pgsz == tstart && pend <= mstart - pgsz)
+				if (addtlb(mstart - pgsz, pgsz,
+						pgmask, 0x2, &alloc_map[i]))
 					goto out;
+
+			for (p = tstart; p < tend;) {
+				if (astart < aend && p == astart) {
+					p = aend;
+					continue;
 				}
+				if (addtlb(p, pgsz, pgmask, 0x3, &alloc_map[i]))
+					goto out;
+				p += 2 * pgsz;
+			}
+
+			/* use LO0 of a TLB entry */
+			if (tend + pgsz == mend && nstart >= mend + pgsz)
+				if (addtlb(tend, pgsz,
+						pgmask, 0x1, &alloc_map[i]))
+					goto out;
 		}
 	}
 out:
-	*tlbcount = pos;
-	pr_info("%d TLB entires used for mapped kernel.\n", pos);
+	for (i = 0; i < nr_map; i++) {
+		mstart = bmap[i].addr;
+		mend = bmap[i].addr + bmap[i].size;
+		astart = alloc_map[i].astart;
+		aend = alloc_map[i].aend;
+
+		pr_info("%2d alloc: %10llx %10llx mem %10llx %10llx\n", i,
+						astart, aend, bmap[i].addr,
+						bmap[i].addr + bmap[i].size);
+		if (astart >= aend) {
+			bmap[i].size = 0;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)mstart,
+				(unsigned long long)mend);
+			continue;
+		}
+		if (bmap[i].addr < astart) {
+			bmap[i].addr = astart;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)bmap[i].addr,
+				(unsigned long long)astart);
+		}
+		if (mend > aend) {
+			bmap[i].size = aend - bmap[i].addr;
+			pr_info("%2d: Discarded %#10llx - %#10llx\n", i,
+				(unsigned long long)aend,
+				(unsigned long long)mend);
+		}
+	}
+	pr_info("%d TLB entires used for mapped kernel.\n",
+				*(u32 *)nlm_get_boot_data(BOOT_NTLBS));
 }
 #endif
 
@@ -143,13 +231,6 @@ void __init plat_mem_fixup(void)
 	int i;
 
 #if defined(CONFIG_MAPPED_KERNEL) && defined(CONFIG_64BIT)
-	/* trim memory regions to PM_TLBSZ boundaries */
-	for (i = 0; i < boot_mem_map.nr_map; i++) {
-		if (boot_mem_map.map[i].type != BOOT_MEM_RAM)
-			continue;
-		nlm_tlb_align(&boot_mem_map.map[i]);
-	}
-
 	/* calculate and save wired TLB entries */
 	nlm_calc_wired_tlbs();
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 14+ messages in thread
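
One detail worth calling out in both revisions of this patch is the
page-mask update in the pass loop, pgmask = (pgmask >> 2) ^ 0x1800.
Shifting right by two steps down to the next smaller page size
(4G -> 1G -> 256M, matching pgsz /= 4), and the XOR clears the two
mask bits that would otherwise shift down into the fixed low part of
the PageMask field. A standalone sketch showing the resulting mask
sequence (PM_4G taken from the patch itself):

	#include <stdio.h>
	#include <stdint.h>

	#define PM_4G	0x1ffffe000ull	/* PageMask value from the patch */

	int main(void)
	{
		uint64_t pm = PM_4G;
		int i;

		/*
		 * Prints 0x1ffffe000, 0x7fffe000, 0x1fffe000 -- the
		 * PageMask values for 4G, 1G and 256M pages.
		 */
		for (i = 0; i < 3; i++) {
			printf("pagemask %#llx\n", (unsigned long long)pm);
			pm = (pm >> 2) ^ 0x1800;
		}
		return 0;
	}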

end of thread

Thread overview: 14+ messages
2015-01-16 12:38 [PATCH 0/5] Mapped kernel support for Broadcom XLP/XLPII Jayachandran C
2015-01-16 12:38 ` Jayachandran C
2015-01-16 12:38 ` [PATCH 1/5] MIPS: Make MAPPED_KERNEL config option common Jayachandran C
2015-01-16 12:38   ` Jayachandran C
2015-01-16 12:38 ` [PATCH 2/5] MIPS: Add platform function to fixup memory Jayachandran C
2015-01-16 12:38   ` Jayachandran C
2015-01-16 12:38 ` [PATCH 3/5] MIPS: Netlogic: Mapped kernel support Jayachandran C
2015-01-16 12:38   ` Jayachandran C
2015-01-16 12:38 ` [PATCH 4/5] MIPS: Compress MAPPED kernels Jayachandran C
2015-01-16 12:38   ` Jayachandran C
2015-01-16 12:38 ` [PATCH 5/5] MIPS: Netlogic: Map kernel with 1G/4G pages on XLPII Jayachandran C
2015-01-16 12:38   ` Jayachandran C
  -- strict thread matches above, loose matches on Subject: below --
2014-05-09 15:28 [PATCH RFC 0/5] Mapped kernel support for Broadcom XLP Jayachandran C
2014-05-09 16:09 ` [PATCH 1/5] MIPS: Make MAPPED_KERNEL config option common Jayachandran C
2014-05-09 16:09   ` [PATCH 5/5] MIPS: Netlogic: Map kernel with 1G/4G pages on XLPII Jayachandran C
2014-05-09 16:09     ` Jayachandran C
