linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel
@ 2013-07-04 12:54 Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS Kevin Hao
                   ` (7 more replies)
  0 siblings, 8 replies; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

v2:
These patches are based on Ben's next branch. In this version we choose
to do a second relocation if PAGE_OFFSET is not mapped to memstart_addr,
and we also choose to set up the tlb1 entries for the kernel space in
address space 1. With this implementation:
  * We can load the kernel anywhere between
    memstart_addr ~ memstart_addr + 768M.
  * We can reserve any memory between memstart_addr ~ memstart_addr + 768M
    for a kdump kernel.

I have done a kdump boot test on a p2020rdb board with the memory reserved
by 'crashkernel=32M@320M'.


v1:
Currently the fsl booke 32bit kernel uses the DYNAMIC_MEMSTART relocation
method. The RELOCATABLE method is more flexible and has fewer alignment
restrictions, so enable this feature on this platform and use it by
default for the kdump kernel.

These patches have passed the kdump boot test on a p2020rdb board.

Kevin Hao (8):
  powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS
  powerpc/fsl_booke: introduce get_phys_addr function
  powerpc: enable the relocatable support for the fsl booke 32bit kernel
  powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  memblock: introduce the memblock_reinit function
  powerpc: introduce early_get_first_memblock_info
  powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for    
    relocatable kernel
  powerpc/fsl_booke: enable the relocatable for the kdump kernel

 arch/powerpc/Kconfig                          |   5 +-
 arch/powerpc/kernel/entry_32.S                |   8 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  14 +-
 arch/powerpc/kernel/head_fsl_booke.S          | 233 ++++++++++++++++++++++++--
 arch/powerpc/kernel/prom.c                    |  24 +++
 arch/powerpc/mm/fsl_booke_mmu.c               |  56 +++++++
 arch/powerpc/mm/hugetlbpage-book3e.c          |   3 +-
 arch/powerpc/mm/mmu_decl.h                    |   2 +
 include/linux/memblock.h                      |   1 +
 include/linux/of_fdt.h                        |   1 +
 mm/memblock.c                                 |  33 ++--
 11 files changed, 340 insertions(+), 40 deletions(-)

-- 
1.8.1.4


* [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-26 23:14   ` Scott Wood
  2013-07-04 12:54 ` [PATCH v2 2/8] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

The e500v1 doesn't implement MAS7, so we should avoid accessing this
register on that implementation. Some code uses CONFIG_PHYS_64BIT to
guard these accesses, but that is not accurate: in theory we can enable
CONFIG_PHYS_64BIT for an e500v1 board, and CONFIG_PHYS_64BIT is also
enabled by default in mpc85xx_defconfig, which definitely has to support
e500v1 boards. Checking the MMU_FTR_BIG_PHYS feature at run time is the
right choice.
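
As a C-level illustration of the pattern (a sketch only, not a hunk from
this patch; write_mas7_mas3() is an invented name, the helpers are the
usual kernel ones already used in this series):

/*
 * Guard the MAS7 access with a run-time MMU feature check instead of
 * CONFIG_PHYS_64BIT, so a kernel built with 36-bit physical support
 * still runs on e500v1.
 */
static void write_mas7_mas3(u64 mas7_3)
{
	if (mmu_has_feature(MMU_FTR_BIG_PHYS))
		mtspr(SPRN_MAS7, upper_32_bits(mas7_3));
	mtspr(SPRN_MAS3, lower_32_bits(mas7_3));
}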

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/entry_32.S                | 8 +++++---
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 6 ++++--
 arch/powerpc/kernel/head_fsl_booke.S          | 4 ++++
 arch/powerpc/mm/hugetlbpage-book3e.c          | 3 ++-
 4 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 22b45a4..2ce22c2 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -75,10 +75,10 @@ crit_transfer_to_handler:
 	stw	r0,MAS3(r11)
 	mfspr	r0,SPRN_MAS6
 	stw	r0,MAS6(r11)
-#ifdef CONFIG_PHYS_64BIT
+BEGIN_MMU_FTR_SECTION
 	mfspr	r0,SPRN_MAS7
 	stw	r0,MAS7(r11)
-#endif /* CONFIG_PHYS_64BIT */
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
 #endif /* CONFIG_PPC_BOOK3E_MMU */
 #ifdef CONFIG_44x
 	mfspr	r0,SPRN_MMUCR
@@ -1112,8 +1112,10 @@ exc_exit_restart_end:
 #if defined(CONFIG_PPC_BOOK3E_MMU)
 #ifdef CONFIG_PHYS_64BIT
 #define	RESTORE_MAS7							\
+BEGIN_MMU_FTR_SECTION							\
 	lwz	r11,MAS7(r1);						\
-	mtspr	SPRN_MAS7,r11;
+	mtspr	SPRN_MAS7,r11;						\
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
 #else
 #define	RESTORE_MAS7
 #endif /* CONFIG_PHYS_64BIT */
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index a92c79b..2201f84 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -88,9 +88,11 @@ skpinv:	addi	r6,r6,1				/* Increment */
 1:	mflr	r7
 
 	mfspr	r8,SPRN_MAS3
-#ifdef CONFIG_PHYS_64BIT
+BEGIN_MMU_FTR_SECTION
 	mfspr	r23,SPRN_MAS7
-#endif
+MMU_FTR_SECTION_ELSE
+	li	r23,0
+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
 	and	r8,r6,r8
 	subfic	r9,r6,-4096
 	and	r9,r9,r7
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index d10a7ca..a04a48d 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -82,7 +82,11 @@ _ENTRY(_start);
 	and	r19,r3,r18		/* r19 = page offset */
 	andc	r31,r20,r18		/* r31 = page base */
 	or	r31,r31,r19		/* r31 = devtree phys addr */
+BEGIN_MMU_FTR_SECTION
 	mfspr	r30,SPRN_MAS7
+MMU_FTR_SECTION_ELSE
+	li	r30,0
+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
 
 	li	r25,0			/* phys kernel start (low) */
 	li	r24,0			/* CPU number */
diff --git a/arch/powerpc/mm/hugetlbpage-book3e.c b/arch/powerpc/mm/hugetlbpage-book3e.c
index 3bc7006..ac63e7e 100644
--- a/arch/powerpc/mm/hugetlbpage-book3e.c
+++ b/arch/powerpc/mm/hugetlbpage-book3e.c
@@ -103,7 +103,8 @@ void book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea,
 	if (mmu_has_feature(MMU_FTR_USE_PAIRED_MAS)) {
 		mtspr(SPRN_MAS7_MAS3, mas7_3);
 	} else {
-		mtspr(SPRN_MAS7, upper_32_bits(mas7_3));
+		if (mmu_has_feature(MMU_FTR_BIG_PHYS))
+			mtspr(SPRN_MAS7, upper_32_bits(mas7_3));
 		mtspr(SPRN_MAS3, lower_32_bits(mas7_3));
 	}
 
-- 
1.8.1.4


* [PATCH v2 2/8] powerpc/fsl_booke: introduce get_phys_addr function
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

Move the code which translates an effective address to a physical
address into a separate function, so it can be reused by other code.
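
For readers who prefer C, the assembly helper added below does roughly
the following (a paraphrase only; mas1_tsize_to_bytes() is a made-up
stand-in for the TSIZE decoding done with rlwinm, and the tlbsx step
itself has no C equivalent):

/* Assumes a tlbsx on 'ea' has just loaded MAS1/MAS3/MAS7 for the
 * matching TLB entry. */
static u64 get_phys_addr_sketch(unsigned long ea)
{
	unsigned long mas3, mas7 = 0, psize, pgoff;

	psize = mas1_tsize_to_bytes(mfspr(SPRN_MAS1));	/* page size in bytes */
	mas3  = mfspr(SPRN_MAS3);
	if (mmu_has_feature(MMU_FTR_BIG_PHYS))
		mas7 = mfspr(SPRN_MAS7);	/* high bits of the phys addr */

	pgoff = ea & (psize - 1);		/* offset within the page */

	/* phys addr = page base from MAS3/MAS7 plus the in-page offset */
	return ((u64)mas7 << 32) | (mas3 & ~(psize - 1)) | pgoff;
}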

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 54 +++++++++++++++++++++---------------
 1 file changed, 32 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index a04a48d..dab091e 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -65,28 +65,9 @@ _ENTRY(_start);
 	nop
 
 	/* Translate device tree address to physical, save in r30/r31 */
-	mfmsr	r16
-	mfspr	r17,SPRN_PID
-	rlwinm	r17,r17,16,0x3fff0000	/* turn PID into MAS6[SPID] */
-	rlwimi	r17,r16,28,0x00000001	/* turn MSR[DS] into MAS6[SAS] */
-	mtspr	SPRN_MAS6,r17
-
-	tlbsx	0,r3			/* must succeed */
-
-	mfspr	r16,SPRN_MAS1
-	mfspr	r20,SPRN_MAS3
-	rlwinm	r17,r16,25,0x1f		/* r17 = log2(page size) */
-	li	r18,1024
-	slw	r18,r18,r17		/* r18 = page size */
-	addi	r18,r18,-1
-	and	r19,r3,r18		/* r19 = page offset */
-	andc	r31,r20,r18		/* r31 = page base */
-	or	r31,r31,r19		/* r31 = devtree phys addr */
-BEGIN_MMU_FTR_SECTION
-	mfspr	r30,SPRN_MAS7
-MMU_FTR_SECTION_ELSE
-	li	r30,0
-ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
+	bl	get_phys_addr
+	mr	r30,r3
+	mr	r31,r4
 
 	li	r25,0			/* phys kernel start (low) */
 	li	r24,0			/* CPU number */
@@ -860,6 +841,35 @@ KernelSPE:
 #endif /* CONFIG_SPE */
 
 /*
+ * Translate the effec addr in r3 to phys addr. The phys addr will be put
+ * into r3(higher 32bit) and r4(lower 32bit)
+ */
+get_phys_addr:
+	mfmsr	r8
+	mfspr	r9,SPRN_PID
+	rlwinm	r9,r9,16,0x3fff0000	/* turn PID into MAS6[SPID] */
+	rlwimi	r9,r8,28,0x00000001	/* turn MSR[DS] into MAS6[SAS] */
+	mtspr	SPRN_MAS6,r9
+
+	tlbsx	0,r3			/* must succeed */
+
+	mfspr	r8,SPRN_MAS1
+	mfspr	r12,SPRN_MAS3
+	rlwinm	r9,r8,25,0x1f		/* r9 = log2(page size) */
+	li	r10,1024
+	slw	r10,r10,r9		/* r10 = page size */
+	addi	r10,r10,-1
+	and	r11,r3,r10		/* r11 = page offset */
+	andc	r4,r12,r10		/* r4 = page base */
+	or	r4,r4,r11		/* r4 = devtree phys addr */
+BEGIN_MMU_FTR_SECTION
+	mfspr	r3,SPRN_MAS7
+MMU_FTR_SECTION_ELSE
+	li	r3,0
+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
+	blr
+
+/*
  * Global functions
  */
 
-- 
1.8.1.4


* [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 2/8] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-26 23:28   ` Scott Wood
  2013-07-04 12:54 ` [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

This is based on the code in head_44x.S. Since we always align to
256M before mapping PAGE_OFFSET for a relocatable kernel, we also
change the initial tlb mapping to 256M.
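
A worked example of the virt_phys_offset computation added below, with
numbers picked purely for illustration:

/* Example only: a relocatable kernel loaded at physical 320M with
 * KERNELBASE = 0xc0000000 and the 256M alignment used here. */
unsigned long kernstart_addr   = 0x14000000;			/* 320M */
unsigned long start            = kernstart_addr & ~0xfffffff;	/* 0x10000000 */
unsigned long base             = 0xc0000000 & ~0xfffffff;	/* 0xc0000000 */
unsigned long virt_phys_offset = base - start;			/* 0xb0000000 */
/* _stext then runs at KERNELBASE + (kernstart_addr & 0xfffffff) = 0xc4000000 */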

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v2: Move the code that sets kernstart_addr and virt_phys_offset into a
    C function, so we can expand it easily later.

Hi Scott,

I still use the 256M align for the init tlb as in v1 for the following reasons:
  * This should be the most possible case in reality.
  * This is just for very early booting code and should not be a big issue
    if the first tlb entry shrink to a less size later.

 arch/powerpc/Kconfig                          |  2 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  8 +++---
 arch/powerpc/kernel/head_fsl_booke.S          | 37 +++++++++++++++++++++++++++
 arch/powerpc/mm/fsl_booke_mmu.c               | 28 ++++++++++++++++++++
 4 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5374776..5b2e115 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -859,7 +859,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
 	bool "Build a relocatable kernel"
-	depends on ADVANCED_OPTIONS && FLATMEM && 44x
+	depends on ADVANCED_OPTIONS && FLATMEM && (44x || FSL_BOOKE)
 	select NONSTATIC_KERNEL
 	help
 	  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index 2201f84..211e507 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -167,10 +167,10 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
 	lis	r6,0x1000		/* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
 	mtspr	SPRN_MAS0,r6
 	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
-	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_256M))@l
 	mtspr	SPRN_MAS1,r6
-	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_SMP)@h
-	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_SMP)@l
+	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_256M, M_IF_SMP)@h
+	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_256M, M_IF_SMP)@l
 	mtspr	SPRN_MAS2,r6
 	mtspr	SPRN_MAS3,r8
 	tlbwe
@@ -178,6 +178,8 @@ ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
 /* 7. Jump to KERNELBASE mapping */
 	lis	r6,(KERNELBASE & ~0xfff)@h
 	ori	r6,r6,(KERNELBASE & ~0xfff)@l
+	rlwinm	r7,r25,0,0x0fffffff
+	add	r6,r7,r6
 
 #elif defined(ENTRY_MAPPING_KEXEC_SETUP)
 /*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index dab091e..134064d 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -73,6 +73,33 @@ _ENTRY(_start);
 	li	r24,0			/* CPU number */
 	li	r23,0			/* phys kernel start (high) */
 
+#ifdef CONFIG_RELOCATABLE
+	bl	0f			/* Get our runtime address */
+0:	mflr	r3			/* Make it accessible */
+	addis	r3,r3,(_stext - 0b)@ha
+	addi	r3,r3,(_stext - 0b)@l	/* Get our current runtime base */
+
+	/* Translate _stext address to physical, save in r23/r25 */
+	bl	get_phys_addr
+	mr	r23,r3
+	mr	r25,r4
+
+	/*
+	 * We have the runtime (virutal) address of our base.
+	 * We calculate our shift of offset from a 256M page.
+	 * We could map the 256M page we belong to at PAGE_OFFSET and
+	 * get going from there.
+	 */
+	lis	r4,KERNELBASE@h
+	ori	r4,r4,KERNELBASE@l
+	rlwinm	r6,r25,0,0xfffffff		/* r6 = PHYS_START % 256M */
+	rlwinm	r5,r4,0,0xfffffff		/* r5 = KERNELBASE % 256M */
+	subf	r3,r5,r6			/* r3 = r6 - r5 */
+	add	r3,r4,r3			/* Required Virutal Address */
+
+	bl	relocate
+#endif
+
 /* We try to not make any assumptions about how the boot loader
  * setup or used the TLBs.  We invalidate all mappings from the
  * boot loader and load a single entry in TLB1[0] to map the
@@ -182,6 +209,16 @@ _ENTRY(__early_start)
 
 	bl	early_init
 
+#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_PHYS_64BIT
+	mr	r3,r23
+	mr	r4,r25
+#else
+	mr	r3,r25
+#endif
+	bl	relocate_init
+#endif
+
 #ifdef CONFIG_DYNAMIC_MEMSTART
 	lis	r3,kernstart_addr@ha
 	la	r3,kernstart_addr@l(r3)
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 07ba45b..5fe271c 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -241,4 +241,32 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	/* 64M mapped initially according to head_fsl_booke.S */
 	memblock_set_current_limit(min_t(u64, limit, 0x04000000));
 }
+
+#ifdef CONFIG_RELOCATABLE
+notrace void __init relocate_init(phys_addr_t start)
+{
+	unsigned long base = KERNELBASE;
+
+	/*
+	 * Relocatable kernel support based on processing of dynamic
+	 * relocation entries.
+	 * Compute the virt_phys_offset :
+	 * virt_phys_offset = stext.run - kernstart_addr
+	 *
+	 * stext.run = (KERNELBASE & ~0xfffffff) + (kernstart_addr & 0xfffffff)
+	 * When we relocate, we have :
+	 *
+	 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
+	 *
+	 * hence:
+	 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
+	 *                              (kernstart_addr & ~0xfffffff)
+	 *
+	 */
+	kernstart_addr = start;
+	start &= ~0xfffffff;
+	base &= ~0xfffffff;
+	virt_phys_offset = base - start;
+}
+#endif
 #endif
-- 
1.8.1.4


* [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (2 preceding siblings ...)
  2013-07-04 12:54 ` [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-26 23:37   ` Scott Wood
  2013-07-04 12:54 ` [PATCH v2 5/8] memblock: introduce the memblock_reinit function Kevin Hao
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

We use tlb1 entries to map lowmem into the kernel space. The current
code assumes that the first tlb entry covers the kernel image, but this
is not true in some special cases, such as when we run a relocatable
kernel above 256M or set CONFIG_KERNEL_START above 256M. So we choose
to switch to address space 1 before setting up these tlb entries.
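
The call sequence this enables (mirroring the adjust_total_lowmem() hunk
below) is roughly:

	/* Do the CAM setup from a temporary AS1 mapping so it no longer
	 * matters whether the kernel image sits inside the first TLB1
	 * entry. Sketch only, see the real hunk below. */
	int esel;

	esel = switch_to_as1();		/* ESEL of the temporary AS1 entry */
	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
	restore_to_as0(esel);		/* back to AS0, temporary entry gone */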

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 81 ++++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/fsl_booke_mmu.c      |  2 +
 arch/powerpc/mm/mmu_decl.h           |  2 +
 3 files changed, 85 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 134064d..0cbfe95 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1172,6 +1172,87 @@ __secondary_hold_acknowledge:
 #endif
 
 /*
+ * Create a tbl entry with the same effective and physical address as
+ * the tlb entry used by the current running code. But set the TS to 1.
+ * Then switch to the address space 1. It will return with the r3 set to
+ * the ESEL of the new created tlb.
+ */
+_GLOBAL(switch_to_as1)
+	mflr	r5
+
+	/* Find a entry not used */
+	mfspr	r3,SPRN_TLB1CFG
+	andi.	r3,r3,0xfff
+	mfspr	r4,SPRN_PID
+	rlwinm	r4,r4,16,0x3fff0000	/* turn PID into MAS6[SPID] */
+	mtspr	SPRN_MAS6,r4
+1:	lis	r4,0x1000		/* Set MAS0(TLBSEL) = 1 */
+	addi	r3,r3,-1
+	rlwimi	r4,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r4
+	tlbre
+	mfspr	r4,SPRN_MAS1
+	andis.	r4,r4,MAS1_VALID@h
+	bne	1b
+
+	/* Get the tlb entry used by the current running code */
+	bl	0f
+0:	mflr	r4
+	tlbsx	0,r4
+
+	mfspr	r4,SPRN_MAS1
+	ori	r4,r4,MAS1_TS		/* Set the TS = 1 */
+	mtspr	SPRN_MAS1,r4
+
+	mfspr	r4,SPRN_MAS0
+	rlwinm	r4,r4,0,~MAS0_ESEL_MASK
+	rlwimi	r4,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r4
+	tlbwe
+	isync
+	sync
+
+	mfmsr	r4
+	ori	r4,r4,MSR_IS | MSR_DS
+	mtspr	SPRN_SRR0,r5
+	mtspr	SPRN_SRR1,r4
+	sync
+	rfi
+
+/*
+ * Restore to the address space 0 and also invalidate the tlb entry created
+ * by switch_to_as1.
+*/
+_GLOBAL(restore_to_as0)
+	mflr	r0
+
+	bl	0f
+0:	mflr	r9
+	addi	r9,r9,1f - 0b
+
+	mfmsr	r7
+	li	r8,(MSR_IS | MSR_DS)
+	andc	r7,r7,r8
+
+	mtspr	SPRN_SRR0,r9
+	mtspr	SPRN_SRR1,r7
+	sync
+	rfi
+
+	/* Invalidate the temporary tlb entry for AS1 */
+1:	lis	r9,0x1000		/* Set MAS0(TLBSEL) = 1 */
+	rlwimi	r9,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r9
+	tlbre
+	mfspr	r9,SPRN_MAS1
+	rlwinm	r9,r9,0,2,31		/* Clear MAS1 Valid and IPPROT */
+	mtspr	SPRN_MAS1,r9
+	tlbwe
+	isync
+	mtlr	r0
+	blr
+
+/*
  * We put a few things here that have to be page-aligned. This stuff
  * goes at the beginning of the data segment, which is page-aligned.
  */
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 5fe271c..8f60ef8 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -222,7 +222,9 @@ void __init adjust_total_lowmem(void)
 	/* adjust lowmem size to __max_low_memory */
 	ram = min((phys_addr_t)__max_low_memory, (phys_addr_t)total_lowmem);
 
+	i = switch_to_as1();
 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
+	restore_to_as0(i);
 
 	pr_info("Memory CAM mapping: ");
 	for (i = 0; i < tlbcam_index - 1; i++)
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 83eb5d5..3a65644 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -148,6 +148,8 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
 extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(unsigned long top);
 extern void adjust_total_lowmem(void);
+extern int switch_to_as1(void);
+extern void restore_to_as0(int);
 #endif
 extern void loadcam_entry(unsigned int index);
 
-- 
1.8.1.4


* [PATCH v2 5/8] memblock: introduce the memblock_reinit function
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (3 preceding siblings ...)
  2013-07-04 12:54 ` [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info Kevin Hao
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala, Andrew Morton; +Cc: Scott Wood, linux-mm, linuxppc

In the current code, the data used by memblock is initialized
statically. But in some special cases we may need to scan the memory
twice, so we need a way to reinitialize this data before the second
scan.
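
A sketch of the double-scan pattern this is intended for (the real
caller is early_get_first_memblock_info(), added later in this series):

	/* early, throw-away pass over the flat device tree */
	of_scan_flat_dt(early_init_dt_scan_root, NULL);
	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
	/* ... note the base/size of the first memory node ... */

	/* drop everything the scan added so the normal scan in
	 * early_init_devtree() later starts from a pristine state */
	memblock_reinit();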

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 include/linux/memblock.h |  1 +
 mm/memblock.c            | 33 +++++++++++++++++++++++----------
 2 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f388203..9d55311 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -58,6 +58,7 @@ int memblock_remove(phys_addr_t base, phys_addr_t size);
 int memblock_free(phys_addr_t base, phys_addr_t size);
 int memblock_reserve(phys_addr_t base, phys_addr_t size);
 void memblock_trim_memory(phys_addr_t align);
+void memblock_reinit(void);
 
 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
diff --git a/mm/memblock.c b/mm/memblock.c
index c5fad93..9406ce6 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -23,23 +23,36 @@
 static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_REGIONS] __initdata_memblock;
 static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_REGIONS] __initdata_memblock;
 
-struct memblock memblock __initdata_memblock = {
-	.memory.regions		= memblock_memory_init_regions,
-	.memory.cnt		= 1,	/* empty dummy entry */
-	.memory.max		= INIT_MEMBLOCK_REGIONS,
-
-	.reserved.regions	= memblock_reserved_init_regions,
-	.reserved.cnt		= 1,	/* empty dummy entry */
-	.reserved.max		= INIT_MEMBLOCK_REGIONS,
+#define INIT_MEMBLOCK {							\
+	.memory.regions		= memblock_memory_init_regions,		\
+	.memory.cnt		= 1,	/* empty dummy entry */		\
+	.memory.max		= INIT_MEMBLOCK_REGIONS,		\
+									\
+	.reserved.regions	= memblock_reserved_init_regions,	\
+	.reserved.cnt		= 1,	/* empty dummy entry */		\
+	.reserved.max		= INIT_MEMBLOCK_REGIONS,		\
+									\
+	.current_limit		= MEMBLOCK_ALLOC_ANYWHERE,		\
+}
 
-	.current_limit		= MEMBLOCK_ALLOC_ANYWHERE,
-};
+struct memblock memblock __initdata_memblock = INIT_MEMBLOCK;
 
 int memblock_debug __initdata_memblock;
 static int memblock_can_resize __initdata_memblock;
 static int memblock_memory_in_slab __initdata_memblock = 0;
 static int memblock_reserved_in_slab __initdata_memblock = 0;
 
+void __init memblock_reinit(void)
+{
+	memset(memblock_memory_init_regions, 0,
+				sizeof(memblock_memory_init_regions));
+	memset(memblock_reserved_init_regions, 0,
+				sizeof(memblock_reserved_init_regions));
+
+	memset(&memblock, 0, sizeof(memblock));
+	memblock = (struct memblock) INIT_MEMBLOCK;
+}
+
 /* inline so we don't get a warning when pr_debug is compiled out */
 static __init_memblock const char *
 memblock_type_name(struct memblock_type *type)
-- 
1.8.1.4


* [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (4 preceding siblings ...)
  2013-07-04 12:54 ` [PATCH v2 5/8] memblock: introduce the memblock_reinit function Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-27  0:18   ` Scott Wood
  2013-07-04 12:54 ` [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
  2013-07-04 12:54 ` [PATCH v2 8/8] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao
  7 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala, Benjamin Herrenschmidt; +Cc: Scott Wood, linuxppc

Since a relocatable kernel can be loaded anywhere, there is no relation
between the kernel start address and memstart_addr, so we can't
calculate memstart_addr from the kernel start address. We also can't
postpone the relocation until we get the real memstart_addr from the
device tree, because that would be too late. So introduce a new
function we can use to get the first memblock address and size at a
very early stage (before machine_init).
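
A sketch of the intended caller (see relocate_init() in the next patch;
dt_ptr here stands for the physical address of the flat device tree
passed by the boot loader):

	phys_addr_t size;

	/* after this call memstart_addr and the size of the first
	 * memory block are known, long before early_init_devtree() */
	early_get_first_memblock_info(__va(dt_ptr), &size);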

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/prom.c | 24 ++++++++++++++++++++++++
 include/linux/of_fdt.h     |  1 +
 2 files changed, 25 insertions(+)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index eb23ac9..9a69d2d 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -753,6 +753,30 @@ void __init early_init_devtree(void *params)
 	DBG(" <- early_init_devtree()\n");
 }
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * This function run before early_init_devtree, so we have to init
+ * initial_boot_params. Since early_init_dt_scan_memory_ppc will be
+ * executed again in early_init_devtree, we have to reinitialize the
+ * memblock data before return.
+ */
+void __init early_get_first_memblock_info(void *params, phys_addr_t *size)
+{
+	/* Setup flat device-tree pointer */
+	initial_boot_params = params;
+
+	/* Scan memory nodes and rebuild MEMBLOCKs */
+	of_scan_flat_dt(early_init_dt_scan_root, NULL);
+	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
+
+	if (size)
+		*size = first_memblock_size;
+
+	/* Undo what early_init_dt_scan_memory_ppc does to memblock */
+	memblock_reinit();
+}
+#endif
+
 /*******
  *
  * New implementation of the OF "find" APIs, return a refcounted
diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
index ed136ad..befe744 100644
--- a/include/linux/of_fdt.h
+++ b/include/linux/of_fdt.h
@@ -117,6 +117,7 @@ extern int early_init_dt_scan_root(unsigned long node, const char *uname,
 /* Other Prototypes */
 extern void unflatten_device_tree(void);
 extern void early_init_devtree(void *);
+extern void early_get_first_memblock_info(void *, phys_addr_t *);
 #else /* CONFIG_OF_FLATTREE */
 static inline void unflatten_device_tree(void) {}
 #endif /* CONFIG_OF_FLATTREE */
-- 
1.8.1.4


* [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (5 preceding siblings ...)
  2013-07-04 12:54 ` [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  2013-07-27  0:17   ` Scott Wood
  2013-08-06  0:14   ` Scott Wood
  2013-07-04 12:54 ` [PATCH v2 8/8] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao
  7 siblings, 2 replies; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

This is always true for a non-relocatable kernel; otherwise the kernel
would get stuck. But for a relocatable kernel it is a little more
complicated. When booting a relocatable kernel, we just align the
kernel start address down to a 256M boundary and map PAGE_OFFSET from
there. The relocation is based on this virtual address. But if this
address is not the same as memstart_addr, we will have to change the
mapping of PAGE_OFFSET to the real memstart_addr and do another
relocation.
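
As a concrete illustration (numbers picked for the example only),
consider a kdump kernel loaded at physical 320M (the crashkernel=32M@320M
case from the cover letter) on a board whose memory starts at 0; the
logic added to relocate_init() below then boils down to:

	/* First pass: start = 320M & ~0xfffffff = 256M, so PAGE_OFFSET
	 * was mapped to 256M, which is not memstart_addr (0). */
	early_get_first_memblock_info(__va(dt_ptr), &size);
	if (start != memstart_addr) {
		/* remap PAGE_OFFSET to memstart_addr from AS1, then jump
		 * back to _start so the kernel relocates itself again,
		 * this time with virt_phys_offset = PAGE_OFFSET - 0 */
		n = switch_to_as1();
		map_mem_in_cams(size, CONFIG_LOWMEM_CAM_NUM);
		restore_to_as0(n, memstart_addr - start, __va(dt_ptr));
	}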

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 75 +++++++++++++++++++++++++++++++++---
 arch/powerpc/mm/fsl_booke_mmu.c      | 68 ++++++++++++++++++++++----------
 arch/powerpc/mm/mmu_decl.h           |  2 +-
 3 files changed, 118 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 0cbfe95..00cfb7e 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -84,6 +84,39 @@ _ENTRY(_start);
 	mr	r23,r3
 	mr	r25,r4
 
+	bl	0f
+0:	mflr	r8
+	addis	r3,r8,(is_second_reloc - 0b)@ha
+	lwz	r19,(is_second_reloc - 0b)@l(r3)
+
+	/* Check if this is the second relocation. */
+	cmpwi	r19,1
+	bne	1f
+
+	/*
+	 * For the second relocation, we already get the real memstart_addr
+	 * from device tree. So we will map PAGE_OFFSET to memstart_addr,
+	 * then the virtual address of start kernel should be:
+	 *          PAGE_OFFSET + (kernstart_addr - memstart_addr)
+	 * Since the offset between kernstart_addr and memstart_addr should
+	 * never be beyond 1G, so we can just use the lower 32bit of them
+	 * for the calculation.
+	 */
+	lis	r3,PAGE_OFFSET@h
+
+	addis	r4,r8,(kernstart_addr - 0b)@ha
+	addi	r4,r4,(kernstart_addr - 0b)@l
+	lwz	r5,4(r4)
+
+	addis	r6,r8,(memstart_addr - 0b)@ha
+	addi	r6,r6,(memstart_addr - 0b)@l
+	lwz	r7,4(r6)
+
+	subf	r5,r7,r5
+	add	r3,r3,r5
+	b	2f
+
+1:
 	/*
 	 * We have the runtime (virutal) address of our base.
 	 * We calculate our shift of offset from a 256M page.
@@ -97,7 +130,7 @@ _ENTRY(_start);
 	subf	r3,r5,r6			/* r3 = r6 - r5 */
 	add	r3,r4,r3			/* Required Virutal Address */
 
-	bl	relocate
+2:	bl	relocate
 #endif
 
 /* We try to not make any assumptions about how the boot loader
@@ -121,10 +154,19 @@ _ENTRY(_start);
 
 _ENTRY(__early_start)
 
+#ifdef CONFIG_RELOCATABLE
+	/*
+	 * For the second relocation, we already set the right tlb entries
+	 * for the kernel space, so skip the code in fsl_booke_entry_mapping.S
+	*/
+	cmpwi	r19,1
+	beq	set_ivor
+#endif
 #define ENTRY_MAPPING_BOOT_SETUP
 #include "fsl_booke_entry_mapping.S"
 #undef ENTRY_MAPPING_BOOT_SETUP
 
+set_ivor:
 	/* Establish the interrupt vector offsets */
 	SET_IVOR(0,  CriticalInput);
 	SET_IVOR(1,  MachineCheck);
@@ -210,11 +252,13 @@ _ENTRY(__early_start)
 	bl	early_init
 
 #ifdef CONFIG_RELOCATABLE
+	mr	r3,r30
+	mr	r4,r31
 #ifdef CONFIG_PHYS_64BIT
-	mr	r3,r23
-	mr	r4,r25
+	mr	r5,r23
+	mr	r6,r25
 #else
-	mr	r3,r25
+	mr	r5,r25
 #endif
 	bl	relocate_init
 #endif
@@ -1222,6 +1266,9 @@ _GLOBAL(switch_to_as1)
 /*
  * Restore to the address space 0 and also invalidate the tlb entry created
  * by switch_to_as1.
+ * r3 - the tlb entry which should be invalidated
+ * r4 - __pa(PAGE_OFFSET in AS0) - pa(PAGE_OFFSET in AS1)
+ * r5 - device tree virtual address
 */
 _GLOBAL(restore_to_as0)
 	mflr	r0
@@ -1230,7 +1277,15 @@ _GLOBAL(restore_to_as0)
 0:	mflr	r9
 	addi	r9,r9,1f - 0b
 
-	mfmsr	r7
+	/*
+	 * We may map the PAGE_OFFSET in AS0 to a different physical address,
+	 * so we need calculate the right jump and device tree address based
+	 * on the offset passed by r4.
+	*/
+	subf	r9,r4,r9
+	subf	r5,r4,r5
+
+2:	mfmsr	r7
 	li	r8,(MSR_IS | MSR_DS)
 	andc	r7,r7,r8
 
@@ -1249,9 +1304,19 @@ _GLOBAL(restore_to_as0)
 	mtspr	SPRN_MAS1,r9
 	tlbwe
 	isync
+
+	cmpwi	r4,0
+	bne	3f
 	mtlr	r0
 	blr
 
+	/*
+	 * The PAGE_OFFSET will map to a different physical address,
+	 * jump to _start to do another relocation again.
+	*/
+3:	mr	r3,r5
+	bl	_start
+
 /*
  * We put a few things here that have to be page-aligned. This stuff
  * goes at the beginning of the data segment, which is page-aligned.
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 8f60ef8..dd283fd 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
 
 	i = switch_to_as1();
 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
-	restore_to_as0(i);
+	restore_to_as0(i, 0, 0);
 
 	pr_info("Memory CAM mapping: ");
 	for (i = 0; i < tlbcam_index - 1; i++)
@@ -245,30 +245,56 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 }
 
 #ifdef CONFIG_RELOCATABLE
-notrace void __init relocate_init(phys_addr_t start)
+int __initdata is_second_reloc;
+notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 {
 	unsigned long base = KERNELBASE;
 
-	/*
-	 * Relocatable kernel support based on processing of dynamic
-	 * relocation entries.
-	 * Compute the virt_phys_offset :
-	 * virt_phys_offset = stext.run - kernstart_addr
-	 *
-	 * stext.run = (KERNELBASE & ~0xfffffff) + (kernstart_addr & 0xfffffff)
-	 * When we relocate, we have :
-	 *
-	 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
-	 *
-	 * hence:
-	 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
-	 *                              (kernstart_addr & ~0xfffffff)
-	 *
-	 */
 	kernstart_addr = start;
-	start &= ~0xfffffff;
-	base &= ~0xfffffff;
-	virt_phys_offset = base - start;
+	if (!is_second_reloc) {
+		phys_addr_t size;
+
+		/*
+		 * Relocatable kernel support based on processing of dynamic
+		 * relocation entries. Before we get the real memstart_addr,
+		 * We will compute the virt_phys_offset like this:
+		 * virt_phys_offset = stext.run - kernstart_addr
+		 *
+		 * stext.run = (KERNELBASE & ~0xfffffff) +
+		 *				(kernstart_addr & 0xfffffff)
+		 * When we relocate, we have :
+		 *
+		 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
+		 *
+		 * hence:
+		 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
+		 *                              (kernstart_addr & ~0xfffffff)
+		 *
+		 */
+		start &= ~0xfffffff;
+		base &= ~0xfffffff;
+		virt_phys_offset = base - start;
+		early_get_first_memblock_info(__va(dt_ptr), &size);
+		/*
+		 * We now get the memstart_addr, then we should check if this
+		 * address is the same as what the PAGE_OFFSET map to now. If
+		 * not we have to change the map of PAGE_OFFSET to memstart_addr
+		 * and do a second relocation.
+		 */
+		if (start != memstart_addr) {
+			unsigned long ram;
+			int n, offset = memstart_addr - start;
+
+			is_second_reloc = 1;
+			ram = size;
+			n = switch_to_as1();
+			map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
+			restore_to_as0(n, offset, __va(dt_ptr));
+			/* We should never reach here */
+			panic("Relocation error");
+		}
+	} else
+		virt_phys_offset = PAGE_OFFSET - memstart_addr;
 }
 #endif
 #endif
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 3a65644..8280dbb 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -149,7 +149,7 @@ extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(unsigned long top);
 extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
-extern void restore_to_as0(int);
+extern void restore_to_as0(int, int, void *);
 #endif
 extern void loadcam_entry(unsigned int index);
 
-- 
1.8.1.4


* [PATCH v2 8/8] powerpc/fsl_booke: enable the relocatable for the kdump kernel
  2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (6 preceding siblings ...)
  2013-07-04 12:54 ` [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
@ 2013-07-04 12:54 ` Kevin Hao
  7 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-07-04 12:54 UTC (permalink / raw)
  To: Kumar Gala; +Cc: Scott Wood, linuxppc

RELOCATABLE is more flexible and has no alignment restriction, and it
is a superset of DYNAMIC_MEMSTART. So use it by default for a kdump
kernel.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
A new patch in v2.

 arch/powerpc/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5b2e115..885bf06 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -375,8 +375,7 @@ config KEXEC
 config CRASH_DUMP
 	bool "Build a kdump crash kernel"
 	depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
-	select RELOCATABLE if PPC64 || 44x
-	select DYNAMIC_MEMSTART if FSL_BOOKE
+	select RELOCATABLE if PPC64 || 44x || FSL_BOOKE
 	help
 	  Build a kernel suitable for use as a kdump capture kernel.
 	  The same kernel binary can be used as production kernel and dump
-- 
1.8.1.4


* Re: [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS
  2013-07-04 12:54 ` [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS Kevin Hao
@ 2013-07-26 23:14   ` Scott Wood
  2013-08-04  0:30     ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-07-26 23:14 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On 07/04/2013 07:54:07 AM, Kevin Hao wrote:
> diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> index a92c79b..2201f84 100644
> --- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> +++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> @@ -88,9 +88,11 @@ skpinv:	addi	r6,r6,1
> /* Increment */
>  1:	mflr	r7
>
>  	mfspr	r8,SPRN_MAS3
> -#ifdef CONFIG_PHYS_64BIT
> +BEGIN_MMU_FTR_SECTION
>  	mfspr	r23,SPRN_MAS7
> -#endif
> +MMU_FTR_SECTION_ELSE
> +	li	r23,0
> +ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
>  	and	r8,r6,r8
>  	subfic	r9,r6,-4096
>  	and	r9,r9,r7
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S
> b/arch/powerpc/kernel/head_fsl_booke.S
> index d10a7ca..a04a48d 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -82,7 +82,11 @@ _ENTRY(_start);
>  	and	r19,r3,r18		/* r19 = page offset */
>  	andc	r31,r20,r18		/* r31 = page base */
>  	or	r31,r31,r19		/* r31 = devtree phys addr */
> +BEGIN_MMU_FTR_SECTION
>  	mfspr	r30,SPRN_MAS7
> +MMU_FTR_SECTION_ELSE
> +	li	r30,0
> +ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)

Code patching hasn't been done yet at this point.

-Scott


* Re: [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-07-04 12:54 ` [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
@ 2013-07-26 23:28   ` Scott Wood
  2013-08-04  0:38     ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-07-26 23:28 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On 07/04/2013 07:54:09 AM, Kevin Hao wrote:
> This is based on the codes in the head_44x.S. Since we always align to
> 256M before mapping the PAGE_OFFSET for a relocatable kernel, we also
> change the init tlb map to 256M size.
>
> Signed-off-by: Kevin Hao <haokexin@gmail.com>
> ---
> v2: Move the code to set kernstart_addr and virt_phys_offset to a c
> function.
>     So we can expand it easily later.
>
> Hi Scott,
>
> I still use the 256M align for the init tlb as in v1 for the
> following reasons:
>   * This should be the most possible case in reality.

There is no "most possible case".  It's either possible (and supported)
or not.  And having less than 256M is definitely possible.  The 8540
reference board has 64M.

AMP scenarios that start on a 64M-aligned but not 256M-aligned address
are also something I've done.

>   * This is just for very early booting code and should not be a big
> issue
>     if the first tlb entry shrink to a less size later.

"We can probably get away with it most of the time" is not a very good
justification.  What's wrong with the suggestion I made last time, of
basing the size on the alignment of the address?
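
One way to read that suggestion (a sketch only, not code from the
thread; phys_start stands for the kernel's physical load address):
derive the initial TLB1 entry size from the load address's own
alignment rather than hard-coding 256M, e.g.

	/* alignment of the physical load address = its lowest set bit,
	 * capped at 256M; a load address of 0 would need special-casing */
	u64 align = phys_start & -phys_start;
	u64 tlb1_size = align < 0x10000000 ? align : 0x10000000;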

> +	/*
> +	 * We have the runtime (virutal) address of our base.
> +	 * We calculate our shift of offset from a 256M page.
> +	 * We could map the 256M page we belong to at PAGE_OFFSET and
> +	 * get going from there.
> +	 */
> +	lis	r4,KERNELBASE@h
> +	ori	r4,r4,KERNELBASE@l
> +	rlwinm	r6,r25,0,0xfffffff		/* r6 = PHYS_START %
> 256M */
> +	rlwinm	r5,r4,0,0xfffffff		/* r5 = KERNELBASE %
> 256M */
> +	subf	r3,r5,r6			/* r3 = r6 - r5 */
> +	add	r3,r4,r3			/* Required Virutal
> Address */

s/Virutal/Virtual/

-Scott


* Re: [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  2013-07-04 12:54 ` [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
@ 2013-07-26 23:37   ` Scott Wood
  2013-08-04  0:42     ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-07-26 23:37 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On 07/04/2013 07:54:10 AM, Kevin Hao wrote:
> We use the tlb1 entries to map low mem to the kernel space. In the
> current code, it assumes that the first tlb entry would cover the
> kernel image. But this is not true for some special cases, such as
> when we run a relocatable kernel above the 256M or set
> CONFIG_KERNEL_START above 256M. So we choose to switch to address
> space 1 before setting these tlb entries.

If you're doing this, then I see even less reason to use such a large
boot TLB1 entry.

> Signed-off-by: Kevin Hao <haokexin@gmail.com>
> ---
> A new patch in v2.
>
>  arch/powerpc/kernel/head_fsl_booke.S | 81
> ++++++++++++++++++++++++++++++++++++
>  arch/powerpc/mm/fsl_booke_mmu.c      |  2 +
>  arch/powerpc/mm/mmu_decl.h           |  2 +
>  3 files changed, 85 insertions(+)
>
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S
> b/arch/powerpc/kernel/head_fsl_booke.S
> index 134064d..0cbfe95 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1172,6 +1172,87 @@ __secondary_hold_acknowledge:
>  #endif
>
>  /*
> + * Create a tbl entry

s/tbl/tlb/

> diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> b/arch/powerpc/mm/fsl_booke_mmu.c
> index 5fe271c..8f60ef8 100644
> --- a/arch/powerpc/mm/fsl_booke_mmu.c
> +++ b/arch/powerpc/mm/fsl_booke_mmu.c
> @@ -222,7 +222,9 @@ void __init adjust_total_lowmem(void)
>  	/* adjust lowmem size to __max_low_memory */
>  	ram = min((phys_addr_t)__max_low_memory,
> (phys_addr_t)total_lowmem);
>
> +	i = switch_to_as1();
>  	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> +	restore_to_as0(i);

Wouldn't it be simpler to just run out of AS1 from the end of
fsl_booke_entry_mapping.S, similar to what U-Boot does?  With ESEL
being changed to something non-conflicting, of course.

-Scott


* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-07-04 12:54 ` [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
@ 2013-07-27  0:17   ` Scott Wood
  2013-08-04  0:50     ` Kevin Hao
  2013-08-06  0:14   ` Scott Wood
  1 sibling, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-07-27  0:17 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On 07/04/2013 07:54:13 AM, Kevin Hao wrote:
> @@ -1222,6 +1266,9 @@ _GLOBAL(switch_to_as1)
>  /*
>   * Restore to the address space 0 and also invalidate the tlb entry
> created
>   * by switch_to_as1.
> + * r3 - the tlb entry which should be invalidated
> + * r4 - __pa(PAGE_OFFSET in AS0) - pa(PAGE_OFFSET in AS1)
> + * r5 - device tree virtual address
>  */
>  _GLOBAL(restore_to_as0)
>  	mflr	r0
> @@ -1230,7 +1277,15 @@ _GLOBAL(restore_to_as0)
>  0:	mflr	r9
>  	addi	r9,r9,1f - 0b
>
> -	mfmsr	r7
> +	/*
> +	 * We may map the PAGE_OFFSET in AS0 to a different physical
> address,
> +	 * so we need calculate the right jump and device tree address
> based
> +	 * on the offset passed by r4.
> +	*/

Whitespace

> +	subf	r9,r4,r9
> +	subf	r5,r4,r5
> +
> +2:	mfmsr	r7
>  	li	r8,(MSR_IS | MSR_DS)
>  	andc	r7,r7,r8
>
> @@ -1249,9 +1304,19 @@ _GLOBAL(restore_to_as0)
>  	mtspr	SPRN_MAS1,r9
>  	tlbwe
>  	isync
> +
> +	cmpwi	r4,0
> +	bne	3f
>  	mtlr	r0
>  	blr
>
> +	/*
> +	 * The PAGE_OFFSET will map to a different physical address,
> +	 * jump to _start to do another relocation again.
> +	*/
> +3:	mr	r3,r5
> +	bl	_start
> +
>  /*
>   * We put a few things here that have to be page-aligned. This stuff
>   * goes at the beginning of the data segment, which is page-aligned.
> diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> b/arch/powerpc/mm/fsl_booke_mmu.c
> index 8f60ef8..dd283fd 100644
> --- a/arch/powerpc/mm/fsl_booke_mmu.c
> +++ b/arch/powerpc/mm/fsl_booke_mmu.c
> @@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
>
>  	i = switch_to_as1();
>  	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> -	restore_to_as0(i);
> +	restore_to_as0(i, 0, 0);

The device tree virtual address is zero?

>  	pr_info("Memory CAM mapping: ");
>  	for (i = 0; i < tlbcam_index - 1; i++)
> @@ -245,30 +245,56 @@ void setup_initial_memory_limit(phys_addr_t
> first_memblock_base,
>  }
>
>  #ifdef CONFIG_RELOCATABLE
> -notrace void __init relocate_init(phys_addr_t start)
> +int __initdata is_second_reloc;
> +notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>  {
>  	unsigned long base = KERNELBASE;
>
> -	/*
> -	 * Relocatable kernel support based on processing of dynamic
> -	 * relocation entries.
> -	 * Compute the virt_phys_offset :
> -	 * virt_phys_offset = stext.run - kernstart_addr
> -	 *
> -	 * stext.run = (KERNELBASE & ~0xfffffff) + (kernstart_addr &
> 0xfffffff)
> -	 * When we relocate, we have :
> -	 *
> -	 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
> -	 *
> -	 * hence:
> -	 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
> -	 *                              (kernstart_addr & ~0xfffffff)
> -	 *
> -	 */
>  	kernstart_addr = start;
> -	start &= ~0xfffffff;
> -	base &= ~0xfffffff;
> -	virt_phys_offset = base - start;
> +	if (!is_second_reloc) {

Since it's at the end of a function and one side is much shorter than the
other, please do:

	if (is_second_reloc) {
		virt_phys_offset = PAGE_OFFSET - memstart_addr;
		return;
	}

	/* the rest of the code goes here without having to indent everything */

Otherwise, please use positive logic for if/else constructs.

> +		phys_addr_t size;
> +
> +		/*
> +		 * Relocatable kernel support based on processing of
> dynamic
> +		 * relocation entries. Before we get the real
> memstart_addr,
> +		 * We will compute the virt_phys_offset like this:
> +		 * virt_phys_offset = stext.run - kernstart_addr
> +		 *
> +		 * stext.run = (KERNELBASE & ~0xfffffff) +
> +		 *				(kernstart_addr &
> 0xfffffff)
> +		 * When we relocate, we have :
> +		 *
> +		 *	(kernstart_addr & 0xfffffff) = (stext.run &
> 0xfffffff)
> +		 *
> +		 * hence:
> +		 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
> +		 *                              (kernstart_addr &
> ~0xfffffff)
> +		 *
> +		 */
> +		start &= ~0xfffffff;
> +		base &= ~0xfffffff;
> +		virt_phys_offset = base - start;
> +		early_get_first_memblock_info(__va(dt_ptr), &size);
> +		/*
> +		 * We now get the memstart_addr, then we should check
> if this
> +		 * address is the same as what the PAGE_OFFSET map to
> now. If
> +		 * not we have to change the map of PAGE_OFFSET to
> memstart_addr
> +		 * and do a second relocation.
> +		 */
> +		if (start != memstart_addr) {
> +			unsigned long ram;
> +			int n, offset = memstart_addr - start;
> +
> +			is_second_reloc = 1;
> +			ram = size;
> +			n = switch_to_as1();
> +			map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);

Do we really need this much RAM mapped at this point?  Why can't we
continue with the same size TLB entry that we've been using, until the
second relocation?

> +			restore_to_as0(n, offset, __va(dt_ptr));
> +			/* We should never reach here */
> +			panic("Relocation error");

Where is execution supposed to resume?  It looks like you're expecting it
to resume from _start, but why?  And where is this effect of
restore_to_as0() documented?

-Scott


* Re: [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info
  2013-07-04 12:54 ` [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info Kevin Hao
@ 2013-07-27  0:18   ` Scott Wood
  2013-08-04  0:45     ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-07-27  0:18 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On 07/04/2013 07:54:12 AM, Kevin Hao wrote:
> For a relocatable kernel since it can be loaded at any place, there
> is no any relation between the kernel start addr and the
> memstart_addr.
> So we can't calculate the memstart_addr from kernel start addr. And
> also we can't wait to do the relocation after we get the real
> memstart_addr from device tree because it is so late. So introduce
> a new function we can use to get the first memblock address and size
> in a very early stage (before machine_init).
>
> Signed-off-by: Kevin Hao <haokexin@gmail.com>
> ---
> A new patch in v2.
>
>  arch/powerpc/kernel/prom.c | 24 ++++++++++++++++++++++++
>  include/linux/of_fdt.h     |  1 +
>  2 files changed, 25 insertions(+)
>
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index eb23ac9..9a69d2d 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -753,6 +753,30 @@ void __init early_init_devtree(void *params)
>  	DBG(" <- early_init_devtree()\n");
>  }
>
> +#ifdef CONFIG_RELOCATABLE
> +/*
> + * This function run before early_init_devtree, so we have to init
> + * initial_boot_params. Since early_init_dt_scan_memory_ppc will be
> + * executed again in early_init_devtree, we have to reinitialize the
> + * memblock data before return.
> + */
> +void __init early_get_first_memblock_info(void *params, phys_addr_t
> *size)
> +{
> +	/* Setup flat device-tree pointer */
> +	initial_boot_params = params;
> +
> +	/* Scan memory nodes and rebuild MEMBLOCKs */
> +	of_scan_flat_dt(early_init_dt_scan_root, NULL);
> +	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
> +
> +	if (size)
> +		*size = first_memblock_size;
> +
> +	/* Undo what early_init_dt_scan_memory_ppc does to memblock */
> +	memblock_reinit();
> +}
> +#endif

Wouldn't it be simpler to set a flag so that
early_init_dt_add_memory_arch() doesn't mess with memblocks on the
first pass?

-Scott


* Re: [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS
  2013-07-26 23:14   ` Scott Wood
@ 2013-08-04  0:30     ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-04  0:30 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc


On Fri, Jul 26, 2013 at 06:14:00PM -0500, Scott Wood wrote:
> On 07/04/2013 07:54:07 AM, Kevin Hao wrote:
> >diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> >b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> >index a92c79b..2201f84 100644
> >--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> >+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> >@@ -88,9 +88,11 @@ skpinv:	addi	r6,r6,1				/* Increment */
> > 1:	mflr	r7
> >
> > 	mfspr	r8,SPRN_MAS3
> >-#ifdef CONFIG_PHYS_64BIT
> >+BEGIN_MMU_FTR_SECTION
> > 	mfspr	r23,SPRN_MAS7
> >-#endif
> >+MMU_FTR_SECTION_ELSE
> >+	li	r23,0
> >+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
> > 	and	r8,r6,r8
> > 	subfic	r9,r6,-4096
> > 	and	r9,r9,r7
> >diff --git a/arch/powerpc/kernel/head_fsl_booke.S
> >b/arch/powerpc/kernel/head_fsl_booke.S
> >index d10a7ca..a04a48d 100644
> >--- a/arch/powerpc/kernel/head_fsl_booke.S
> >+++ b/arch/powerpc/kernel/head_fsl_booke.S
> >@@ -82,7 +82,11 @@ _ENTRY(_start);
> > 	and	r19,r3,r18		/* r19 = page offset */
> > 	andc	r31,r20,r18		/* r31 = page base */
> > 	or	r31,r31,r19		/* r31 = devtree phys addr */
> >+BEGIN_MMU_FTR_SECTION
> > 	mfspr	r30,SPRN_MAS7
> >+MMU_FTR_SECTION_ELSE
> >+	li	r30,0
> >+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_BIG_PHYS)
> 
> Code patching hasn't been done yet at this point.

Indeed. I overlooked this. I will change it to #ifdef CONFIG_PHYS_64BIT.

Thanks,
Kevin

> 
> -Scott



* Re: [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-07-26 23:28   ` Scott Wood
@ 2013-08-04  0:38     ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-04  0:38 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc


On Fri, Jul 26, 2013 at 06:28:46PM -0500, Scott Wood wrote:
> On 07/04/2013 07:54:09 AM, Kevin Hao wrote:
> >This is based on the codes in the head_44x.S. Since we always align to
> >256M before mapping the PAGE_OFFSET for a relocatable kernel, we also
> >change the init tlb map to 256M size.
> >
> >Signed-off-by: Kevin Hao <haokexin@gmail.com>
> >---
> >v2: Move the code to set kernstart_addr and virt_phys_offset to a
> >c function.
> >    So we can expand it easily later.
> >
> >Hi Scott,
> >
> >I still use the 256M align for the init tlb as in v1 for the
> >following reasons:
> >  * This should be the most possible case in reality.
> 
> There is no "most possible case".  It's either possible (and
> supported) or not.  And having less than 256M is definitely
> possible.  The 8540 reference board has 64M.
> 
> AMP scenarios that start on a 64M-aligned but not 256M-aligned
> address are also something I've done.
> 
> >  * This is just for very early booting code and should not be a
> >big issue
> >    if the first tlb entry shrink to a less size later.
> 
> "We can probably get away with it most of the time" is not a very
> good justification.  What's wrong with the suggestion I made last
> time, of basing the size on the alignment of the address?

OK, I will use the 64M align.

> 
> >+	/*
> >+	 * We have the runtime (virutal) address of our base.
> >+	 * We calculate our shift of offset from a 256M page.
> >+	 * We could map the 256M page we belong to at PAGE_OFFSET and
> >+	 * get going from there.
> >+	 */
> >+	lis	r4,KERNELBASE@h
> >+	ori	r4,r4,KERNELBASE@l
> >+	rlwinm	r6,r25,0,0xfffffff		/* r6 = PHYS_START % 256M */
> >+	rlwinm	r5,r4,0,0xfffffff		/* r5 = KERNELBASE % 256M */
> >+	subf	r3,r5,r6			/* r3 = r6 - r5 */
> >+	add	r3,r4,r3			/* Required Virutal Address */
> 
> s/Virutal/Virtual/

Fixed.

Thanks,
Kevin
> 
> -Scott



* Re: [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  2013-07-26 23:37   ` Scott Wood
@ 2013-08-04  0:42     ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-04  0:42 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc


On Fri, Jul 26, 2013 at 06:37:10PM -0500, Scott Wood wrote:
> On 07/04/2013 07:54:10 AM, Kevin Hao wrote:
> >--- a/arch/powerpc/kernel/head_fsl_booke.S

<snip>

> >+++ b/arch/powerpc/kernel/head_fsl_booke.S
> >@@ -1172,6 +1172,87 @@ __secondary_hold_acknowledge:
> > #endif
> >
> > /*
> >+ * Create a tbl entry
> 
> s/tbl/tlb/

Fixed.

> 
> >diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> >b/arch/powerpc/mm/fsl_booke_mmu.c
> >index 5fe271c..8f60ef8 100644
> >--- a/arch/powerpc/mm/fsl_booke_mmu.c
> >+++ b/arch/powerpc/mm/fsl_booke_mmu.c
> >@@ -222,7 +222,9 @@ void __init adjust_total_lowmem(void)
> > 	/* adjust lowmem size to __max_low_memory */
> > 	ram = min((phys_addr_t)__max_low_memory,
> >(phys_addr_t)total_lowmem);
> >
> >+	i = switch_to_as1();
> > 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> >+	restore_to_as0(i);
> 
> Wouldn't it be simpler to just run out of AS1 from the end of
> fsl_booke_entry_mapping.S, similar to what U-Boot does?  With ESEL
> being changed to something non-conflicting, of course.

This pair of functions will be used by the code in a following patch.

Thanks,
Kevin
> 
> -Scott


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info
  2013-07-27  0:18   ` Scott Wood
@ 2013-08-04  0:45     ` Kevin Hao
  2013-08-05 23:59       ` Scott Wood
  0 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-08-04  0:45 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 1864 bytes --]

On Fri, Jul 26, 2013 at 07:18:01PM -0500, Scott Wood wrote:
> >+ * This function run before early_init_devtree, so we have to init
> >+ * initial_boot_params. Since early_init_dt_scan_memory_ppc will be
> >+ * executed again in early_init_devtree, we have to reinitialize the
> >+ * memblock data before return.
> >+ */
> >+void __init early_get_first_memblock_info(void *params,
> >phys_addr_t *size)
> >+{
> >+	/* Setup flat device-tree pointer */
> >+	initial_boot_params = params;
> >+
> >+	/* Scan memory nodes and rebuild MEMBLOCKs */
> >+	of_scan_flat_dt(early_init_dt_scan_root, NULL);
> >+	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
> >+
> >+	if (size)
> >+		*size = first_memblock_size;
> >+
> >+	/* Undo what early_init_dt_scan_memory_ppc does to memblock */
> >+	memblock_reinit();
> >+}
> >+#endif
> 
> Wouldn't it be simpler to set a flag so that
> early_init_dt_add_memory_arch() doesn't mess with memblocks on the
> first pass?

Do you mean something like this?

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9a69d2d..e861394 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -523,6 +523,10 @@ static int __init early_init_dt_scan_memory_ppc(unsigned long node,
 
 void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 {
+#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FSL_BOOKE)
+	static int first_time = 1;
+#endif
+
 #ifdef CONFIG_PPC64
 	if (iommu_is_off) {
 		if (base >= 0x80000000ul)
@@ -541,6 +545,13 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 	}
 
 	/* Add the chunk to the MEMBLOCK list */
+
+#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FSL_BOOKE)
+	if (first_time) {
+		first_time = 0;
+		return;
+	}
+#endif
 	memblock_add(base, size);
 }

Thanks,
Kevin
> 
> -Scott

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-07-27  0:17   ` Scott Wood
@ 2013-08-04  0:50     ` Kevin Hao
  2013-08-06  0:10       ` Scott Wood
  0 siblings, 1 reply; 25+ messages in thread
From: Kevin Hao @ 2013-08-04  0:50 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 6088 bytes --]

On Fri, Jul 26, 2013 at 07:17:57PM -0500, Scott Wood wrote:
> On 07/04/2013 07:54:13 AM, Kevin Hao wrote:
> >@@ -1222,6 +1266,9 @@ _GLOBAL(switch_to_as1)
> > /*
> >  * Restore to the address space 0 and also invalidate the tlb
> >entry created
> >  * by switch_to_as1.
> >+ * r3 - the tlb entry which should be invalidated
> >+ * r4 - __pa(PAGE_OFFSET in AS0) - pa(PAGE_OFFSET in AS1)
> >+ * r5 - device tree virtual address
> > */
> > _GLOBAL(restore_to_as0)
> > 	mflr	r0
> >@@ -1230,7 +1277,15 @@ _GLOBAL(restore_to_as0)
> > 0:	mflr	r9
> > 	addi	r9,r9,1f - 0b
> >
> >-	mfmsr	r7
> >+	/*
> >+	 * We may map the PAGE_OFFSET in AS0 to a different physical
> >address,
> >+	 * so we need calculate the right jump and device tree address
> >based
> >+	 * on the offset passed by r4.
> >+	*/
> 
> Whitespace

Fixed.

> 
> >+	subf	r9,r4,r9
> >+	subf	r5,r4,r5
> >+
> >+2:	mfmsr	r7
> > 	li	r8,(MSR_IS | MSR_DS)
> > 	andc	r7,r7,r8
> >
> >@@ -1249,9 +1304,19 @@ _GLOBAL(restore_to_as0)
> > 	mtspr	SPRN_MAS1,r9
> > 	tlbwe
> > 	isync
> >+
> >+	cmpwi	r4,0
> >+	bne	3f
> > 	mtlr	r0
> > 	blr
> >
> >+	/*
> >+	 * The PAGE_OFFSET will map to a different physical address,
> >+	 * jump to _start to do another relocation again.
> >+	*/
> >+3:	mr	r3,r5
> >+	bl	_start
> >+
> > /*
> >  * We put a few things here that have to be page-aligned. This stuff
> >  * goes at the beginning of the data segment, which is page-aligned.
> >diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> >b/arch/powerpc/mm/fsl_booke_mmu.c
> >index 8f60ef8..dd283fd 100644
> >--- a/arch/powerpc/mm/fsl_booke_mmu.c
> >+++ b/arch/powerpc/mm/fsl_booke_mmu.c
> >@@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
> >
> > 	i = switch_to_as1();
> > 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> >-	restore_to_as0(i);
> >+	restore_to_as0(i, 0, 0);
> 
> The device tree virtual address is zero?

No. But if the __pa(PAGE_OFFSET in AS0) is equal to __pa(PAGE_OFFSET in AS1),
that means we don't need to do another relocation, and the device tree virtual
address is not used in this case.

> 
> > 	pr_info("Memory CAM mapping: ");
> > 	for (i = 0; i < tlbcam_index - 1; i++)
> >@@ -245,30 +245,56 @@ void setup_initial_memory_limit(phys_addr_t
> >first_memblock_base,
> > }
> >
> > #ifdef CONFIG_RELOCATABLE
> >-notrace void __init relocate_init(phys_addr_t start)
> >+int __initdata is_second_reloc;
> >+notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
> > {
> > 	unsigned long base = KERNELBASE;
> >
> >-	/*
> >-	 * Relocatable kernel support based on processing of dynamic
> >-	 * relocation entries.
> >-	 * Compute the virt_phys_offset :
> >-	 * virt_phys_offset = stext.run - kernstart_addr
> >-	 *
> >-	 * stext.run = (KERNELBASE & ~0xfffffff) + (kernstart_addr &
> >0xfffffff)
> >-	 * When we relocate, we have :
> >-	 *
> >-	 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
> >-	 *
> >-	 * hence:
> >-	 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
> >-	 *                              (kernstart_addr & ~0xfffffff)
> >-	 *
> >-	 */
> > 	kernstart_addr = start;
> >-	start &= ~0xfffffff;
> >-	base &= ~0xfffffff;
> >-	virt_phys_offset = base - start;
> >+	if (!is_second_reloc) {
> 
> Since it's at the end of a function and one side is much shorter
> than the
> other, please do:
> 
> 	if (is_second_reloc) {
> 		virt_phys_offset = PAGE_OFFSET - memstart_addr;
> 		return;
> 	}
> 
> 	/* the rest of the code goes here without having to indent
> everything */
> 

Yes, this looks much better. Changed.

> Otherwise, please use positive logic for if/else constructs.
> 
> >+		phys_addr_t size;
> >+
> >+		/*
> >+		 * Relocatable kernel support based on processing of dynamic
> >+		 * relocation entries. Before we get the real memstart_addr,
> >+		 * We will compute the virt_phys_offset like this:
> >+		 * virt_phys_offset = stext.run - kernstart_addr
> >+		 *
> >+		 * stext.run = (KERNELBASE & ~0xfffffff) +
> >+		 *				(kernstart_addr & 0xfffffff)
> >+		 * When we relocate, we have :
> >+		 *
> >+		 *	(kernstart_addr & 0xfffffff) = (stext.run & 0xfffffff)
> >+		 *
> >+		 * hence:
> >+		 *  virt_phys_offset = (KERNELBASE & ~0xfffffff) -
> >+		 *                              (kernstart_addr & ~0xfffffff)
> >+		 *
> >+		 */
> >+		start &= ~0xfffffff;
> >+		base &= ~0xfffffff;
> >+		virt_phys_offset = base - start;
> >+		early_get_first_memblock_info(__va(dt_ptr), &size);
> >+		/*
> >+		 * We now get the memstart_addr, then we should check if this
> >+		 * address is the same as what the PAGE_OFFSET map to now. If
> >+		 * not we have to change the map of PAGE_OFFSET to
> >memstart_addr
> >+		 * and do a second relocation.
> >+		 */
> >+		if (start != memstart_addr) {
> >+			unsigned long ram;
> >+			int n, offset = memstart_addr - start;
> >+
> >+			is_second_reloc = 1;
> >+			ram = size;
> >+			n = switch_to_as1();
> >+			map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> 
> Do we really need this much RAM mapped at this point?

Not really.

>  Why can't we
> continue
> with the same size TLB entry that we've been using, until the second
> relocation?

OK, I will change it to use 64M.

> 
> >+			restore_to_as0(n, offset, __va(dt_ptr));
> >+			/* We should never reach here */
> >+			panic("Relocation error");
> 
> Where is execution supposed to resume?  It looks like you're
> expecting it
> to resume from _start,

Yes.

> but why?

For the second relocation, we need to:
  * do the real relocation
  * set the interrupt vector
  * zero the BSS

So starting from _start avoids duplicating this code.

>  And where is this effect of
> restore_to_as0() documented?

There is a comment about this in restore_to_as0.

        /*   
         * The PAGE_OFFSET will map to a different physical address,
         * jump to _start to do another relocation again.
        */
3:      mr      r3,r5
        bl      _start

Thanks,
Kevin
> 
> -Scott

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info
  2013-08-04  0:45     ` Kevin Hao
@ 2013-08-05 23:59       ` Scott Wood
  2013-08-06  1:21         ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-08-05 23:59 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Sun, 2013-08-04 at 08:45 +0800, Kevin Hao wrote:
> On Fri, Jul 26, 2013 at 07:18:01PM -0500, Scott Wood wrote:
> > >+ * This function run before early_init_devtree, so we have to init
> > >+ * initial_boot_params. Since early_init_dt_scan_memory_ppc will be
> > >+ * executed again in early_init_devtree, we have to reinitialize the
> > >+ * memblock data before return.
> > >+ */
> > >+void __init early_get_first_memblock_info(void *params,
> > >phys_addr_t *size)
> > >+{
> > >+	/* Setup flat device-tree pointer */
> > >+	initial_boot_params = params;
> > >+
> > >+	/* Scan memory nodes and rebuild MEMBLOCKs */
> > >+	of_scan_flat_dt(early_init_dt_scan_root, NULL);
> > >+	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
> > >+
> > >+	if (size)
> > >+		*size = first_memblock_size;
> > >+
> > >+	/* Undo what early_init_dt_scan_memory_ppc does to memblock */
> > >+	memblock_reinit();
> > >+}
> > >+#endif
> > 
> > Wouldn't it be simpler to set a flag so that
> > early_init_dt_add_memory_arch() doesn't mess with memblocks on the
> > first pass?
> 
> Do you mean something like this?
> 
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 9a69d2d..e861394 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -523,6 +523,10 @@ static int __init early_init_dt_scan_memory_ppc(unsigned long node,
>  
>  void __init early_init_dt_add_memory_arch(u64 base, u64 size)
>  {
> +#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FSL_BOOKE)
> +	static int first_time = 1;
> +#endif
> +
>  #ifdef CONFIG_PPC64
>  	if (iommu_is_off) {
>  		if (base >= 0x80000000ul)
> @@ -541,6 +545,13 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
>  	}
>  
>  	/* Add the chunk to the MEMBLOCK list */
> +
> +#if defined(CONFIG_RELOCATABLE) && defined(CONFIG_FSL_BOOKE)
> +	if (first_time) {
> +		first_time = 0;
> +		return;
> +	}
> +#endif
>  	memblock_add(base, size);
>  }

I think it'd be clearer for it to be an external variable that gets set
by the relocation code -- plus, the above wouldn't work if this gets
called twice due to having multiple memory regions.
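
A minimal sketch of that flag-based approach (the variable name and default are
only illustrative):

/* Cleared by the relocation code around the early device-tree scan so
 * that memory nodes are not added to memblock on that first pass. */
int __initdata add_mem_to_memblock = 1;

void __init early_init_dt_add_memory_arch(u64 base, u64 size)
{
	/* ... existing PPC64 iommu_is_off clamping stays as-is ... */

	/* Add the chunk to the MEMBLOCK list, unless this is the early
	 * first-pass scan */
	if (add_mem_to_memblock)
		memblock_add(base, size);
}

Since the flag stays clear for the whole early scan, this also works when the
device tree carries more than one memory region.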

-Scott

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-08-04  0:50     ` Kevin Hao
@ 2013-08-06  0:10       ` Scott Wood
  2013-08-06  1:23         ` Kevin Hao
  0 siblings, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-08-06  0:10 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Sun, 2013-08-04 at 08:50 +0800, Kevin Hao wrote:
> On Fri, Jul 26, 2013 at 07:17:57PM -0500, Scott Wood wrote:
> > >diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> > >b/arch/powerpc/mm/fsl_booke_mmu.c
> > >index 8f60ef8..dd283fd 100644
> > >--- a/arch/powerpc/mm/fsl_booke_mmu.c
> > >+++ b/arch/powerpc/mm/fsl_booke_mmu.c
> > >@@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
> > >
> > > 	i = switch_to_as1();
> > > 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> > >-	restore_to_as0(i);
> > >+	restore_to_as0(i, 0, 0);
> > 
> > The device tree virtual address is zero?
> 
> No. But if the __pa(PAGE_OFFSET in AS0) is equal to __pa(PAGE_OFFSET in AS1),
> that means we don't need to do another relocation, and the device tree virtual
> address is not used in this case.

The documentation of restore_to_as0() should make it clear that r5 is
ignored if r4 is zero.

-Scott

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-07-04 12:54 ` [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
  2013-07-27  0:17   ` Scott Wood
@ 2013-08-06  0:14   ` Scott Wood
  2013-08-06  1:45     ` Kevin Hao
  1 sibling, 1 reply; 25+ messages in thread
From: Scott Wood @ 2013-08-06  0:14 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Thu, 2013-07-04 at 20:54 +0800, Kevin Hao wrote:
> @@ -1222,6 +1266,9 @@ _GLOBAL(switch_to_as1)
>  /*
>   * Restore to the address space 0 and also invalidate the tlb entry created
>   * by switch_to_as1.
> + * r3 - the tlb entry which should be invalidated
> + * r4 - __pa(PAGE_OFFSET in AS0) - pa(PAGE_OFFSET in AS1)
> + * r5 - device tree virtual address
>  */
>  _GLOBAL(restore_to_as0)
[snip]
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index 3a65644..8280dbb 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -149,7 +149,7 @@ extern void MMU_init_hw(void);
>  extern unsigned long mmu_mapin_ram(unsigned long top);
>  extern void adjust_total_lowmem(void);
>  extern int switch_to_as1(void);
> -extern void restore_to_as0(int);
> +extern void restore_to_as0(int, int, void *);

"int" seems wrong for the second argument.  Shouldn't it be phys_addr_t?

Also, please use parameter names.

-Scott

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info
  2013-08-05 23:59       ` Scott Wood
@ 2013-08-06  1:21         ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-06  1:21 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 430 bytes --]

On Mon, Aug 05, 2013 at 06:59:28PM -0500, Scott Wood wrote:
> On Sun, 2013-08-04 at 08:45 +0800, Kevin Hao wrote:
> >  	memblock_add(base, size);

<snip>

> >  }
> 
> I think it'd be clearer for it to be an external variable that gets set
> by the relocation code -- plus, the above wouldn't work if this gets
> called twice due to having multiple memory regions.

Got it.

Thanks,
Kevin
> 
> -Scott
> 
> 
> 

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-08-06  0:10       ` Scott Wood
@ 2013-08-06  1:23         ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-06  1:23 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 1081 bytes --]

On Mon, Aug 05, 2013 at 07:10:06PM -0500, Scott Wood wrote:
> On Sun, 2013-08-04 at 08:50 +0800, Kevin Hao wrote:
> > On Fri, Jul 26, 2013 at 07:17:57PM -0500, Scott Wood wrote:
> > > >diff --git a/arch/powerpc/mm/fsl_booke_mmu.c
> > > >b/arch/powerpc/mm/fsl_booke_mmu.c
> > > >index 8f60ef8..dd283fd 100644
> > > >--- a/arch/powerpc/mm/fsl_booke_mmu.c
> > > >+++ b/arch/powerpc/mm/fsl_booke_mmu.c
> > > >@@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
> > > >
> > > > 	i = switch_to_as1();
> > > > 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
> > > >-	restore_to_as0(i);
> > > >+	restore_to_as0(i, 0, 0);
> > > 
> > > The device tree virtual address is zero?
> > 
> > No. But if the __pa(PAGE_OFFSET in AS0) is equal to __pa(PAGE_OFFSET in AS1),
> > that means we don't need to do another relocation, and the device tree virtual
> > address is not used in this case.
> 
> The documentation of restore_to_as0() should make it clear that r5 is
> ignored if r4 is zero.

OK, will add.
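
For example, the header comment could be extended along these lines (wording is
just a suggestion):

/*
 * Restore to address space 0 and invalidate the tlb entry created by
 * switch_to_as1.
 * r3 - the tlb entry which should be invalidated
 * r4 - __pa(PAGE_OFFSET in AS0) - __pa(PAGE_OFFSET in AS1)
 * r5 - device tree virtual address; ignored when r4 is zero
 */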

Thanks,
Kevin

> 
> -Scott
> 
> 
> 

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-08-06  0:14   ` Scott Wood
@ 2013-08-06  1:45     ` Kevin Hao
  0 siblings, 0 replies; 25+ messages in thread
From: Kevin Hao @ 2013-08-06  1:45 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 1371 bytes --]

On Mon, Aug 05, 2013 at 07:14:00PM -0500, Scott Wood wrote:
> On Thu, 2013-07-04 at 20:54 +0800, Kevin Hao wrote:
> > @@ -1222,6 +1266,9 @@ _GLOBAL(switch_to_as1)
> >  /*
> >   * Restore to the address space 0 and also invalidate the tlb entry created
> >   * by switch_to_as1.
> > + * r3 - the tlb entry which should be invalidated
> > + * r4 - __pa(PAGE_OFFSET in AS0) - pa(PAGE_OFFSET in AS1)
> > + * r5 - device tree virtual address
> >  */
> >  _GLOBAL(restore_to_as0)
> [snip]
> > diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> > index 3a65644..8280dbb 100644
> > --- a/arch/powerpc/mm/mmu_decl.h
> > +++ b/arch/powerpc/mm/mmu_decl.h
> > @@ -149,7 +149,7 @@ extern void MMU_init_hw(void);
> >  extern unsigned long mmu_mapin_ram(unsigned long top);
> >  extern void adjust_total_lowmem(void);
> >  extern int switch_to_as1(void);
> > -extern void restore_to_as0(int);
> > +extern void restore_to_as0(int, int, void *);
> 
> "int" seems wrong for the second argument. Shouldn't it be phys_addr_t?

This is the offset between __pa(PAGE_OFFSET in AS0) and __pa(PAGE_OFFSET in AS1).
The max offset should never exceed 768M, so an int type is safe here, and using
this type also avoids the ugly #ifdef CONFIG_PHYS_64BIT.

> 
> Also, please use parameter names.

Sure.
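
Putting both points together, the declaration might read (parameter names are
only illustrative):

/* offset = __pa(PAGE_OFFSET in AS0) - __pa(PAGE_OFFSET in AS1); it never
 * exceeds 768M, so int is wide enough */
extern void restore_to_as0(int esel, int offset, void *dt_ptr);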

Thanks,
Kevin
> 
> -Scott
> 
> 
> 

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2013-08-06  1:45 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-04 12:54 [PATCH v2 0/8] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
2013-07-04 12:54 ` [PATCH v2 1/8] powerpc/fsl_booke: protect the access to MAS7 with MMU_FTR_BIG_PHYS Kevin Hao
2013-07-26 23:14   ` Scott Wood
2013-08-04  0:30     ` Kevin Hao
2013-07-04 12:54 ` [PATCH v2 2/8] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
2013-07-04 12:54 ` [PATCH v2 3/8] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
2013-07-26 23:28   ` Scott Wood
2013-08-04  0:38     ` Kevin Hao
2013-07-04 12:54 ` [PATCH v2 4/8] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
2013-07-26 23:37   ` Scott Wood
2013-08-04  0:42     ` Kevin Hao
2013-07-04 12:54 ` [PATCH v2 5/8] memblock: introduce the memblock_reinit function Kevin Hao
2013-07-04 12:54 ` [PATCH v2 6/8] powerpc: introduce early_get_first_memblock_info Kevin Hao
2013-07-27  0:18   ` Scott Wood
2013-08-04  0:45     ` Kevin Hao
2013-08-05 23:59       ` Scott Wood
2013-08-06  1:21         ` Kevin Hao
2013-07-04 12:54 ` [PATCH v2 7/8] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
2013-07-27  0:17   ` Scott Wood
2013-08-04  0:50     ` Kevin Hao
2013-08-06  0:10       ` Scott Wood
2013-08-06  1:23         ` Kevin Hao
2013-08-06  0:14   ` Scott Wood
2013-08-06  1:45     ` Kevin Hao
2013-07-04 12:54 ` [PATCH v2 8/8] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao
