linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel
@ 2013-08-07  1:18 Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 1/7] powerpc/fsl_booke: protect the access to MAS7 Kevin Hao
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

v3:
The main changes include:
  * Drop the patch 5 in v2 (memblock: introduce the memblock_reinit function)
  * Change to use the 64M boot init tlb.

Please refer to the comment section of each patch for more details.

This patch series passed the kdump test with kernel option "crashkernel=64M@32M"
and "crashkernel=64M@80M" on a p2020rdb board.

v2:
These patches are based on Ben's next branch. In this version we choose
to do a second relocation if PAGE_OFFSET is not mapped to memstart_addr,
and we also choose to set the tlb1 entries for the kernel space in address
space 1. With this implementation:
  * We can load the kernel at any place between
     memstart_addr ~ memstart_addr + 768M
  * We can reserve any memory between memstart_addr ~ memstart_addr + 768M
    for a kdump kernel.
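As a hypothetical illustration (the helper name and the check itself are not part of this series), the supported reservation window can be expressed as a simple range test:

```c
#include <stdint.h>

/*
 * Hypothetical helper, not part of this series: check that a
 * crashkernel reservation [base, base + size) falls inside the
 * window the series supports, i.e. memstart_addr up to
 * memstart_addr + 768M (the 32-bit lowmem limit).
 */
static int crashkernel_in_supported_window(uint64_t memstart_addr,
					   uint64_t base, uint64_t size)
{
	const uint64_t window = 768ULL << 20;	/* 768M of lowmem */

	return base >= memstart_addr &&
	       base + size <= memstart_addr + window;
}
```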

I have done a kdump boot on a p2020rdb board with the memory reserved by
'crashkernel=32M@320M'.


v1:
Currently the fsl booke 32bit kernel uses the DYNAMIC_MEMSTART relocation
method. But the RELOCATABLE method is more flexible and has fewer alignment
restrictions. So enable this feature on this platform and use it by
default for the kdump kernel.

These patches have passed the kdump boot test on a p2020rdb board.
---
Kevin Hao (7):
  powerpc/fsl_booke: protect the access to MAS7
  powerpc/fsl_booke: introduce get_phys_addr function
  powerpc: enable the relocatable support for the fsl booke 32bit kernel
  powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  powerpc: introduce early_get_first_memblock_info
  powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  powerpc/fsl_booke: enable the relocatable for the kdump kernel

 arch/powerpc/Kconfig                          |   5 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |   2 +
 arch/powerpc/kernel/head_fsl_booke.S          | 231 ++++++++++++++++++++++++--
 arch/powerpc/kernel/prom.c                    |  41 ++++-
 arch/powerpc/mm/fsl_booke_mmu.c               |  55 ++++++
 arch/powerpc/mm/hugetlbpage-book3e.c          |   3 +-
 arch/powerpc/mm/mmu_decl.h                    |   2 +
 include/linux/of_fdt.h                        |   1 +
 8 files changed, 317 insertions(+), 23 deletions(-)

-- 
1.8.3.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 1/7] powerpc/fsl_booke: protect the access to MAS7
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 2/7] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

The e500v1 doesn't implement MAS7, so we should avoid accessing this
register on that implementation. In the current kernel, the accesses to
MAS7 are protected by either CONFIG_PHYS_64BIT or MMU_FTR_BIG_PHYS.
Since some code is executed before code patching, we have to use
CONFIG_PHYS_64BIT in those cases.
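A rough C model of the runtime-guard style (the globals here just stand in for the MMU feature bit and the SPR; this is a sketch, not kernel code):

```c
#include <stdint.h>

static int big_phys;		/* stands in for MMU_FTR_BIG_PHYS */
static uint32_t mas7_reg;	/* stands in for SPRN_MAS7 */

/*
 * After code patching we can test the MMU feature at runtime, so
 * e500v1 (which has no MAS7) simply skips the upper write. Code that
 * runs before the feature fixups cannot do this and must rely on the
 * compile-time CONFIG_PHYS_64BIT guard instead.
 */
static void write_mas7_mas3(uint64_t mas7_3, uint32_t *mas3_reg)
{
	if (big_phys)
		mas7_reg = (uint32_t)(mas7_3 >> 32);
	*mas3_reg = (uint32_t)mas7_3;
}
```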

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3: Use ifdef CONFIG_PHYS_64BIT for the code running before code patching.

v2: A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 2 ++
 arch/powerpc/mm/hugetlbpage-book3e.c | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index d10a7ca..304e6f2 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -82,7 +82,9 @@ _ENTRY(_start);
 	and	r19,r3,r18		/* r19 = page offset */
 	andc	r31,r20,r18		/* r31 = page base */
 	or	r31,r31,r19		/* r31 = devtree phys addr */
+#ifdef CONFIG_PHYS_64BIT
 	mfspr	r30,SPRN_MAS7
+#endif
 
 	li	r25,0			/* phys kernel start (low) */
 	li	r24,0			/* CPU number */
diff --git a/arch/powerpc/mm/hugetlbpage-book3e.c b/arch/powerpc/mm/hugetlbpage-book3e.c
index 3bc7006..ac63e7e 100644
--- a/arch/powerpc/mm/hugetlbpage-book3e.c
+++ b/arch/powerpc/mm/hugetlbpage-book3e.c
@@ -103,7 +103,8 @@ void book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea,
 	if (mmu_has_feature(MMU_FTR_USE_PAIRED_MAS)) {
 		mtspr(SPRN_MAS7_MAS3, mas7_3);
 	} else {
-		mtspr(SPRN_MAS7, upper_32_bits(mas7_3));
+		if (mmu_has_feature(MMU_FTR_BIG_PHYS))
+			mtspr(SPRN_MAS7, upper_32_bits(mas7_3));
 		mtspr(SPRN_MAS3, lower_32_bits(mas7_3));
 	}
 
-- 
1.8.3.1


* [PATCH v3 2/7] powerpc/fsl_booke: introduce get_phys_addr function
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 1/7] powerpc/fsl_booke: protect the access to MAS7 Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

Move the code which translates an effective address to a physical
address into a separate function, so it can be reused by other code.
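What the new helper computes can be modeled in C (tsize and mas3 are plain parameters here; reading the real SPRs after tlbsx needs the asm version, and TSIZE is assumed to encode the page size as 2^TSIZE KB):

```c
#include <stdint.h>

/* C model of the translation done by get_phys_addr (32-bit part). */
static uint32_t ea_to_pa(uint32_t ea, uint32_t mas3, uint32_t tsize)
{
	uint32_t page_size = 1024u << tsize;	/* page size in bytes */
	uint32_t mask = page_size - 1;

	/* page base from MAS3, page offset from the effective address */
	return (mas3 & ~mask) | (ea & mask);
}
```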

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3: Use ifdef CONFIG_PHYS_64BIT to protect the access to MAS7

v2: A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 50 +++++++++++++++++++++---------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 304e6f2..377bd81 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -65,26 +65,9 @@ _ENTRY(_start);
 	nop
 
 	/* Translate device tree address to physical, save in r30/r31 */
-	mfmsr	r16
-	mfspr	r17,SPRN_PID
-	rlwinm	r17,r17,16,0x3fff0000	/* turn PID into MAS6[SPID] */
-	rlwimi	r17,r16,28,0x00000001	/* turn MSR[DS] into MAS6[SAS] */
-	mtspr	SPRN_MAS6,r17
-
-	tlbsx	0,r3			/* must succeed */
-
-	mfspr	r16,SPRN_MAS1
-	mfspr	r20,SPRN_MAS3
-	rlwinm	r17,r16,25,0x1f		/* r17 = log2(page size) */
-	li	r18,1024
-	slw	r18,r18,r17		/* r18 = page size */
-	addi	r18,r18,-1
-	and	r19,r3,r18		/* r19 = page offset */
-	andc	r31,r20,r18		/* r31 = page base */
-	or	r31,r31,r19		/* r31 = devtree phys addr */
-#ifdef CONFIG_PHYS_64BIT
-	mfspr	r30,SPRN_MAS7
-#endif
+	bl	get_phys_addr
+	mr	r30,r3
+	mr	r31,r4
 
 	li	r25,0			/* phys kernel start (low) */
 	li	r24,0			/* CPU number */
@@ -858,6 +841,33 @@ KernelSPE:
 #endif /* CONFIG_SPE */
 
 /*
+ * Translate the effective addr in r3 to a phys addr. The phys addr is
+ * returned in r3 (higher 32 bits) and r4 (lower 32 bits).
+ */
+get_phys_addr:
+	mfmsr	r8
+	mfspr	r9,SPRN_PID
+	rlwinm	r9,r9,16,0x3fff0000	/* turn PID into MAS6[SPID] */
+	rlwimi	r9,r8,28,0x00000001	/* turn MSR[DS] into MAS6[SAS] */
+	mtspr	SPRN_MAS6,r9
+
+	tlbsx	0,r3			/* must succeed */
+
+	mfspr	r8,SPRN_MAS1
+	mfspr	r12,SPRN_MAS3
+	rlwinm	r9,r8,25,0x1f		/* r9 = log2(page size) */
+	li	r10,1024
+	slw	r10,r10,r9		/* r10 = page size */
+	addi	r10,r10,-1
+	and	r11,r3,r10		/* r11 = page offset */
+	andc	r4,r12,r10		/* r4 = page base */
+	or	r4,r4,r11		/* r4 = devtree phys addr */
+#ifdef CONFIG_PHYS_64BIT
+	mfspr	r3,SPRN_MAS7
+#endif
+	blr
+
+/*
  * Global functions
  */
 
-- 
1.8.3.1


* [PATCH v3 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 1/7] powerpc/fsl_booke: protect the access to MAS7 Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 2/7] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-12-18 23:48   ` [v3, " Scott Wood
  2013-08-07  1:18 ` [PATCH v3 4/7] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

This is based on the code in head_44x.S. The difference is that the
initial tlb size we use is 64M. With this patch we can only load the
kernel at an address between memstart_addr ~ memstart_addr + 64M. We
will remove this restriction in the following patches.
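The virtual address passed to relocate() can be sketched in C as follows (KERNELBASE is a parameter here for illustration; in the kernel it is a constant):

```c
#include <stdint.h>

#define SZ_64M	0x04000000u

/*
 * Keep the kernel's offset within its 64M page, but rebase it on
 * KERNELBASE. This mirrors the rlwinm/subf/add sequence in
 * head_fsl_booke.S.
 */
static uint32_t reloc_virt_addr(uint32_t kernelbase, uint32_t phys_start)
{
	uint32_t off = (phys_start % SZ_64M) - (kernelbase % SZ_64M);

	return kernelbase + off;	/* required virtual address */
}
```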

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3:
  * Use the 64M align.
  * typo fix.

v2: Move the code to set kernstart_addr and virt_phys_offset to a c function.
    So we can expand it easily later.

 arch/powerpc/Kconfig                          |  2 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  2 ++
 arch/powerpc/kernel/head_fsl_booke.S          | 37 +++++++++++++++++++++++++++
 arch/powerpc/mm/fsl_booke_mmu.c               | 28 ++++++++++++++++++++
 4 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3bf72cd..57dc8f9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -859,7 +859,7 @@ config DYNAMIC_MEMSTART
 
 config RELOCATABLE
 	bool "Build a relocatable kernel"
-	depends on ADVANCED_OPTIONS && FLATMEM && 44x
+	depends on ADVANCED_OPTIONS && FLATMEM && (44x || FSL_BOOKE)
 	select NONSTATIC_KERNEL
 	help
 	  This builds a kernel image that is capable of running at the
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index a92c79b..f22e7e4 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -176,6 +176,8 @@ skpinv:	addi	r6,r6,1				/* Increment */
 /* 7. Jump to KERNELBASE mapping */
 	lis	r6,(KERNELBASE & ~0xfff)@h
 	ori	r6,r6,(KERNELBASE & ~0xfff)@l
+	rlwinm	r7,r25,0,0x03ffffff
+	add	r6,r7,r6
 
 #elif defined(ENTRY_MAPPING_KEXEC_SETUP)
 /*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 377bd81..f6ec9a3 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -73,6 +73,33 @@ _ENTRY(_start);
 	li	r24,0			/* CPU number */
 	li	r23,0			/* phys kernel start (high) */
 
+#ifdef CONFIG_RELOCATABLE
+	bl	0f			/* Get our runtime address */
+0:	mflr	r3			/* Make it accessible */
+	addis	r3,r3,(_stext - 0b)@ha
+	addi	r3,r3,(_stext - 0b)@l	/* Get our current runtime base */
+
+	/* Translate _stext address to physical, save in r23/r25 */
+	bl	get_phys_addr
+	mr	r23,r3
+	mr	r25,r4
+
+	/*
+	 * We have the runtime (virtual) address of our base.
+	 * We calculate our shift of offset from a 64M page.
+	 * We could map the 64M page we belong to at PAGE_OFFSET and
+	 * get going from there.
+	 */
+	lis	r4,KERNELBASE@h
+	ori	r4,r4,KERNELBASE@l
+	rlwinm	r6,r25,0,0x3ffffff		/* r6 = PHYS_START % 64M */
+	rlwinm	r5,r4,0,0x3ffffff		/* r5 = KERNELBASE % 64M */
+	subf	r3,r5,r6			/* r3 = r6 - r5 */
+	add	r3,r4,r3			/* Required Virtual Address */
+
+	bl	relocate
+#endif
+
 /* We try to not make any assumptions about how the boot loader
  * setup or used the TLBs.  We invalidate all mappings from the
  * boot loader and load a single entry in TLB1[0] to map the
@@ -182,6 +209,16 @@ _ENTRY(__early_start)
 
 	bl	early_init
 
+#ifdef CONFIG_RELOCATABLE
+#ifdef CONFIG_PHYS_64BIT
+	mr	r3,r23
+	mr	r4,r25
+#else
+	mr	r3,r25
+#endif
+	bl	relocate_init
+#endif
+
 #ifdef CONFIG_DYNAMIC_MEMSTART
 	lis	r3,kernstart_addr@ha
 	la	r3,kernstart_addr@l(r3)
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 07ba45b..ce4a116 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -241,4 +241,32 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	/* 64M mapped initially according to head_fsl_booke.S */
 	memblock_set_current_limit(min_t(u64, limit, 0x04000000));
 }
+
+#ifdef CONFIG_RELOCATABLE
+notrace void __init relocate_init(phys_addr_t start)
+{
+	unsigned long base = KERNELBASE;
+
+	/*
+	 * Relocatable kernel support based on processing of dynamic
+	 * relocation entries.
+	 * Compute the virt_phys_offset :
+	 * virt_phys_offset = stext.run - kernstart_addr
+	 *
+	 * stext.run = (KERNELBASE & ~0x3ffffff) + (kernstart_addr & 0x3ffffff)
+	 * When we relocate, we have :
+	 *
+	 *	(kernstart_addr & 0x3ffffff) = (stext.run & 0x3ffffff)
+	 *
+	 * hence:
+	 *  virt_phys_offset = (KERNELBASE & ~0x3ffffff) -
+	 *                              (kernstart_addr & ~0x3ffffff)
+	 *
+	 */
+	kernstart_addr = start;
+	start &= ~0x3ffffff;
+	base &= ~0x3ffffff;
+	virt_phys_offset = base - start;
+}
+#endif
 #endif
-- 
1.8.3.1


* [PATCH v3 4/7] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (2 preceding siblings ...)
  2013-08-07  1:18 ` [PATCH v3 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 5/7] powerpc: introduce early_get_first_memblock_info Kevin Hao
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

We use the tlb1 entries to map low mem to the kernel space. The
current code assumes that the first tlb entry covers the kernel
image. But this is not true in some special cases, such as when we
run a relocatable kernel above 64M or set CONFIG_KERNEL_START above
64M. So we choose to switch to address space 1 before setting these
tlb entries.
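The MAS0 value the asm builds with `lis`/`rlwimi` corresponds to the following C sketch (the macros mirror the kernel's e500 definitions):

```c
#include <stdint.h>

#define MAS0_TLBSEL(x)	((uint32_t)(x) << 28)
#define MAS0_ESEL(x)	(((uint32_t)(x) & 0xfffu) << 16)

/*
 * `lis r4,0x1000` selects TLB1 and `rlwimi r4,r3,16,4,15` inserts
 * the entry index (ESEL) into bits 16-27.
 */
static uint32_t make_mas0(int esel)
{
	return MAS0_TLBSEL(1) | MAS0_ESEL(esel);
}
```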

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3: Typo fix.

v2: A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 81 ++++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/fsl_booke_mmu.c      |  2 +
 arch/powerpc/mm/mmu_decl.h           |  2 +
 3 files changed, 85 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index f6ec9a3..7e9724e 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1170,6 +1170,87 @@ __secondary_hold_acknowledge:
 #endif
 
 /*
+ * Create a tlb entry with the same effective and physical address as
+ * the tlb entry used by the currently running code, but set TS to 1.
+ * Then switch to address space 1. It returns with r3 set to the ESEL
+ * of the newly created tlb entry.
+ */
+_GLOBAL(switch_to_as1)
+	mflr	r5
+
+	/* Find an unused entry */
+	mfspr	r3,SPRN_TLB1CFG
+	andi.	r3,r3,0xfff
+	mfspr	r4,SPRN_PID
+	rlwinm	r4,r4,16,0x3fff0000	/* turn PID into MAS6[SPID] */
+	mtspr	SPRN_MAS6,r4
+1:	lis	r4,0x1000		/* Set MAS0(TLBSEL) = 1 */
+	addi	r3,r3,-1
+	rlwimi	r4,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r4
+	tlbre
+	mfspr	r4,SPRN_MAS1
+	andis.	r4,r4,MAS1_VALID@h
+	bne	1b
+
+	/* Get the tlb entry used by the current running code */
+	bl	0f
+0:	mflr	r4
+	tlbsx	0,r4
+
+	mfspr	r4,SPRN_MAS1
+	ori	r4,r4,MAS1_TS		/* Set the TS = 1 */
+	mtspr	SPRN_MAS1,r4
+
+	mfspr	r4,SPRN_MAS0
+	rlwinm	r4,r4,0,~MAS0_ESEL_MASK
+	rlwimi	r4,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r4
+	tlbwe
+	isync
+	sync
+
+	mfmsr	r4
+	ori	r4,r4,MSR_IS | MSR_DS
+	mtspr	SPRN_SRR0,r5
+	mtspr	SPRN_SRR1,r4
+	sync
+	rfi
+
+/*
+ * Restore to the address space 0 and also invalidate the tlb entry created
+ * by switch_to_as1.
+*/
+_GLOBAL(restore_to_as0)
+	mflr	r0
+
+	bl	0f
+0:	mflr	r9
+	addi	r9,r9,1f - 0b
+
+	mfmsr	r7
+	li	r8,(MSR_IS | MSR_DS)
+	andc	r7,r7,r8
+
+	mtspr	SPRN_SRR0,r9
+	mtspr	SPRN_SRR1,r7
+	sync
+	rfi
+
+	/* Invalidate the temporary tlb entry for AS1 */
+1:	lis	r9,0x1000		/* Set MAS0(TLBSEL) = 1 */
+	rlwimi	r9,r3,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r3) */
+	mtspr	SPRN_MAS0,r9
+	tlbre
+	mfspr	r9,SPRN_MAS1
+	rlwinm	r9,r9,0,2,31		/* Clear MAS1 Valid and IPPROT */
+	mtspr	SPRN_MAS1,r9
+	tlbwe
+	isync
+	mtlr	r0
+	blr
+
+/*
  * We put a few things here that have to be page-aligned. This stuff
  * goes at the beginning of the data segment, which is page-aligned.
  */
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index ce4a116..1d54f6d 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -222,7 +222,9 @@ void __init adjust_total_lowmem(void)
 	/* adjust lowmem size to __max_low_memory */
 	ram = min((phys_addr_t)__max_low_memory, (phys_addr_t)total_lowmem);
 
+	i = switch_to_as1();
 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
+	restore_to_as0(i);
 
 	pr_info("Memory CAM mapping: ");
 	for (i = 0; i < tlbcam_index - 1; i++)
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 83eb5d5..eefbf7b 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -148,6 +148,8 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
 extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(unsigned long top);
 extern void adjust_total_lowmem(void);
+extern int switch_to_as1(void);
+extern void restore_to_as0(int esel);
 #endif
 extern void loadcam_entry(unsigned int index);
 
-- 
1.8.3.1


* [PATCH v3 5/7] powerpc: introduce early_get_first_memblock_info
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (3 preceding siblings ...)
  2013-08-07  1:18 ` [PATCH v3 4/7] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 6/7] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 7/7] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala, Benjamin Herrenschmidt; +Cc: linuxppc

Since a relocatable kernel can be loaded at any place, there is no
relation between the kernel start address and memstart_addr, so we
can't calculate memstart_addr from the kernel start address. We also
can't defer the relocation until we get the real memstart_addr from
the device tree, because that would be too late. So introduce a new
function which we can use to get the first memblock address and size
at a very early stage (before machine_init).
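A standalone model of the add_mem_to_memblock trick (the mock memblock and the hard-coded region are illustration only): the same scan callback either records memory or only captures the first region, depending on the flag:

```c
#include <stdint.h>

static int add_mem_to_memblock = 1;
static uint64_t first_memblock_size;
static uint64_t memblock_total;	/* mock of the real memblock */

/* Called for each memory region found in the device tree. */
static void early_init_dt_add_memory(uint64_t base, uint64_t size)
{
	if (base == 0)
		first_memblock_size = size;	/* always recorded */
	if (add_mem_to_memblock)
		memblock_total += size;		/* skipped in early scan */
}

/* Early-stage query: scan without touching the (mock) memblock. */
static uint64_t get_first_memblock_size(void)
{
	add_mem_to_memblock = 0;
	early_init_dt_add_memory(0, 512ULL << 20);	/* pretend DT scan */
	add_mem_to_memblock = 1;
	return first_memblock_size;
}
```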

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3: Introduce a variable to avoid messing up the memblock.

v2: A new patch in v2.

 arch/powerpc/kernel/prom.c | 41 ++++++++++++++++++++++++++++++++++++++++-
 include/linux/of_fdt.h     |  1 +
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index eb23ac9..bfd525e 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -521,6 +521,20 @@ static int __init early_init_dt_scan_memory_ppc(unsigned long node,
 	return early_init_dt_scan_memory(node, uname, depth, data);
 }
 
+/*
+ * For a relocatable kernel, we need to get the memstart_addr first,
+ * then use it to calculate the virtual kernel start address. This has
+ * to happen at a very early stage (before machine_init). In this case,
+ * we just want to get the memstart_addr and do not want to mess up the
+ * memblock at this stage. So introduce a variable to skip the
+ * memblock_add() in this case.
+ */
+#ifdef CONFIG_RELOCATABLE
+static int add_mem_to_memblock = 1;
+#else
+#define add_mem_to_memblock 1
+#endif
+
 void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 {
 #ifdef CONFIG_PPC64
@@ -541,7 +555,8 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
 	}
 
 	/* Add the chunk to the MEMBLOCK list */
-	memblock_add(base, size);
+	if (add_mem_to_memblock)
+		memblock_add(base, size);
 }
 
 void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
@@ -753,6 +768,30 @@ void __init early_init_devtree(void *params)
 	DBG(" <- early_init_devtree()\n");
 }
 
+#ifdef CONFIG_RELOCATABLE
+/*
+ * This function run before early_init_devtree, so we have to init
+ * initial_boot_params.
+ */
+void __init early_get_first_memblock_info(void *params, phys_addr_t *size)
+{
+	/* Setup flat device-tree pointer */
+	initial_boot_params = params;
+
+	/*
+	 * Scan the memory nodes with add_mem_to_memblock set to 0 to
+	 * avoid messing up the memblock.
+	 */
+	add_mem_to_memblock = 0;
+	of_scan_flat_dt(early_init_dt_scan_root, NULL);
+	of_scan_flat_dt(early_init_dt_scan_memory_ppc, NULL);
+	add_mem_to_memblock = 1;
+
+	if (size)
+		*size = first_memblock_size;
+}
+#endif
+
 /*******
  *
  * New implementation of the OF "find" APIs, return a refcounted
diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
index ed136ad..befe744 100644
--- a/include/linux/of_fdt.h
+++ b/include/linux/of_fdt.h
@@ -117,6 +117,7 @@ extern int early_init_dt_scan_root(unsigned long node, const char *uname,
 /* Other Prototypes */
 extern void unflatten_device_tree(void);
 extern void early_init_devtree(void *);
+extern void early_get_first_memblock_info(void *, phys_addr_t *);
 #else /* CONFIG_OF_FLATTREE */
 static inline void unflatten_device_tree(void) {}
 #endif /* CONFIG_OF_FLATTREE */
-- 
1.8.3.1


* [PATCH v3 6/7] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (4 preceding siblings ...)
  2013-08-07  1:18 ` [PATCH v3 5/7] powerpc: introduce early_get_first_memblock_info Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  2013-08-07  1:18 ` [PATCH v3 7/7] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

This is always true for a non-relocatable kernel. Otherwise the kernel
would get stuck. But for a relocatable kernel it is a little more
complicated. When booting a relocatable kernel, we just align the
kernel start address down to 64M and map PAGE_OFFSET from there. The
relocation is based on this virtual address. But if this address is
not the same as memstart_addr, we will have to change the mapping of
PAGE_OFFSET to the real memstart_addr and do another relocation.
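The virtual start address used for the second relocation reduces to the arithmetic below (PAGE_OFFSET is fixed at a typical value for illustration; only the low 32 bits matter because the offset never exceeds 1G):

```c
#include <stdint.h>

#define PAGE_OFFSET	0xC0000000u

/*
 * PAGE_OFFSET now maps memstart_addr, so the kernel runs at
 * PAGE_OFFSET plus its physical offset from the start of memory.
 */
static uint32_t second_reloc_virt(uint64_t kernstart_addr,
				  uint64_t memstart_addr)
{
	return PAGE_OFFSET +
	       (uint32_t)kernstart_addr - (uint32_t)memstart_addr;
}
```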

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3:
  * Typo fix.
  * Refactor relocate_init, no function change.
  * Map only 64M memory before the second relocation.
  * Comments update.

v2: A new patch in v2.

 arch/powerpc/kernel/head_fsl_booke.S | 75 +++++++++++++++++++++++++++++++++---
 arch/powerpc/mm/fsl_booke_mmu.c      | 37 +++++++++++++++---
 arch/powerpc/mm/mmu_decl.h           |  2 +-
 3 files changed, 102 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 7e9724e..c3989d9 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -84,6 +84,39 @@ _ENTRY(_start);
 	mr	r23,r3
 	mr	r25,r4
 
+	bl	0f
+0:	mflr	r8
+	addis	r3,r8,(is_second_reloc - 0b)@ha
+	lwz	r19,(is_second_reloc - 0b)@l(r3)
+
+	/* Check if this is the second relocation. */
+	cmpwi	r19,1
+	bne	1f
+
+	/*
+	 * For the second relocation, we already got the real memstart_addr
+	 * from the device tree, so we will map PAGE_OFFSET to memstart_addr
+	 * and the virtual address of the kernel start should be:
+	 *          PAGE_OFFSET + (kernstart_addr - memstart_addr)
+	 * The offset between kernstart_addr and memstart_addr will never
+	 * be beyond 1G, so we can just use their lower 32 bits for the
+	 * calculation.
+	 */
+	lis	r3,PAGE_OFFSET@h
+
+	addis	r4,r8,(kernstart_addr - 0b)@ha
+	addi	r4,r4,(kernstart_addr - 0b)@l
+	lwz	r5,4(r4)
+
+	addis	r6,r8,(memstart_addr - 0b)@ha
+	addi	r6,r6,(memstart_addr - 0b)@l
+	lwz	r7,4(r6)
+
+	subf	r5,r7,r5
+	add	r3,r3,r5
+	b	2f
+
+1:
 	/*
 	 * We have the runtime (virtual) address of our base.
 	 * We calculate our shift of offset from a 64M page.
@@ -97,7 +130,7 @@ _ENTRY(_start);
 	subf	r3,r5,r6			/* r3 = r6 - r5 */
 	add	r3,r4,r3			/* Required Virtual Address */
 
-	bl	relocate
+2:	bl	relocate
 #endif
 
 /* We try to not make any assumptions about how the boot loader
@@ -121,10 +154,19 @@ _ENTRY(_start);
 
 _ENTRY(__early_start)
 
+#ifdef CONFIG_RELOCATABLE
+	/*
+	 * For the second relocation, we already set the right tlb entries
+	 * for the kernel space, so skip the code in fsl_booke_entry_mapping.S
+	*/
+	cmpwi	r19,1
+	beq	set_ivor
+#endif
 #define ENTRY_MAPPING_BOOT_SETUP
 #include "fsl_booke_entry_mapping.S"
 #undef ENTRY_MAPPING_BOOT_SETUP
 
+set_ivor:
 	/* Establish the interrupt vector offsets */
 	SET_IVOR(0,  CriticalInput);
 	SET_IVOR(1,  MachineCheck);
@@ -210,11 +252,13 @@ _ENTRY(__early_start)
 	bl	early_init
 
 #ifdef CONFIG_RELOCATABLE
+	mr	r3,r30
+	mr	r4,r31
 #ifdef CONFIG_PHYS_64BIT
-	mr	r3,r23
-	mr	r4,r25
+	mr	r5,r23
+	mr	r6,r25
 #else
-	mr	r3,r25
+	mr	r5,r25
 #endif
 	bl	relocate_init
 #endif
@@ -1220,6 +1264,9 @@ _GLOBAL(switch_to_as1)
 /*
  * Restore to the address space 0 and also invalidate the tlb entry created
  * by switch_to_as1.
+ * r3 - the tlb entry which should be invalidated
+ * r4 - __pa(PAGE_OFFSET in AS0) - __pa(PAGE_OFFSET in AS1)
+ * r5 - device tree virtual address. If r4 is 0, r5 is ignored.
 */
 _GLOBAL(restore_to_as0)
 	mflr	r0
@@ -1228,7 +1275,15 @@ _GLOBAL(restore_to_as0)
 0:	mflr	r9
 	addi	r9,r9,1f - 0b
 
-	mfmsr	r7
+	/*
+	 * We may map the PAGE_OFFSET in AS0 to a different physical address,
+	 * so we need to calculate the right jump and device tree addresses
+	 * based on the offset passed in r4.
+	 */
+	subf	r9,r4,r9
+	subf	r5,r4,r5
+
+2:	mfmsr	r7
 	li	r8,(MSR_IS | MSR_DS)
 	andc	r7,r7,r8
 
@@ -1247,9 +1302,19 @@ _GLOBAL(restore_to_as0)
 	mtspr	SPRN_MAS1,r9
 	tlbwe
 	isync
+
+	cmpwi	r4,0
+	bne	3f
 	mtlr	r0
 	blr
 
+	/*
+	 * The PAGE_OFFSET will map to a different physical address, so
+	 * jump to _start to do the relocation again.
+	*/
+3:	mr	r3,r5
+	bl	_start
+
 /*
  * We put a few things here that have to be page-aligned. This stuff
  * goes at the beginning of the data segment, which is page-aligned.
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 1d54f6d..048d716 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -224,7 +224,7 @@ void __init adjust_total_lowmem(void)
 
 	i = switch_to_as1();
 	__max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM);
-	restore_to_as0(i);
+	restore_to_as0(i, 0, 0);
 
 	pr_info("Memory CAM mapping: ");
 	for (i = 0; i < tlbcam_index - 1; i++)
@@ -245,17 +245,25 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 }
 
 #ifdef CONFIG_RELOCATABLE
-notrace void __init relocate_init(phys_addr_t start)
+int __initdata is_second_reloc;
+notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 {
 	unsigned long base = KERNELBASE;
 
+	kernstart_addr = start;
+	if (is_second_reloc) {
+		virt_phys_offset = PAGE_OFFSET - memstart_addr;
+		return;
+	}
+
 	/*
 	 * Relocatable kernel support based on processing of dynamic
-	 * relocation entries.
-	 * Compute the virt_phys_offset :
+	 * relocation entries. Before we get the real memstart_addr,
+	 * we compute the virt_phys_offset like this:
 	 * virt_phys_offset = stext.run - kernstart_addr
 	 *
-	 * stext.run = (KERNELBASE & ~0x3ffffff) + (kernstart_addr & 0x3ffffff)
+	 * stext.run = (KERNELBASE & ~0x3ffffff) +
+	 *				(kernstart_addr & 0x3ffffff)
 	 * When we relocate, we have :
 	 *
 	 *	(kernstart_addr & 0x3ffffff) = (stext.run & 0x3ffffff)
@@ -265,10 +273,27 @@ notrace void __init relocate_init(phys_addr_t start)
 	 *                              (kernstart_addr & ~0x3ffffff)
 	 *
 	 */
-	kernstart_addr = start;
 	start &= ~0x3ffffff;
 	base &= ~0x3ffffff;
 	virt_phys_offset = base - start;
+	early_get_first_memblock_info(__va(dt_ptr), NULL);
+	/*
+	 * We now have the memstart_addr, so we should check whether this
+	 * address is the same as what PAGE_OFFSET maps to now. If not,
+	 * we have to change the mapping of PAGE_OFFSET to memstart_addr
+	 * and do a second relocation.
+	 */
+	if (start != memstart_addr) {
+		int n, offset = memstart_addr - start;
+
+		is_second_reloc = 1;
+		n = switch_to_as1();
+		/* map a 64M area for the second relocation */
+		map_mem_in_cams(0x4000000UL, CONFIG_LOWMEM_CAM_NUM);
+		restore_to_as0(n, offset, __va(dt_ptr));
+		/* We should never reach here */
+		panic("Relocation error");
+	}
 }
 #endif
 #endif
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index eefbf7b..91da910 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -149,7 +149,7 @@ extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(unsigned long top);
 extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
-extern void restore_to_as0(int esel);
+extern void restore_to_as0(int esel, int offset, void *dt_ptr);
 #endif
 extern void loadcam_entry(unsigned int index);
 
-- 
1.8.3.1


* [PATCH v3 7/7] powerpc/fsl_booke: enable the relocatable for the kdump kernel
  2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
                   ` (5 preceding siblings ...)
  2013-08-07  1:18 ` [PATCH v3 6/7] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
@ 2013-08-07  1:18 ` Kevin Hao
  6 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2013-08-07  1:18 UTC (permalink / raw)
  To: Scott Wood, Kumar Gala; +Cc: linuxppc

RELOCATABLE is more flexible and has no alignment restriction, and it
is a superset of DYNAMIC_MEMSTART. So use it by default for a kdump
kernel.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
---
v3: no change.

v2: A new patch in v2.

 arch/powerpc/Kconfig | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 57dc8f9..7553d72 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -376,8 +376,7 @@ config KEXEC
 config CRASH_DUMP
 	bool "Build a kdump crash kernel"
 	depends on PPC64 || 6xx || FSL_BOOKE || (44x && !SMP)
-	select RELOCATABLE if PPC64 || 44x
-	select DYNAMIC_MEMSTART if FSL_BOOKE
+	select RELOCATABLE if PPC64 || 44x || FSL_BOOKE
 	help
 	  Build a kernel suitable for use as a kdump capture kernel.
 	  The same kernel binary can be used as production kernel and dump
-- 
1.8.3.1


* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-08-07  1:18 ` [PATCH v3 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
@ 2013-12-18 23:48   ` Scott Wood
  2013-12-20  7:43     ` Kevin Hao
  0 siblings, 1 reply; 17+ messages in thread
From: Scott Wood @ 2013-12-18 23:48 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Wed, Aug 07, 2013 at 09:18:31AM +0800, Kevin Hao wrote:
> This is based on the codes in the head_44x.S. The difference is that
> the init tlb size we used is 64M. With this patch we can only load the
> kernel at address between memstart_addr ~ memstart_addr + 64M. We will
> fix this restriction in the following patches.

Which following patch fixes the restriction?  With all seven patches
applied, I was still only successful booting within 64M of memstart_addr.

-Scott

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-12-18 23:48   ` [v3, " Scott Wood
@ 2013-12-20  7:43     ` Kevin Hao
  2014-01-04  0:49       ` Scott Wood
  0 siblings, 1 reply; 17+ messages in thread
From: Kevin Hao @ 2013-12-20  7:43 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 2568 bytes --]

On Wed, Dec 18, 2013 at 05:48:25PM -0600, Scott Wood wrote:
> On Wed, Aug 07, 2013 at 09:18:31AM +0800, Kevin Hao wrote:
> > This is based on the codes in the head_44x.S. The difference is that
> > the init tlb size we used is 64M. With this patch we can only load the
> > kernel at address between memstart_addr ~ memstart_addr + 64M. We will
> > fix this restriction in the following patches.
> 
> Which following patch fixes the restriction?  With all seven patches
> applied, I was still only successful booting within 64M of memstart_addr.

There is a bug in this patch series when booting above 64M. It seems
that I missed testing this previously. Sorry for that. With the following
change I can boot the kernel at 0x5000000.

diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index 048d716ae706..ce0c7d7db6c3 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -171,11 +171,10 @@ unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
 	return 1UL << camsize;
 }
 
-unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
+static unsigned long map_mem_in_cams_addr(phys_addr_t phys, unsigned long virt,
+					unsigned long ram, int max_cam_idx)
 {
 	int i;
-	unsigned long virt = PAGE_OFFSET;
-	phys_addr_t phys = memstart_addr;
 	unsigned long amount_mapped = 0;
 
 	/* Calculate CAM values */
@@ -195,6 +194,14 @@ unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
 	return amount_mapped;
 }
 
+unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
+{
+	unsigned long virt = PAGE_OFFSET;
+	phys_addr_t phys = memstart_addr;
+
+	return map_mem_in_cams_addr(phys, virt, ram, max_cam_idx);
+}
+
 #ifdef CONFIG_PPC32
 
 #if defined(CONFIG_LOWMEM_CAM_NUM_BOOL) && (CONFIG_LOWMEM_CAM_NUM >= NUM_TLBCAMS)
@@ -289,7 +296,11 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 		is_second_reloc = 1;
 		n = switch_to_as1();
 		/* map a 64M area for the second relocation */
-		map_mem_in_cams(0x4000000UL, CONFIG_LOWMEM_CAM_NUM);
+		if (memstart_addr > start)
+			map_mem_in_cams(0x4000000, CONFIG_LOWMEM_CAM_NUM);
+		else
+			map_mem_in_cams_addr(start, PAGE_OFFSET - offset,
+					0x4000000, CONFIG_LOWMEM_CAM_NUM);
 		restore_to_as0(n, offset, __va(dt_ptr));
 		/* We should never reach here */
 		panic("Relocation error");

I will do more test and then create a new spin to merge this change and rebase
on the latest kernel. Thanks for the review.

Kevin
> 
> -Scott

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2013-12-20  7:43     ` Kevin Hao
@ 2014-01-04  0:49       ` Scott Wood
  2014-01-04  6:34         ` Kevin Hao
  0 siblings, 1 reply; 17+ messages in thread
From: Scott Wood @ 2014-01-04  0:49 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Fri, 2013-12-20 at 15:43 +0800, Kevin Hao wrote:
> On Wed, Dec 18, 2013 at 05:48:25PM -0600, Scott Wood wrote:
> > On Wed, Aug 07, 2013 at 09:18:31AM +0800, Kevin Hao wrote:
> > > This is based on the codes in the head_44x.S. The difference is that
> > > the init tlb size we used is 64M. With this patch we can only load the
> > > kernel at address between memstart_addr ~ memstart_addr + 64M. We will
> > > fix this restriction in the following patches.
> > 
> > Which following patch fixes the restriction?  With all seven patches
> > applied, I was still only successful booting within 64M of memstart_addr.
> 
> There is bug in this patch series when booting above the 64M. It seems
> that I missed to test this previously. Sorry for that. With the following
> change I can boot the kernel at 0x5000000.

I tried v4 and it still doesn't work for me over 64M (without increasing
the start of memory).  I pulled the following out of the log buffer when
booting at 0x5000000 (after cleaning up the binary goo -- is that
something new?):

Unable to handle kernel paging request for data at address 0xbffe4008
Faulting instruction address: 0xc16ee934
Oops: Kernel access of bad area, sig: 11 [#1]
SMP NR_CPUS=8
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0-rc1-00065-g422752a #12
task: c18f2340 ti: c192c000 task.ti: c192c000
NIP: c16ee934 LR: c16d1860 CTR: c100e96c
REGS: c192dee0 TRAP: 0300   Not tainted  (3.13.0-rc1-00065-g422752a)
MSR: 00021002 <CE,ME>  CR: 42044022  XER: 20000000
DEAR: bffe4008 ESR: 00000000 
GPR00: c16d1860 c192df90 c18f2340 c16eec58 00000000 00000000 05000000 00000000 
GPR08: 00000000 bffe4000 00000000 bc000000 00000000 0570ad40 ffffffff 7ffb0aec 
GPR16: 00000000 00000000 7fe4dd70 00000000 7fe4ddb0 00000000 c192c000 00000007 
GPR24: 00000000 c1940000 c16eec58 00000000 ffffffff 03fe4000 00000000 c18f0000
NIP [c16ee934] of_scan_flat_dt+0x28/0x148
LR [c16d1860] early_get_first_memblock_info+0x38/0x84
Call Trace:
[c192dfc0] [c16d1860] early_get_first_memblock_info+0x38/0x84
[c192dfd0] [c16d4888] relocate_init+0x98/0x160
[c192dff0] [c100045c] set_ivor+0x144/0x190
Instruction dump:
7c0803a6 4e800020 9421ffd0 7c0802a6 bee1000c 3f20c194 8139a45c 7c7a1b78
90010034 7c9b2378 3b80ffff 3ae00007 <83e90008> 3b00fff8 7fe9fa14 48000008
---[ end trace 41ed10ed80b8d831 ]---
Kernel panic - not syncing: Attempted to kill the idle task!


> diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
> index 048d716ae706..ce0c7d7db6c3 100644
> --- a/arch/powerpc/mm/fsl_booke_mmu.c
> +++ b/arch/powerpc/mm/fsl_booke_mmu.c
> @@ -171,11 +171,10 @@ unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
>  	return 1UL << camsize;
>  }
>  
> -unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
> +static unsigned long map_mem_in_cams_addr(phys_addr_t phys, unsigned long virt,
> +					unsigned long ram, int max_cam_idx)
>  {
>  	int i;
> -	unsigned long virt = PAGE_OFFSET;
> -	phys_addr_t phys = memstart_addr;
>  	unsigned long amount_mapped = 0;
>  
>  	/* Calculate CAM values */
> @@ -195,6 +194,14 @@ unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
>  	return amount_mapped;
>  }
>  
> +unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
> +{
> +	unsigned long virt = PAGE_OFFSET;
> +	phys_addr_t phys = memstart_addr;
> +
> +	return map_mem_in_cams_addr(phys, virt, ram, max_cam_idx);
> +}
> +
>  #ifdef CONFIG_PPC32
>  
>  #if defined(CONFIG_LOWMEM_CAM_NUM_BOOL) && (CONFIG_LOWMEM_CAM_NUM >= NUM_TLBCAMS)
> @@ -289,7 +296,11 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>  		is_second_reloc = 1;
>  		n = switch_to_as1();
>  		/* map a 64M area for the second relocation */
> -		map_mem_in_cams(0x4000000UL, CONFIG_LOWMEM_CAM_NUM);
> +		if (memstart_addr > start)
> +			map_mem_in_cams(0x4000000, CONFIG_LOWMEM_CAM_NUM);
> +		else
> +			map_mem_in_cams_addr(start, PAGE_OFFSET - offset,
> +					0x4000000, CONFIG_LOWMEM_CAM_NUM);
>  		restore_to_as0(n, offset, __va(dt_ptr));
>  		/* We should never reach here */
>  		panic("Relocation error");
> 

I'm having a hard time following the logic here.  What is PAGE_OFFSET -
offset supposed to be?  Why would we map anything below PAGE_OFFSET?

-Scott

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-04  0:49       ` Scott Wood
@ 2014-01-04  6:34         ` Kevin Hao
  2014-01-07 23:46           ` Scott Wood
  0 siblings, 1 reply; 17+ messages in thread
From: Kevin Hao @ 2014-01-04  6:34 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 5906 bytes --]

On Fri, Jan 03, 2014 at 06:49:09PM -0600, Scott Wood wrote:
> On Fri, 2013-12-20 at 15:43 +0800, Kevin Hao wrote:
> > On Wed, Dec 18, 2013 at 05:48:25PM -0600, Scott Wood wrote:
> > > On Wed, Aug 07, 2013 at 09:18:31AM +0800, Kevin Hao wrote:
> > > > This is based on the codes in the head_44x.S. The difference is that
> > > > the init tlb size we used is 64M. With this patch we can only load the
> > > > kernel at address between memstart_addr ~ memstart_addr + 64M. We will
> > > > fix this restriction in the following patches.
> > > 
> > > Which following patch fixes the restriction?  With all seven patches
> > > applied, I was still only successful booting within 64M of memstart_addr.
> > 
> > There is bug in this patch series when booting above the 64M. It seems
> > that I missed to test this previously. Sorry for that. With the following
> > change I can boot the kernel at 0x5000000.
> 
> I tried v4 and it still doesn't work for me over 64M (without increasing
> the start of memory).  I pulled the following out of the log buffer when
> booting at 0x5000000 (after cleaning up the binary goo -- is that
> something new?):
> 
> Unable to handle kernel paging request for data at address 0xbffe4008

Actually there is still one limitation: we have to make sure that the
kernel and dtb are in the 64M of memory mapped by the init tlb entry.
I booted the kernel successfully by using the following u-boot commands:
  setenv fdt_high 0xffffffff
  dhcp 6000000 128.224.162.196:/vlm-boards/p5020/uImage
  tftp 6f00000 128.224.162.196:/vlm-boards/p5020/p5020ds.dtb
  bootm 6000000 - 6f00000

> Faulting instruction address: 0xc16ee934
> Oops: Kernel access of bad area, sig: 11 [#1]
> SMP NR_CPUS=8
> Modules linked in:
> CPU: 0 PID: 0 Comm: swapper Not tainted 3.13.0-rc1-00065-g422752a #12
> task: c18f2340 ti: c192c000 task.ti: c192c000
> NIP: c16ee934 LR: c16d1860 CTR: c100e96c
> REGS: c192dee0 TRAP: 0300   Not tainted  (3.13.0-rc1-00065-g422752a)
> MSR: 00021002 <CE,ME>  CR: 42044022  XER: 20000000
> DEAR: bffe4008 ESR: 00000000 
> GPR00: c16d1860 c192df90 c18f2340 c16eec58 00000000 00000000 05000000 00000000 
> GPR08: 00000000 bffe4000 00000000 bc000000 00000000 0570ad40 ffffffff 7ffb0aec 
> GPR16: 00000000 00000000 7fe4dd70 00000000 7fe4ddb0 00000000 c192c000 00000007 
> GPR24: 00000000 c1940000 c16eec58 00000000 ffffffff 03fe4000 00000000 c18f0000
> NIP [c16ee934] of_scan_flat_dt+0x28/0x148
> LR [c16d1860] early_get_first_memblock_info+0x38/0x84
> Call Trace:
> [c192dfc0] [c16d1860] early_get_first_memblock_info+0x38/0x84
> [c192dfd0] [c16d4888] relocate_init+0x98/0x160
> [c192dff0] [c100045c] set_ivor+0x144/0x190
> Instruction dump:
> 7c0803a6 4e800020 9421ffd0 7c0802a6 bee1000c 3f20c194 8139a45c 7c7a1b78
> 90010034 7c9b2378 3b80ffff 3ae00007 <83e90008> 3b00fff8 7fe9fa14 48000008
> ---[ end trace 41ed10ed80b8d831 ]---
> Kernel panic - not syncing: Attempted to kill the idle task!
> 
> 
> > diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
> > index 048d716ae706..ce0c7d7db6c3 100644
> > --- a/arch/powerpc/mm/fsl_booke_mmu.c
> > +++ b/arch/powerpc/mm/fsl_booke_mmu.c
> > @@ -171,11 +171,10 @@ unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
> >  	return 1UL << camsize;
> >  }
> >  
> > -unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
> > +static unsigned long map_mem_in_cams_addr(phys_addr_t phys, unsigned long virt,
> > +					unsigned long ram, int max_cam_idx)
> >  {
> >  	int i;
> > -	unsigned long virt = PAGE_OFFSET;
> > -	phys_addr_t phys = memstart_addr;
> >  	unsigned long amount_mapped = 0;
> >  
> >  	/* Calculate CAM values */
> > @@ -195,6 +194,14 @@ unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
> >  	return amount_mapped;
> >  }
> >  
> > +unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
> > +{
> > +	unsigned long virt = PAGE_OFFSET;
> > +	phys_addr_t phys = memstart_addr;
> > +
> > +	return map_mem_in_cams_addr(phys, virt, ram, max_cam_idx);
> > +}
> > +
> >  #ifdef CONFIG_PPC32
> >  
> >  #if defined(CONFIG_LOWMEM_CAM_NUM_BOOL) && (CONFIG_LOWMEM_CAM_NUM >= NUM_TLBCAMS)
> > @@ -289,7 +296,11 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
> >  		is_second_reloc = 1;
> >  		n = switch_to_as1();
> >  		/* map a 64M area for the second relocation */
> > -		map_mem_in_cams(0x4000000UL, CONFIG_LOWMEM_CAM_NUM);
> > +		if (memstart_addr > start)
> > +			map_mem_in_cams(0x4000000, CONFIG_LOWMEM_CAM_NUM);
> > +		else
> > +			map_mem_in_cams_addr(start, PAGE_OFFSET - offset,
> > +					0x4000000, CONFIG_LOWMEM_CAM_NUM);
> >  		restore_to_as0(n, offset, __va(dt_ptr));
> >  		/* We should never reach here */
> >  		panic("Relocation error");
> > 
> 
> I'm having a hard time following the logic here.  What is PAGE_OFFSET -
> offset supposed to be?  Why would we map anything belowe PAGE_OFFSET?

No, we don't map the address below PAGE_OFFSET.
    memstart_addr is the physical start address of RAM.
    start is the kernel running physical address aligned with 64M.

    offset = memstart_addr - start

So if memstart_addr < start, the offset is negative. Then PAGE_OFFSET - offset
is the virtual start address we should use for the initial 64M map. It's above
PAGE_OFFSET, not below it.

For example, if we boot the kernel at 0x5000000:
    memstart_addr = 0x0
    start = 0x4000000
    offset = -0x4000000
    PAGE_OFFSET - offset = 0xc4000000.

Then we should create a 64M map for the virtual address from
0xc4000000 ~ 0xc8000000. This is the final virtual address that the kernel
symbols would use.

Thanks,
Kevin
> 
> -Scott
> 
> 

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-04  6:34         ` Kevin Hao
@ 2014-01-07 23:46           ` Scott Wood
  2014-01-08  2:42             ` Kevin Hao
  0 siblings, 1 reply; 17+ messages in thread
From: Scott Wood @ 2014-01-07 23:46 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Sat, 2014-01-04 at 14:34 +0800, Kevin Hao wrote:
> On Fri, Jan 03, 2014 at 06:49:09PM -0600, Scott Wood wrote:
> > On Fri, 2013-12-20 at 15:43 +0800, Kevin Hao wrote:
> > > On Wed, Dec 18, 2013 at 05:48:25PM -0600, Scott Wood wrote:
> > > > On Wed, Aug 07, 2013 at 09:18:31AM +0800, Kevin Hao wrote:
> > > > > This is based on the codes in the head_44x.S. The difference is that
> > > > > the init tlb size we used is 64M. With this patch we can only load the
> > > > > kernel at address between memstart_addr ~ memstart_addr + 64M. We will
> > > > > fix this restriction in the following patches.
> > > > 
> > > > Which following patch fixes the restriction?  With all seven patches
> > > > applied, I was still only successful booting within 64M of memstart_addr.
> > > 
> > > There is bug in this patch series when booting above the 64M. It seems
> > > that I missed to test this previously. Sorry for that. With the following
> > > change I can boot the kernel at 0x5000000.
> > 
> > I tried v4 and it still doesn't work for me over 64M (without increasing
> > the start of memory).  I pulled the following out of the log buffer when
> > booting at 0x5000000 (after cleaning up the binary goo -- is that
> > something new?):
> > 
> > Unable to handle kernel paging request for data at address 0xbffe4008
> 
> Actually there still have one limitation that we have to make sure
> that the kernel and dtb are in the 64M memory mapped by the init tlb entry.
> I booted the kernel successfully by using the following u-boot commands:
>   setenv fdt_high 0xffffffff
>   dhcp 6000000 128.224.162.196:/vlm-boards/p5020/uImage
>   tftp 6f00000 128.224.162.196:/vlm-boards/p5020/p5020ds.dtb
> >   bootm 6000000 - 6f00000

OK, that was it -- I hadn't set fdt_high and thus U-Boot was relocating
the fdt under 64M.

We should probably be using ioremap_prot() (or some other mechanism) to
create a special mapping, rather than assuming the fdt is covered by the
initial TLB entry.  That doesn't need to happen as part of this
patchset, of course, as it's not a new limitation.

> > I'm having a hard time following the logic here.  What is PAGE_OFFSET -
> > offset supposed to be?  Why would we map anything belowe PAGE_OFFSET?
> 
> No, we don't map the address below PAGE_OFFSET.
>     memstart_addr is the physical start address of RAM.
>     start is the kernel running physical address aligned with 64M.
> 
>     offset = memstart_addr - start
> 
> So if memstart_addr < start, the offset is negative. The PAGE_OFFSET - offset
> is the virtual start address we should use for the init 64M map. It's above
> the PAGE_OFFSET instead of below.

Oh.  I think it'd be more readable to do "offset = start -
memstart_addr" and add offset instead of subtracting it.

Also, offset should be phys_addr_t -- even if you don't expect to
support offsets greater than 4G on 32-bit, it's semantically the right
type to use.  Plus, "int" would break if this code were ever used with
64-bit.

If you're OK with these changes, I can fix it while applying.

-Scott

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-07 23:46           ` Scott Wood
@ 2014-01-08  2:42             ` Kevin Hao
  2014-01-08 21:46               ` Scott Wood
  2014-01-09  0:02               ` Scott Wood
  0 siblings, 2 replies; 17+ messages in thread
From: Kevin Hao @ 2014-01-08  2:42 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 2940 bytes --]

On Tue, Jan 07, 2014 at 05:46:04PM -0600, Scott Wood wrote:
> On Sat, 2014-01-04 at 14:34 +0800, Kevin Hao wrote:
> > Actually there still have one limitation that we have to make sure
> > that the kernel and dtb are in the 64M memory mapped by the init tlb entry.
> > I booted the kernel successfully by using the following u-boot commands:
> >   setenv fdt_high 0xffffffff
> >   dhcp 6000000 128.224.162.196:/vlm-boards/p5020/uImage
> >   tftp 6f00000 128.224.162.196:/vlm-boards/p5020/p5020ds.dtb
> > >   bootm 6000000 - 6f00000
> 
> OK, that was it -- I hadn't set fdt_high and thus U-Boot was relocating
> the fdt under 64M.
> 
> We should probably be using ioremap_prot() (or some other mechanism) to

It is too early to use ioremap_prot() for this case.

> create a special mapping, rather than assuming the fdt is covered by the
> initial TLB entry.  That doesn't need to happen as part of this
> patchset, of course, as it's not a new limitation.

In order to fix this limitation we would have to create a separate map for
the dtb if it is not covered by the init 64M tlb. I would like to give it
a try if I can get some time.

> 
> > > I'm having a hard time following the logic here.  What is PAGE_OFFSET -
> > > offset supposed to be?  Why would we map anything belowe PAGE_OFFSET?
> > 
> > No, we don't map the address below PAGE_OFFSET.
> >     memstart_addr is the physical start address of RAM.
> >     start is the kernel running physical address aligned with 64M.
> > 
> >     offset = memstart_addr - start
> > 
> > So if memstart_addr < start, the offset is negative. The PAGE_OFFSET - offset
> > is the virtual start address we should use for the init 64M map. It's above
> > the PAGE_OFFSET instead of below.
> 
> Oh.  I think it'd be more readable to do "offset = start -
> memstart_addr" and add offset instead of subtracting it.

Yes, I agree. The reason that I use "offset = memstart_addr - start" is that
it seems "memstart_addr" is always greater than "start" when we are booting
a kdump kernel with a kernel option like "crashkernel=64M@80M". :-)

> 
> Also, offset should be phys_addr_t -- even if you don't expect to
> support offsets greater than 4G on 32-bit, it's semantically the right
> type to use.  Plus, "int" would break if this code were ever used with
> 64-bit.

I thought about using phys_addr_t for the "offset" originally but gave it up
for the following reasons:
  * It will not be greater than 4G.
  * We would have to use the ugly #ifdef CONFIG_PHYS_64BIT in restore_to_as0().
  * It would need more registers for restore_to_as0()'s arguments.

Of course you can change it to phys_addr_t if you prefer.

Thanks,
Kevin
> 
> If you're OK with these changes, I can fix it while applying.
> 
> -Scott
> 
> 

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-08  2:42             ` Kevin Hao
@ 2014-01-08 21:46               ` Scott Wood
  2014-01-09  0:02               ` Scott Wood
  1 sibling, 0 replies; 17+ messages in thread
From: Scott Wood @ 2014-01-08 21:46 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Wed, 2014-01-08 at 10:42 +0800, Kevin Hao wrote:
> On Tue, Jan 07, 2014 at 05:46:04PM -0600, Scott Wood wrote:
> > On Sat, 2014-01-04 at 14:34 +0800, Kevin Hao wrote:
> > > > I'm having a hard time following the logic here.  What is PAGE_OFFSET -
> > > > offset supposed to be?  Why would we map anything belowe PAGE_OFFSET?
> > > 
> > > No, we don't map the address below PAGE_OFFSET.
> > >     memstart_addr is the physical start address of RAM.
> > >     start is the kernel running physical address aligned with 64M.
> > > 
> > >     offset = memstart_addr - start
> > > 
> > > So if memstart_addr < start, the offset is negative. The PAGE_OFFSET - offset
> > > is the virtual start address we should use for the init 64M map. It's above
> > > the PAGE_OFFSET instead of below.
> > 
> > Oh.  I think it'd be more readable to do "offset = start -
> > memstart_addr" and add offset instead of subtracting it.
> 
> Yes, I agree. The reason that I use "offset = memstart_addr - start" is that
> it seems "memstart_addr" is always greater than "start" when we are booting
> a kdump kernel with a kernel option like "crashkernel=64M@80M". :-)

...so there is a situation where you map below PAGE_OFFSET. :-)
 
> > Also, offset should be phys_addr_t -- even if you don't expect to
> > support offsets greater than 4G on 32-bit, it's semantically the right
> > type to use.  Plus, "int" would break if this code were ever used with
> > 64-bit.
> 
> I thought about using phy_addr_t for the "offset" originally but gave it up
> for the following reasons:
>   * It will not be greater than 4G.
>   * We have to use the ugly #ifdef CONFIG_PHYS_64BIT in restore_to_as0().
>   * Need more registers for arguments for restore_to_as0().
> 
> Of course you can change it to phys_addr_t if you prefer.

I'd at least like to make it "long".

-Scott

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-08  2:42             ` Kevin Hao
  2014-01-08 21:46               ` Scott Wood
@ 2014-01-09  0:02               ` Scott Wood
  2014-01-09  1:39                 ` Kevin Hao
  1 sibling, 1 reply; 17+ messages in thread
From: Scott Wood @ 2014-01-09  0:02 UTC (permalink / raw)
  To: Kevin Hao; +Cc: linuxppc

On Wed, Jan 08, 2014 at 10:42:35AM +0800, Kevin Hao wrote:
> On Tue, Jan 07, 2014 at 05:46:04PM -0600, Scott Wood wrote:
> > Oh.  I think it'd be more readable to do "offset = start -
> > memstart_addr" and add offset instead of subtracting it.
> 
> Yes, I agree. The reason that I use "offset = memstart_addr - start" is that
> it seems "memstart_addr" is always greater than "start" when we are booting
> a kdump kernel with a kernel option like "crashkernel=64M@80M". :-)
> 
> > 
> > Also, offset should be phys_addr_t -- even if you don't expect to
> > support offsets greater than 4G on 32-bit, it's semantically the right
> > type to use.  Plus, "int" would break if this code were ever used with
> > 64-bit.
> 
> I thought about using phy_addr_t for the "offset" originally but gave it up
> for the following reasons:
>   * It will not be greater than 4G.
>   * We have to use the ugly #ifdef CONFIG_PHYS_64BIT in restore_to_as0().
>   * Need more registers for arguments for restore_to_as0().
> 
> Of course you can change it to phys_addr_t if you prefer.

Here's the diff I made when applying (I also changed the subf in patch 9 to
add):

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 71e08df..b1f7edc 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1251,7 +1251,7 @@ _GLOBAL(switch_to_as1)
  * Restore to the address space 0 and also invalidate the tlb entry created
  * by switch_to_as1.
  * r3 - the tlb entry which should be invalidated
- * r4 - __pa(PAGE_OFFSET in AS0) - __pa(PAGE_OFFSET in AS1)
+ * r4 - __pa(PAGE_OFFSET in AS1) - __pa(PAGE_OFFSET in AS0)
  * r5 - device tree virtual address. If r4 is 0, r5 is ignored.
 */
 _GLOBAL(restore_to_as0)
@@ -1266,8 +1266,8 @@ _GLOBAL(restore_to_as0)
 	 * so we need calculate the right jump and device tree address based
 	 * on the offset passed by r4.
 	 */
-	subf	r9,r4,r9
-	subf	r5,r4,r5
+	add	r9,r9,r4
+	add	r5,r5,r4
 
 2:	mfmsr	r7
 	li	r8,(MSR_IS | MSR_DS)
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index ce0c7d7..95deb9fd 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -291,7 +291,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 	 * and do a second relocation.
 	 */
 	if (start != memstart_addr) {
-		int n, offset = memstart_addr - start;
+		int n;
+		long offset = start - memstart_addr;
 
 		is_second_reloc = 1;
 		n = switch_to_as1();
@@ -299,7 +300,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 		if (memstart_addr > start)
 			map_mem_in_cams(0x4000000, CONFIG_LOWMEM_CAM_NUM);
 		else
-			map_mem_in_cams_addr(start, PAGE_OFFSET - offset,
+			map_mem_in_cams_addr(start, PAGE_OFFSET + offset,
 					0x4000000, CONFIG_LOWMEM_CAM_NUM);
 		restore_to_as0(n, offset, __va(dt_ptr));
 		/* We should never reach here */

-Scott

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [v3, 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel
  2014-01-09  0:02               ` Scott Wood
@ 2014-01-09  1:39                 ` Kevin Hao
  0 siblings, 0 replies; 17+ messages in thread
From: Kevin Hao @ 2014-01-09  1:39 UTC (permalink / raw)
  To: Scott Wood; +Cc: linuxppc

[-- Attachment #1: Type: text/plain, Size: 1375 bytes --]

On Wed, Jan 08, 2014 at 06:02:19PM -0600, Scott Wood wrote:
> On Wed, Jan 08, 2014 at 10:42:35AM +0800, Kevin Hao wrote:
> > On Tue, Jan 07, 2014 at 05:46:04PM -0600, Scott Wood wrote:
> > > Oh.  I think it'd be more readable to do "offset = start -
> > > memstart_addr" and add offset instead of subtracting it.
> > 
> > Yes, I agree. The reason that I use "offset = memstart_addr - start" is that
> > it seems "memstart_addr" is always greater than "start" when we are booting
> > a kdump kernel with a kernel option like "crashkernel=64M@80M". :-)
> > 
> > > 
> > > Also, offset should be phys_addr_t -- even if you don't expect to
> > > support offsets greater than 4G on 32-bit, it's semantically the right
> > > type to use.  Plus, "int" would break if this code were ever used with
> > > 64-bit.
> > 
> > I thought about using phy_addr_t for the "offset" originally but gave it up
> > for the following reasons:
> >   * It will not be greater than 4G.
> >   * We have to use the ugly #ifdef CONFIG_PHYS_64BIT in restore_to_as0().
> >   * Need more registers for arguments for restore_to_as0().
> > 
> > Of course you can change it to phys_addr_t if you prefer.
> 
> Here's the diff I made when applying (also changed the subf in patch 9 to
> add)

Looks fine to me. I have also done a boot test and it works pretty well.
Thanks, Scott.

Kevin

[-- Attachment #2: Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2014-01-09  1:39 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-07  1:18 [PATCH v3 0/7] powerpc: enable the relocatable support for fsl booke 32bit kernel Kevin Hao
2013-08-07  1:18 ` [PATCH v3 1/7] powerpc/fsl_booke: protect the access to MAS7 Kevin Hao
2013-08-07  1:18 ` [PATCH v3 2/7] powerpc/fsl_booke: introduce get_phys_addr function Kevin Hao
2013-08-07  1:18 ` [PATCH v3 3/7] powerpc: enable the relocatable support for the fsl booke 32bit kernel Kevin Hao
2013-12-18 23:48   ` [v3, " Scott Wood
2013-12-20  7:43     ` Kevin Hao
2014-01-04  0:49       ` Scott Wood
2014-01-04  6:34         ` Kevin Hao
2014-01-07 23:46           ` Scott Wood
2014-01-08  2:42             ` Kevin Hao
2014-01-08 21:46               ` Scott Wood
2014-01-09  0:02               ` Scott Wood
2014-01-09  1:39                 ` Kevin Hao
2013-08-07  1:18 ` [PATCH v3 4/7] powerpc/fsl_booke: set the tlb entry for the kernel address in AS1 Kevin Hao
2013-08-07  1:18 ` [PATCH v3 5/7] powerpc: introduce early_get_first_memblock_info Kevin Hao
2013-08-07  1:18 ` [PATCH v3 6/7] powerpc/fsl_booke: make sure PAGE_OFFSET map to memstart_addr for relocatable kernel Kevin Hao
2013-08-07  1:18 ` [PATCH v3 7/7] powerpc/fsl_booke: enable the relocatable for the kdump kernel Kevin Hao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).