* [v3][PATCH 0/8] powerpc/book3e: support kexec and kdump
@ 2013-07-09  8:03 ` Tiejun Chen
  0 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

This patch series adds kexec and kdump support on book3e.

Tested on fsl-p5040 DS.

v3:

* add one patch to rename interrupt_end_book3e to __end_interrupts
  so that book3e and book3s share a unique label.
* add some comments for "book3e/kexec/kdump: enable kexec for kernel"
* clean "book3e/kexec/kdump: introduce a kexec kernel flag"

v2:
* rebase on merge branch

v1:
* improve some patch headers
* rebase on next branch with patch 7

----------------------------------------------------------------
Tiejun Chen (8):
      powerpc/book3e: rename interrupt_end_book3e with __end_interrupts
      powerpc/book3e: support CONFIG_RELOCATABLE
      book3e/kexec/kdump: enable kexec for kernel
      book3e/kexec/kdump: create a 1:1 TLB mapping
      book3e/kexec/kdump: introduce a kexec kernel flag
      book3e/kexec/kdump: implement ppc64 kexec specific
      book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
      book3e/kexec/kdump: recover "r4 = 0" to create the initial TLB

 arch/powerpc/Kconfig                     |    2 +-
 arch/powerpc/include/asm/exception-64e.h |   11 +++
 arch/powerpc/include/asm/page.h          |    2 +
 arch/powerpc/include/asm/smp.h           |    1 +
 arch/powerpc/kernel/exceptions-64e.S     |   26 +++++-
 arch/powerpc/kernel/head_64.S            |   48 +++++++++-
 arch/powerpc/kernel/machine_kexec_64.c   |  148 +++++++++++++++++-------------
 arch/powerpc/kernel/misc_64.S            |   67 +++++++++++++-
 arch/powerpc/platforms/85xx/smp.c        |   33 ++++++-
 9 files changed, 257 insertions(+), 81 deletions(-)


Tiejun



* [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with __end_interrupts
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

Rename 'interrupt_end_book3e' to '__end_interrupts' so that book3s
and book3e share a single label marking the end of the interrupt
vectors, which later patches can then use on both.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/kernel/exceptions-64e.S |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 645170a..a518e48 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -309,8 +309,8 @@ interrupt_base_book3e:					/* fake trap */
 	EXCEPTION_STUB(0x300, hypercall)
 	EXCEPTION_STUB(0x320, ehpriv)
 
-	.globl interrupt_end_book3e
-interrupt_end_book3e:
+	.globl __end_interrupts
+__end_interrupts:
 
 /* Critical Input Interrupt */
 	START_EXCEPTION(critical_input);
@@ -493,7 +493,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	beq+	1f
 
 	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
-	LOAD_REG_IMMEDIATE(r15,interrupt_end_book3e)
+	LOAD_REG_IMMEDIATE(r15,__end_interrupts)
 	cmpld	cr0,r10,r14
 	cmpld	cr1,r10,r15
 	blt+	cr0,1f
@@ -559,7 +559,7 @@ kernel_dbg_exc:
 	beq+	1f
 
 	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
-	LOAD_REG_IMMEDIATE(r15,interrupt_end_book3e)
+	LOAD_REG_IMMEDIATE(r15,__end_interrupts)
 	cmpld	cr0,r10,r14
 	cmpld	cr1,r10,r15
 	blt+	cr0,1f
-- 
1.7.9.5




* [v3][PATCH 2/8] powerpc/book3e: support CONFIG_RELOCATABLE
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

book3e differs from book3s here: book3s includes the exception vector
code in head_64.S because that code relies on absolute addressing,
which is only possible within a single compilation unit. On book3e we
have to fetch the label address via the GOT instead.

Also, when booting a relocated kernel, we must reset IVPR properly
again after .relocate.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/include/asm/exception-64e.h |   11 +++++++++++
 arch/powerpc/kernel/exceptions-64e.S     |   18 +++++++++++++++++-
 arch/powerpc/kernel/head_64.S            |   25 +++++++++++++++++++++++++
 3 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
index 51fa43e..371a77f 100644
--- a/arch/powerpc/include/asm/exception-64e.h
+++ b/arch/powerpc/include/asm/exception-64e.h
@@ -214,10 +214,21 @@ exc_##label##_book3e:
 #define TLB_MISS_STATS_SAVE_INFO_BOLTED
 #endif
 
+#ifndef CONFIG_RELOCATABLE
 #define SET_IVOR(vector_number, vector_offset)	\
 	li	r3,vector_offset@l; 		\
 	ori	r3,r3,interrupt_base_book3e@l;	\
 	mtspr	SPRN_IVOR##vector_number,r3;
+#else /* !CONFIG_RELOCATABLE */
+/* In the relocatable case a constant expression yields only an offset,
+ * so load the runtime address of the label instead.
+ */
+#define SET_IVOR(vector_number, vector_offset)	\
+	LOAD_REG_ADDR(r3,interrupt_base_book3e);\
+	rlwinm	r3,r3,0,15,0;			\
+	ori	r3,r3,vector_offset@l;		\
+	mtspr	SPRN_IVOR##vector_number,r3;
+#endif /* CONFIG_RELOCATABLE */
 
 #endif /* _ASM_POWERPC_EXCEPTION_64E_H */
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index a518e48..be3b4b1 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1097,7 +1097,15 @@ skpinv:	addi	r6,r6,1				/* Increment */
  * r4 = MAS0 w/TLBSEL & ESEL for the temp mapping
  */
 	/* Now we branch the new virtual address mapped by this entry */
+#ifdef CONFIG_RELOCATABLE
+	/* Derive our runtime address from the LR. */
+	bl	1f		/* Find our address */
+1:	mflr	r6
+	addi	r6,r6,(2f - 1b)
+	tovirt(r6,r6)
+#else
 	LOAD_REG_IMMEDIATE(r6,2f)
+#endif
 	lis	r7,MSR_KERNEL@h
 	ori	r7,r7,MSR_KERNEL@l
 	mtspr	SPRN_SRR0,r6
@@ -1348,9 +1356,17 @@ _GLOBAL(book3e_secondary_thread_init)
 	mflr	r28
 	b	3b
 
-_STATIC(init_core_book3e)
+_GLOBAL(init_core_book3e)
 	/* Establish the interrupt vector base */
+#ifdef CONFIG_RELOCATABLE
+/* In the relocatable case a constant expression yields only an offset,
+ * so load the runtime address of the label instead.
+ */
+	tovirt(r2,r2)
+	LOAD_REG_ADDR(r3, interrupt_base_book3e)
+#else
 	LOAD_REG_IMMEDIATE(r3, interrupt_base_book3e)
+#endif
 	mtspr	SPRN_IVPR,r3
 	sync
 	blr
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index b61363d..550f8fb 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -414,12 +414,25 @@ _STATIC(__after_prom_start)
 	/* process relocations for the final address of the kernel */
 	lis	r25,PAGE_OFFSET@highest	/* compute virtual base of kernel */
 	sldi	r25,r25,32
+#if defined(CONFIG_PPC_BOOK3E)
+	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
+#endif
 	lwz	r7,__run_at_load-_stext(r26)
+#if defined(CONFIG_PPC_BOOK3E)
+	tophys(r26,r26)			/* Restore the physical address for later use. */
+#endif
 	cmplwi	cr0,r7,1	/* flagged to stay where we are ? */
 	bne	1f
 	add	r25,r25,r26
 1:	mr	r3,r25
 	bl	.relocate
+#if defined(CONFIG_PPC_BOOK3E)
+	/* In the relocatable case we must load the runtime address of the
+	 * vector base to set IVPR, so after .relocate we update IVPR with
+	 * the current address.
+	 */
+	bl	.init_core_book3e
+#endif
 #endif
 
 /*
@@ -447,12 +460,24 @@ _STATIC(__after_prom_start)
  * variable __run_at_load, if it is set the kernel is treated as relocatable
  * kernel, otherwise it will be moved to PHYSICAL_START
  */
+#if defined(CONFIG_PPC_BOOK3E)
+	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
+#endif
 	lwz	r7,__run_at_load-_stext(r26)
+#if defined(CONFIG_PPC_BOOK3E)
+	tophys(r26,r26)			/* Restore the physical address for later use. */
+#endif
 	cmplwi	cr0,r7,1
 	bne	3f
 
+#ifdef CONFIG_PPC_BOOK3E
+	LOAD_REG_ADDR(r5, __end_interrupts)
+	LOAD_REG_ADDR(r11, _stext)
+	sub	r5,r5,r11
+#else
 	/* just copy interrupts */
 	LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
+#endif
 	b	5f
 3:
 #endif
-- 
1.7.9.5




* [v3][PATCH 3/8] book3e/kexec/kdump: enable kexec for kernel
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

Enable KEXEC for book3e, and bypass or convert the non-book3e code in
the kexec paths.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/Kconfig                   |    2 +-
 arch/powerpc/kernel/machine_kexec_64.c |  148 ++++++++++++++++++--------------
 arch/powerpc/kernel/misc_64.S          |    6 ++
 3 files changed, 89 insertions(+), 67 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5374776..d945435 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -357,7 +357,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
 	bool "kexec system call"
-	depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
+	depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP)) || PPC_BOOK3E
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 611acdf..ee153a8 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -30,72 +30,6 @@
 #include <asm/smp.h>
 #include <asm/hw_breakpoint.h>
 
-int default_machine_kexec_prepare(struct kimage *image)
-{
-	int i;
-	unsigned long begin, end;	/* limits of segment */
-	unsigned long low, high;	/* limits of blocked memory range */
-	struct device_node *node;
-	const unsigned long *basep;
-	const unsigned int *sizep;
-
-	if (!ppc_md.hpte_clear_all)
-		return -ENOENT;
-
-	/*
-	 * Since we use the kernel fault handlers and paging code to
-	 * handle the virtual mode, we must make sure no destination
-	 * overlaps kernel static data or bss.
-	 */
-	for (i = 0; i < image->nr_segments; i++)
-		if (image->segment[i].mem < __pa(_end))
-			return -ETXTBSY;
-
-	/*
-	 * For non-LPAR, we absolutely can not overwrite the mmu hash
-	 * table, since we are still using the bolted entries in it to
-	 * do the copy.  Check that here.
-	 *
-	 * It is safe if the end is below the start of the blocked
-	 * region (end <= low), or if the beginning is after the
-	 * end of the blocked region (begin >= high).  Use the
-	 * boolean identity !(a || b)  === (!a && !b).
-	 */
-	if (htab_address) {
-		low = __pa(htab_address);
-		high = low + htab_size_bytes;
-
-		for (i = 0; i < image->nr_segments; i++) {
-			begin = image->segment[i].mem;
-			end = begin + image->segment[i].memsz;
-
-			if ((begin < high) && (end > low))
-				return -ETXTBSY;
-		}
-	}
-
-	/* We also should not overwrite the tce tables */
-	for_each_node_by_type(node, "pci") {
-		basep = of_get_property(node, "linux,tce-base", NULL);
-		sizep = of_get_property(node, "linux,tce-size", NULL);
-		if (basep == NULL || sizep == NULL)
-			continue;
-
-		low = *basep;
-		high = low + (*sizep);
-
-		for (i = 0; i < image->nr_segments; i++) {
-			begin = image->segment[i].mem;
-			end = begin + image->segment[i].memsz;
-
-			if ((begin < high) && (end > low))
-				return -ETXTBSY;
-		}
-	}
-
-	return 0;
-}
-
 #define IND_FLAGS (IND_DESTINATION | IND_INDIRECTION | IND_DONE | IND_SOURCE)
 
 static void copy_segments(unsigned long ind)
@@ -367,6 +301,87 @@ void default_machine_kexec(struct kimage *image)
 	/* NOTREACHED */
 }
 
+#ifdef CONFIG_PPC_BOOK3E
+int default_machine_kexec_prepare(struct kimage *image)
+{
+	int i;
+	/*
+	 * Since we use the kernel fault handlers and paging code to
+	 * handle the virtual mode, we must make sure no destination
+	 * overlaps kernel static data or bss.
+	 */
+	for (i = 0; i < image->nr_segments; i++)
+		if (image->segment[i].mem < __pa(_end))
+			return -ETXTBSY;
+	return 0;
+}
+#else /* CONFIG_PPC_BOOK3E */
+int default_machine_kexec_prepare(struct kimage *image)
+{
+	int i;
+	unsigned long begin, end;	/* limits of segment */
+	unsigned long low, high;	/* limits of blocked memory range */
+	struct device_node *node;
+	const unsigned long *basep;
+	const unsigned int *sizep;
+
+	if (!ppc_md.hpte_clear_all)
+		return -ENOENT;
+
+	/*
+	 * Since we use the kernel fault handlers and paging code to
+	 * handle the virtual mode, we must make sure no destination
+	 * overlaps kernel static data or bss.
+	 */
+	for (i = 0; i < image->nr_segments; i++)
+		if (image->segment[i].mem < __pa(_end))
+			return -ETXTBSY;
+
+	/*
+	 * For non-LPAR, we absolutely can not overwrite the mmu hash
+	 * table, since we are still using the bolted entries in it to
+	 * do the copy.  Check that here.
+	 *
+	 * It is safe if the end is below the start of the blocked
+	 * region (end <= low), or if the beginning is after the
+	 * end of the blocked region (begin >= high).  Use the
+	 * boolean identity !(a || b)  === (!a && !b).
+	 */
+	if (htab_address) {
+		low = __pa(htab_address);
+		high = low + htab_size_bytes;
+
+		for (i = 0; i < image->nr_segments; i++) {
+			begin = image->segment[i].mem;
+			end = begin + image->segment[i].memsz;
+
+			if ((begin < high) && (end > low))
+				return -ETXTBSY;
+		}
+	}
+
+	/* We also should not overwrite the tce tables */
+	for_each_node_by_type(node, "pci") {
+		basep = of_get_property(node, "linux,tce-base", NULL);
+		sizep = of_get_property(node, "linux,tce-size", NULL);
+		if (basep == NULL || sizep == NULL)
+			continue;
+
+		low = *basep;
+		high = low + (*sizep);
+
+		for (i = 0; i < image->nr_segments; i++) {
+			begin = image->segment[i].mem;
+			end = begin + image->segment[i].memsz;
+
+			if ((begin < high) && (end > low))
+				return -ETXTBSY;
+		}
+	}
+
+	return 0;
+}
+
 /* Values we need to export to the second kernel via the device tree. */
 static unsigned long htab_base;
 
@@ -411,3 +426,4 @@ static int __init export_htab_values(void)
 	return 0;
 }
 late_initcall(export_htab_values);
+#endif /* !CONFIG_PPC_BOOK3E */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index 6820e45..f1a7ce7 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -543,9 +543,13 @@ _GLOBAL(kexec_sequence)
 	lhz	r25,PACAHWCPUID(r13)	/* get our phys cpu from paca */
 
 	/* disable interrupts, we are overwriting kernel data next */
+#ifndef CONFIG_PPC_BOOK3E
 	mfmsr	r3
 	rlwinm	r3,r3,0,17,15
 	mtmsrd	r3,1
+#else
+	wrteei	0
+#endif
 
 	/* copy dest pages, flush whole dest image */
 	mr	r3,r29
@@ -567,10 +571,12 @@ _GLOBAL(kexec_sequence)
 	li	r6,1
 	stw	r6,kexec_flag-1b(5)
 
+#ifndef CONFIG_PPC_BOOK3E
 	/* clear out hardware hash page table and tlb */
 	ld	r5,0(r27)		/* deref function descriptor */
 	mtctr	r5
 	bctrl				/* ppc_md.hpte_clear_all(void); */
+#endif
 
 /*
  *   kexec image calling is:
-- 
1.7.9.5




* [v3][PATCH 4/8] book3e/kexec/kdump: create a 1:1 TLB mapping
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

book3e has no real MMU mode, so we have to create a 1:1 TLB mapping to
make sure we can access the real physical address, and fix up a few
places to support this pseudo real mode on book3e.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/kernel/head_64.S |    9 ++++---
 arch/powerpc/kernel/misc_64.S |   55 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 550f8fb..7dc56be 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -447,12 +447,12 @@ _STATIC(__after_prom_start)
 	tovirt(r3,r3)			/* on booke, we already run at PAGE_OFFSET */
 #endif
 	mr.	r4,r26			/* In some cases the loader may  */
+#if defined(CONFIG_PPC_BOOK3E)
+	tovirt(r4,r4)
+#endif
 	beq	9f			/* have already put us at zero */
 	li	r6,0x100		/* Start offset, the first 0x100 */
 					/* bytes were copied earlier.	 */
-#ifdef CONFIG_PPC_BOOK3E
-	tovirt(r6,r6)			/* on booke, we already run at PAGE_OFFSET */
-#endif
 
 #ifdef CONFIG_RELOCATABLE
 /*
@@ -495,6 +495,9 @@ _STATIC(__after_prom_start)
 p_end:	.llong	_end - _stext
 
 4:	/* Now copy the rest of the kernel up to _end */
+#if defined(CONFIG_PPC_BOOK3E)
+	tovirt(r26,r26)
+#endif
 	addis	r5,r26,(p_end - _stext)@ha
 	ld	r5,(p_end - _stext)@l(r5)	/* get _end */
 5:	bl	.copy_and_flush		/* copy the rest */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index f1a7ce7..20cbb98 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -460,6 +460,49 @@ kexec_flag:
 
 
 #ifdef CONFIG_KEXEC
+#ifdef CONFIG_PPC_BOOK3E
+/* BOOK3E has no real MMU mode, so we have to set up an initial TLB
+ * entry for a core to map v:0 to p:0 as 1:1. The current implementation
+ * assumes that 1GB is enough for kexec.
+ */
+#include <asm/mmu.h>
+kexec_create_tlb:
+	/* Invalidate all TLBs to avoid any TLB conflict. */
+	PPC_TLBILX_ALL(0,R0)
+	sync
+	isync
+
+	mfspr	r10,SPRN_TLB1CFG
+	andi.	r10,r10,TLBnCFG_N_ENTRY	/* Extract # entries */
+	subi	r10,r10,1		/* It is usually safe to use the last entry */
+	lis	r9,MAS0_TLBSEL(1)@h
+	rlwimi	r9,r10,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r9) */
+
+/* Setup a temp mapping v:0 to p:0 as 1:1 and return to it.
+ */
+#ifdef CONFIG_SMP
+#define M_IF_SMP	MAS2_M
+#else
+#define M_IF_SMP	0
+#endif
+	mtspr	SPRN_MAS0,r9
+
+	lis	r9,(MAS1_VALID|MAS1_IPROT)@h
+	ori	r9,r9,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
+	mtspr	SPRN_MAS1,r9
+
+	LOAD_REG_IMMEDIATE(r9, 0x0 | M_IF_SMP)
+	mtspr	SPRN_MAS2,r9
+
+	LOAD_REG_IMMEDIATE(r9, 0x0 | MAS3_SR | MAS3_SW | MAS3_SX)
+	mtspr	SPRN_MAS3,r9
+	li	r9,0
+	mtspr	SPRN_MAS7,r9
+
+	tlbwe
+	isync
+	blr
+#endif
 
 /* kexec_smp_wait(void)
  *
@@ -473,6 +516,10 @@ kexec_flag:
  */
 _GLOBAL(kexec_smp_wait)
 	lhz	r3,PACAHWCPUID(r13)
+#ifdef CONFIG_PPC_BOOK3E
+	/* Create a 1:1 mapping. */
+	bl	kexec_create_tlb
+#endif
 	bl	real_mode
 
 	li	r4,KEXEC_STATE_REAL_MODE
@@ -489,6 +536,7 @@ _GLOBAL(kexec_smp_wait)
  * don't overwrite r3 here, it is live for kexec_wait above.
  */
 real_mode:	/* assume normal blr return */
+#ifndef CONFIG_PPC_BOOK3E
 1:	li	r9,MSR_RI
 	li	r10,MSR_DR|MSR_IR
 	mflr	r11		/* return address to SRR0 */
@@ -500,7 +548,10 @@ real_mode:	/* assume normal blr return */
 	mtspr	SPRN_SRR1,r10
 	mtspr	SPRN_SRR0,r11
 	rfid
-
+#else
+	/* the real mode is nothing for book3e. */
+	blr
+#endif
 
 /*
  * kexec_sequence(newstack, start, image, control, clear_all())
@@ -549,6 +600,8 @@ _GLOBAL(kexec_sequence)
 	mtmsrd	r3,1
 #else
 	wrteei	0
+	/* Create a 1:1 mapping. */
+	bl	kexec_create_tlb
 #endif
 
 	/* copy dest pages, flush whole dest image */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [v3][PATCH 5/8] book3e/kexec/kdump: introduce a kexec kernel flag
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

We need to introduce a flag to indicate that we're already running
a kexec kernel so that we can take the proper path. For example, we
shouldn't access the spin_table from the bootloader to bring up any
secondary cpu for a kexec kernel, since the kexec kernel already
knows how to jump to generic_secondary_smp_init.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/include/asm/smp.h    |    1 +
 arch/powerpc/kernel/head_64.S     |   10 ++++++++++
 arch/powerpc/kernel/misc_64.S     |    6 ++++++
 arch/powerpc/platforms/85xx/smp.c |   20 +++++++++++++++-----
 4 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
index ffbaabe..59165a3 100644
--- a/arch/powerpc/include/asm/smp.h
+++ b/arch/powerpc/include/asm/smp.h
@@ -200,6 +200,7 @@ extern void generic_secondary_thread_init(void);
 extern unsigned long __secondary_hold_spinloop;
 extern unsigned long __secondary_hold_acknowledge;
 extern char __secondary_hold;
+extern unsigned long __run_at_kexec;
 
 extern void __early_start(void);
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 7dc56be..0b46c9d 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -89,6 +89,10 @@ __secondary_hold_spinloop:
 __secondary_hold_acknowledge:
 	.llong	0x0
 
+	.globl	__run_at_kexec
+__run_at_kexec:
+	.llong	0x0	/* Flag for the secondary kernel from kexec. */
+
 #ifdef CONFIG_RELOCATABLE
 	/* This flag is set to 1 by a loader if the kernel should run
 	 * at the loaded address instead of the linked address.  This
@@ -417,6 +421,12 @@ _STATIC(__after_prom_start)
 #if defined(CONFIG_PPC_BOOK3E)
 	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
 #endif
+#if defined(CONFIG_KEXEC) || defined(CONFIG_CRASH_DUMP)
+	/* If relocated we need to restore this flag on that relocated address. */
+	ld	r7,__run_at_kexec-_stext(r26)
+	std	r7,__run_at_kexec-_stext(r26)
+#endif
+
 	lwz	r7,__run_at_load-_stext(r26)
 #if defined(CONFIG_PPC_BOOK3E)
 	tophys(r26,r26)			/* Restore for the remains. */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index 20cbb98..c89aead 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -619,6 +619,12 @@ _GLOBAL(kexec_sequence)
 	bl	.copy_and_flush	/* (dest, src, copy limit, start offset) */
 1:	/* assume normal blr return */
 
+	/* notify we're going into kexec kernel for SMP. */
+	LOAD_REG_ADDR(r3,__run_at_kexec)
+	li	r4,1
+	std	r4,0(r3)
+	sync
+
 	/* release other cpus to the new kernel secondary start at 0x60 */
 	mflr	r5
 	li	r6,1
diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
index 5ced4f5..14d461b 100644
--- a/arch/powerpc/platforms/85xx/smp.c
+++ b/arch/powerpc/platforms/85xx/smp.c
@@ -150,6 +150,9 @@ static int smp_85xx_kick_cpu(int nr)
 	int hw_cpu = get_hard_smp_processor_id(nr);
 	int ioremappable;
 	int ret = 0;
+#ifdef CONFIG_PPC64
+	unsigned long *ptr = NULL;
+#endif
 
 	WARN_ON(nr < 0 || nr >= NR_CPUS);
 	WARN_ON(hw_cpu < 0 || hw_cpu >= NR_CPUS);
@@ -238,11 +241,18 @@ out:
 #else
 	smp_generic_kick_cpu(nr);
 
-	flush_spin_table(spin_table);
-	out_be32(&spin_table->pir, hw_cpu);
-	out_be64((u64 *)(&spin_table->addr_h),
-	  __pa((u64)*((unsigned long long *)generic_secondary_smp_init)));
-	flush_spin_table(spin_table);
+	ptr  = (unsigned long *)((unsigned long)&__run_at_kexec);
+	/* We shouldn't access the spin_table from the bootloader to bring
+	 * up any secondary cpu for a kexec kernel; the kexec kernel already
+	 * knows how to jump to generic_secondary_smp_init.
+	 */
+	if (!*ptr) {
+		flush_spin_table(spin_table);
+		out_be32(&spin_table->pir, hw_cpu);
+		out_be64((u64 *)(&spin_table->addr_h),
+		 __pa((u64)*((unsigned long long *)generic_secondary_smp_init)));
+		flush_spin_table(spin_table);
+	}
 #endif
 
 	local_irq_restore(flags);
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [v3][PATCH 6/8] book3e/kexec/kdump: implement ppc64 kexec specific
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

The ppc64 kexec mechanism has a different implementation from ppc32.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/platforms/85xx/smp.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
index 14d461b..d862808 100644
--- a/arch/powerpc/platforms/85xx/smp.c
+++ b/arch/powerpc/platforms/85xx/smp.c
@@ -276,6 +276,7 @@ struct smp_ops_t smp_85xx_ops = {
 };
 
 #ifdef CONFIG_KEXEC
+#ifdef CONFIG_PPC32
 atomic_t kexec_down_cpus = ATOMIC_INIT(0);
 
 void mpc85xx_smp_kexec_cpu_down(int crash_shutdown, int secondary)
@@ -294,6 +295,14 @@ static void mpc85xx_smp_kexec_down(void *arg)
 	if (ppc_md.kexec_cpu_down)
 		ppc_md.kexec_cpu_down(0,1);
 }
+#else
+void mpc85xx_smp_kexec_cpu_down(int crash_shutdown, int secondary)
+{
+	local_irq_disable();
+	hard_irq_disable();
+	mpic_teardown_this_cpu(secondary);
+}
+#endif
 
 static void map_and_flush(unsigned long paddr)
 {
@@ -345,11 +354,14 @@ static void mpc85xx_smp_flush_dcache_kexec(struct kimage *image)
 
 static void mpc85xx_smp_machine_kexec(struct kimage *image)
 {
+#ifdef CONFIG_PPC32
 	int timeout = INT_MAX;
 	int i, num_cpus = num_present_cpus();
+#endif
 
 	mpc85xx_smp_flush_dcache_kexec(image);
 
+#ifdef CONFIG_PPC32
 	if (image->type == KEXEC_TYPE_DEFAULT)
 		smp_call_function(mpc85xx_smp_kexec_down, NULL, 0);
 
@@ -367,6 +379,7 @@ static void mpc85xx_smp_machine_kexec(struct kimage *image)
 		if ( i == smp_processor_id() ) continue;
 		mpic_reset_core(i);
 	}
+#endif
 
 	default_machine_kexec(image);
 }
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

Book3e always creates its TLB entry aligned to 1GB, so we should
use (KERNELBASE - MEMORY_START) as VIRT_PHYS_OFFSET so that
__pa()/__va() work properly while booting a kdump kernel.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/include/asm/page.h |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 988c812..5b00081 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -112,6 +112,8 @@ extern long long virt_phys_offset;
 /* See Description below for VIRT_PHYS_OFFSET */
 #ifdef CONFIG_RELOCATABLE_PPC32
 #define VIRT_PHYS_OFFSET virt_phys_offset
+#elif defined(CONFIG_PPC_BOOK3E_64)
+#define VIRT_PHYS_OFFSET (KERNELBASE - MEMORY_START)
 #else
 #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
 #endif
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [v3][PATCH 8/8] book3e/kexec/kdump: recover "r4 = 0" to create the initial TLB
  2013-07-09  8:03 ` Tiejun Chen
@ 2013-07-09  8:03   ` Tiejun Chen
  -1 siblings, 0 replies; 42+ messages in thread
From: Tiejun Chen @ 2013-07-09  8:03 UTC (permalink / raw)
  To: benh; +Cc: linuxppc-dev, linux-kernel

Commit 96f013f, "powerpc/kexec: Add kexec "hold" support for Book3e
processors", requires that GPR4 survive the "hold" process, for IBM Blue
Gene/Q with some very strange firmware. But on FSL Book3E, r4 = 1
indicates that the initial TLB entry for this core already exists, so
we should still set r4 to 0 to create that initial TLB entry.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
---
 arch/powerpc/kernel/head_64.S |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 0b46c9d..d546c5e 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -127,6 +127,10 @@ __secondary_hold:
 	/* Grab our physical cpu number */
 	mr	r24,r3
 	/* stash r4 for book3e */
+#ifdef CONFIG_PPC_FSL_BOOK3E
+	/* we need to setup initial TLB entry. */
+	li	r4,0
+#endif
 	mr	r25,r4
 
 	/* Tell the master cpu we're here */
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* RE: [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with __end_interrupts
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-07-10  5:17     ` Bhushan Bharat-R65777
  -1 siblings, 0 replies; 42+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-10  5:17 UTC (permalink / raw)
  To: Tiejun Chen, benh; +Cc: linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Linuxppc-dev [mailto:linuxppc-dev-
> bounces+bharat.bhushan=freescale.com@lists.ozlabs.org] On Behalf Of Tiejun Chen
> Sent: Tuesday, July 09, 2013 1:33 PM
> To: benh@kernel.crashing.org
> Cc: linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with
> __end_interrupts
> 
> We can rename 'interrupt_end_book3e' with '__end_interrupts' then book3s/book3e
> can share this unique label to make sure we can use this conveniently.

I think we can be consistent with start and end names, no?

-Bharat

> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/kernel/exceptions-64e.S |    8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/exceptions-64e.S
> b/arch/powerpc/kernel/exceptions-64e.S
> index 645170a..a518e48 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -309,8 +309,8 @@ interrupt_base_book3e:					/* fake
> trap */
>  	EXCEPTION_STUB(0x300, hypercall)
>  	EXCEPTION_STUB(0x320, ehpriv)
> 
> -	.globl interrupt_end_book3e
> -interrupt_end_book3e:
> +	.globl __end_interrupts
> +__end_interrupts:
> 
>  /* Critical Input Interrupt */
>  	START_EXCEPTION(critical_input);
> @@ -493,7 +493,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>  	beq+	1f
> 
>  	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
> -	LOAD_REG_IMMEDIATE(r15,interrupt_end_book3e)
> +	LOAD_REG_IMMEDIATE(r15,__end_interrupts)
>  	cmpld	cr0,r10,r14
>  	cmpld	cr1,r10,r15
>  	blt+	cr0,1f
> @@ -559,7 +559,7 @@ kernel_dbg_exc:
>  	beq+	1f
> 
>  	LOAD_REG_IMMEDIATE(r14,interrupt_base_book3e)
> -	LOAD_REG_IMMEDIATE(r15,interrupt_end_book3e)
> +	LOAD_REG_IMMEDIATE(r15,__end_interrupts)
>  	cmpld	cr0,r10,r14
>  	cmpld	cr1,r10,r15
>  	blt+	cr0,1f
> --
> 1.7.9.5
> 
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev



^ permalink raw reply	[flat|nested] 42+ messages in thread

* RE: [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-07-10  5:20     ` Bhushan Bharat-R65777
  -1 siblings, 0 replies; 42+ messages in thread
From: Bhushan Bharat-R65777 @ 2013-07-10  5:20 UTC (permalink / raw)
  To: Tiejun Chen, benh, Wood Scott-B07421; +Cc: linuxppc-dev, linux-kernel



> -----Original Message-----
> From: Linuxppc-dev [mailto:linuxppc-dev-
> bounces+bharat.bhushan=freescale.com@lists.ozlabs.org] On Behalf Of Tiejun Chen
> Sent: Tuesday, July 09, 2013 1:33 PM
> To: benh@kernel.crashing.org
> Cc: linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
> Subject: [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
> 
> Book3e is always aligned 1GB to create TLB so we should
> use (KERNELBASE - MEMORY_START) as VIRT_PHYS_OFFSET to
> get __pa/__va properly while boot kdump.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/include/asm/page.h |    2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 988c812..5b00081 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -112,6 +112,8 @@ extern long long virt_phys_offset;
>  /* See Description below for VIRT_PHYS_OFFSET */
>  #ifdef CONFIG_RELOCATABLE_PPC32
>  #define VIRT_PHYS_OFFSET virt_phys_offset
> +#elif defined(CONFIG_PPC_BOOK3E_64)
> +#define VIRT_PHYS_OFFSET (KERNELBASE - MEMORY_START)

Can you please explain this code a bit more. I am not understanding this part:)

-Bharat

>  #else
>  #define VIRT_PHYS_OFFSET (KERNELBASE - PHYSICAL_START)
>  #endif
> --
> 1.7.9.5
> 
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with __end_interrupts
  2013-07-10  5:17     ` Bhushan Bharat-R65777
@ 2013-07-10  5:39       ` tiejun.chen
  -1 siblings, 0 replies; 42+ messages in thread
From: tiejun.chen @ 2013-07-10  5:39 UTC (permalink / raw)
  To: Bhushan Bharat-R65777; +Cc: benh, linuxppc-dev, linux-kernel

On 07/10/2013 01:17 PM, Bhushan Bharat-R65777 wrote:
>
>
>> -----Original Message-----
>> From: Linuxppc-dev [mailto:linuxppc-dev-
>> bounces+bharat.bhushan=freescale.com@lists.ozlabs.org] On Behalf Of Tiejun Chen
>> Sent: Tuesday, July 09, 2013 1:33 PM
>> To: benh@kernel.crashing.org
>> Cc: linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
>> Subject: [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with
>> __end_interrupts
>>
>> We can rename 'interrupt_end_book3e' with '__end_interrupts' then book3s/book3e
>> can share this unique label to make sure we can use this conveniently.
>
> I think we can be consistent with start and end names, no?

Are you suggesting we also rename 'interrupt_base_book3e' to '__base_interrupts'? 
That seems optional, since book3s has no similar start label, so I'd like to 
keep it as-is for now.

Let's wait for other comments first.

Tiejun


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
  2013-07-10  5:20     ` Bhushan Bharat-R65777
@ 2013-07-10  5:46       ` tiejun.chen
  -1 siblings, 0 replies; 42+ messages in thread
From: tiejun.chen @ 2013-07-10  5:46 UTC (permalink / raw)
  To: Bhushan Bharat-R65777; +Cc: benh, Wood Scott-B07421, linuxppc-dev, linux-kernel

On 07/10/2013 01:20 PM, Bhushan Bharat-R65777 wrote:
>
>
>> -----Original Message-----
>> From: Linuxppc-dev [mailto:linuxppc-dev-
>> bounces+bharat.bhushan=freescale.com@lists.ozlabs.org] On Behalf Of Tiejun Chen
>> Sent: Tuesday, July 09, 2013 1:33 PM
>> To: benh@kernel.crashing.org
>> Cc: linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org
>> Subject: [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
>>
>> Book3e is always aligned 1GB to create TLB so we should
>> use (KERNELBASE - MEMORY_START) as VIRT_PHYS_OFFSET to
>> get __pa/__va properly while boot kdump.
>>
>> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
>> ---
>>   arch/powerpc/include/asm/page.h |    2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index 988c812..5b00081 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -112,6 +112,8 @@ extern long long virt_phys_offset;
>>   /* See Description below for VIRT_PHYS_OFFSET */
>>   #ifdef CONFIG_RELOCATABLE_PPC32
>>   #define VIRT_PHYS_OFFSET virt_phys_offset
>> +#elif defined(CONFIG_PPC_BOOK3E_64)
>> +#define VIRT_PHYS_OFFSET (KERNELBASE - MEMORY_START)
>
> Can you please explain this code a bit more. I am not understanding this part:)

Nothing special here; we only need to redefine this so that __va()/__pa() work 
correctly for Book3E-64 in the BOOKE case:

#ifdef CONFIG_BOOKE
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
#define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET)

And the arch/powerpc/include/asm/page.h file carries a fuller description inline :)

Tiejun

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with __end_interrupts
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:03     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:03 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> We can rename 'interrupt_end_book3e' with '__end_interrupts' then
> book3s/book3e can share this unique label to make sure we can use
> this conveniently.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>

What users of this do you plan to share between book3s and book3e?  I'm
not seeing any existing book3s users that are obviously applicable to
book3e -- they mainly involve copying exception vectors which we
shouldn't need to do.

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 2/8] powerpc/book3e: support CONFIG_RELOCATABLE
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:29     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:29 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> book3e is different with book3s since 3s includes the exception
> vectors code in head_64.S as it relies on absolute addressing
> which is only possible within this compilation unit. So we have
> to get that label address with got.
> 
> And when boot a relocated kernel, we should reset ipvr properly again
> after .relocate.

ivpr?

> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/include/asm/exception-64e.h |   11 +++++++++++
>  arch/powerpc/kernel/exceptions-64e.S     |   18 +++++++++++++++++-
>  arch/powerpc/kernel/head_64.S            |   25 +++++++++++++++++++++++++
>  3 files changed, 53 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
> index 51fa43e..371a77f 100644
> --- a/arch/powerpc/include/asm/exception-64e.h
> +++ b/arch/powerpc/include/asm/exception-64e.h
> @@ -214,10 +214,21 @@ exc_##label##_book3e:
>  #define TLB_MISS_STATS_SAVE_INFO_BOLTED
>  #endif
>  
> +#ifndef CONFIG_RELOCATABLE

Please use positive logic (ifdef/else, rather than ifndef/else).

>  #define SET_IVOR(vector_number, vector_offset)	\
>  	li	r3,vector_offset@l; 		\
>  	ori	r3,r3,interrupt_base_book3e@l;	\
>  	mtspr	SPRN_IVOR##vector_number,r3;
> +#else /* !CONFIG_RELOCATABLE */
> +/* In relocatable case the value of the constant expression 'expr' is only
> + * offset. So instead, we should loads the address of label 'name'.
> + */
> +#define SET_IVOR(vector_number, vector_offset)	\
> +	LOAD_REG_ADDR(r3,interrupt_base_book3e);\
> +	rlwinm	r3,r3,0,15,0;			\
> +	ori	r3,r3,vector_offset@l;		\
> +	mtspr	SPRN_IVOR##vector_number,r3;
> +#endif /* CONFIG_RELOCATABLE */

Please use the more readable 4-operand version of "rlwinm".

Is there a reason why this new code is only used with
CONFIG_RELOCATABLE?  If @got doesn't work without CONFIG_RELOCATABLE,
then the ifdef should be pushed into LOAD_REG_ADDR.

Likewise with other ifdefs on CONFIG_RELOCATABLE.

> -_STATIC(init_core_book3e)
> +_GLOBAL(init_core_book3e)
>  	/* Establish the interrupt vector base */
> +#ifdef CONFIG_RELOCATABLE
> +/* In relocatable case the value of the constant expression 'expr' is only
> + * offset. So instead, we should loads the address of label 'name'.
> + */
> +	tovirt(r2,r2)
> +	LOAD_REG_ADDR(r3, interrupt_base_book3e)
> +#else
>  	LOAD_REG_IMMEDIATE(r3, interrupt_base_book3e)
> +#endif

I'm having a hard time parsing the comment.  Plus, it feels wrong to
decouple tovirt(r2,r2) from the call to relative_toc.

>  	mtspr	SPRN_IVPR,r3
>  	sync
>  	blr
> diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
> index b61363d..550f8fb 100644
> --- a/arch/powerpc/kernel/head_64.S
> +++ b/arch/powerpc/kernel/head_64.S
> @@ -414,12 +414,25 @@ _STATIC(__after_prom_start)
>  	/* process relocations for the final address of the kernel */
>  	lis	r25,PAGE_OFFSET@highest	/* compute virtual base of kernel */
>  	sldi	r25,r25,32
> +#if defined(CONFIG_PPC_BOOK3E)
> +	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
> +#endif
>  	lwz	r7,__run_at_load-_stext(r26)
> +#if defined(CONFIG_PPC_BOOK3E)
> +	tophys(r26,r26)			/* Restore for the remains. */
> +#endif
>  	cmplwi	cr0,r7,1	/* flagged to stay where we are ? */
>  	bne	1f
>  	add	r25,r25,r26
>  1:	mr	r3,r25
>  	bl	.relocate
> +#if defined(CONFIG_PPC_BOOK3E)
> +	/* In relocatable case we always have to load the address of label 'name'
> +	 * to set IVPR. So after .relocate we have to update IVPR with current
> +	 * address of label.
> +	 */
> +	bl	.init_core_book3e
> +#endif

Maybe this function should be renamed to something ivpr-specific, so
nothing else gets added there.

>  #endif
>  
>  /*
> @@ -447,12 +460,24 @@ _STATIC(__after_prom_start)
>   * variable __run_at_load, if it is set the kernel is treated as relocatable
>   * kernel, otherwise it will be moved to PHYSICAL_START
>   */
> +#if defined(CONFIG_PPC_BOOK3E)
> +	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
> +#endif
>  	lwz	r7,__run_at_load-_stext(r26)
> +#if defined(CONFIG_PPC_BOOK3E)
> +	tophys(r26,r26)			/* Restore for the remains. */
> +#endif
>  	cmplwi	cr0,r7,1
>  	bne	3f
>  
> +#ifdef CONFIG_PPC_BOOK3E
> +	LOAD_REG_ADDR(r5, __end_interrupts)
> +	LOAD_REG_ADDR(r11, _stext)
> +	sub	r5,r5,r11
> +#else
>  	/* just copy interrupts */
>  	LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
> +#endif

Can't we skip the interrupt copying on book3e?  And if not for some
reason, why start at _stext?

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 3/8] book3e/kexec/kdump: enable kexec for kernel
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:35     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:35 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> We need to active KEXEC for book3e and bypass or convert non-book3e stuff
> in kexec coverage.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/Kconfig                   |    2 +-
>  arch/powerpc/kernel/machine_kexec_64.c |  148 ++++++++++++++++++--------------
>  arch/powerpc/kernel/misc_64.S          |    6 ++
>  3 files changed, 89 insertions(+), 67 deletions(-)
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 5374776..d945435 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -357,7 +357,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
>  
>  config KEXEC
>  	bool "kexec system call"
> -	depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP))
> +	depends on (PPC_BOOK3S || FSL_BOOKE || (44x && !SMP)) || PPC_BOOK3E

Please remove the outer parentheses, and especially don't put
PPC_BOOK3E on the outside of them when there's no reason to group the
other items together.

> @@ -367,6 +301,87 @@ void default_machine_kexec(struct kimage *image)
>  	/* NOTREACHED */
>  }
>  
> +#ifdef CONFIG_PPC_BOOK3E
> +int default_machine_kexec_prepare(struct kimage *image)
> +{
> +	int i;
> +	/*
> +	 * Since we use the kernel fault handlers and paging code to
> +	 * handle the virtual mode, we must make sure no destination
> +	 * overlaps kernel static data or bss.
> +	 */
> +	for (i = 0; i < image->nr_segments; i++)
> +		if (image->segment[i].mem < __pa(_end))
> +			return -ETXTBSY;
> +	return 0;

Factor out this common code rather than duplicate it.

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 4/8] book3e/kexec/kdump: create a 1:1 TLB mapping
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:39     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:39 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> book3e have no real MMU mode so we have to create a 1:1 TLB
> mapping to make sure we can access the real physical address.
> And correct something to support this pseudo real mode on book3e.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>

Why do we need to be able to directly access physical addresses?

> diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
> index f1a7ce7..20cbb98 100644
> --- a/arch/powerpc/kernel/misc_64.S
> +++ b/arch/powerpc/kernel/misc_64.S
> @@ -460,6 +460,49 @@ kexec_flag:
>  
> 
>  #ifdef CONFIG_KEXEC
> +#ifdef CONFIG_PPC_BOOK3E
> +/* BOOK3E have no a real MMU mode so we have to setup the initial TLB
> + * for a core to map v:0 to p:0 as 1:1. This current implementation
> + * assume that 1G is enough for kexec.
> + */
> +#include <asm/mmu.h>

#includes go at the top of the file.

> +kexec_create_tlb:
> +	/* Invalidate all TLBs to avoid any TLB conflict. */
> +	PPC_TLBILX_ALL(0,R0)
> +	sync
> +	isync
> +
> +	mfspr	r10,SPRN_TLB1CFG
> +	andi.	r10,r10,TLBnCFG_N_ENTRY	/* Extract # entries */
> +	subi	r10,r10,1		/* Often its always safe to use last */
> +	lis	r9,MAS0_TLBSEL(1)@h
> +	rlwimi	r9,r10,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r9) */

Hardcoding TLB1 makes this FSL-specific code, but you've put it in a
non-FSL-specific place.

> +/* Setup a temp mapping v:0 to p:0 as 1:1 and return to it.
> + */
> +#ifdef CONFIG_SMP
> +#define M_IF_SMP	MAS2_M
> +#else
> +#define M_IF_SMP	0
> +#endif
> +	mtspr	SPRN_MAS0,r9
> +
> +	lis	r9,(MAS1_VALID|MAS1_IPROT)@h
> +	ori	r9,r9,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
> +	mtspr	SPRN_MAS1,r9

What if the machine has less than 1 GiB of RAM?  We could get
speculative accesses to non-present addresses.

Though it looks like the normal 64-bit init sequence has the same
problem...

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 5/8] book3e/kexec/kdump: introduce a kexec kernel flag
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:42     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:42 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> We need to introduce a flag to indicate we're already running
> a kexec kernel then we can go proper path. For example, We
> shouldn't access spin_table from the bootloader to up any secondary
> cpu for kexec kernel, and kexec kernel already know how to jump to
> generic_secondary_smp_init.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/include/asm/smp.h    |    1 +
>  arch/powerpc/kernel/head_64.S     |   10 ++++++++++
>  arch/powerpc/kernel/misc_64.S     |    6 ++++++
>  arch/powerpc/platforms/85xx/smp.c |   20 +++++++++++++++-----
>  4 files changed, 32 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/smp.h b/arch/powerpc/include/asm/smp.h
> index ffbaabe..59165a3 100644
> --- a/arch/powerpc/include/asm/smp.h
> +++ b/arch/powerpc/include/asm/smp.h
> @@ -200,6 +200,7 @@ extern void generic_secondary_thread_init(void);
>  extern unsigned long __secondary_hold_spinloop;
>  extern unsigned long __secondary_hold_acknowledge;
>  extern char __secondary_hold;
> +extern unsigned long __run_at_kexec;
>  
>  extern void __early_start(void);
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
> index 7dc56be..0b46c9d 100644
> --- a/arch/powerpc/kernel/head_64.S
> +++ b/arch/powerpc/kernel/head_64.S
> @@ -89,6 +89,10 @@ __secondary_hold_spinloop:
>  __secondary_hold_acknowledge:
>  	.llong	0x0
>  
> +	.globl	__run_at_kexec
> +__run_at_kexec:
> +	.llong	0x0	/* Flag for the secondary kernel from kexec. */
> +

No leading underscores please -- and why does this need to be 64-bit?

>  #ifdef CONFIG_RELOCATABLE
>  	/* This flag is set to 1 by a loader if the kernel should run
>  	 * at the loaded address instead of the linked address.  This
> @@ -417,6 +421,12 @@ _STATIC(__after_prom_start)
>  #if defined(CONFIG_PPC_BOOK3E)
>  	tovirt(r26,r26)			/* on booke, we already run at PAGE_OFFSET */
>  #endif
> +#if defined(CONFIG_KEXEC) || defined(CONFIG_CRASH_DUMP)
> +	/* If relocated we need to restore this flag on that relocated address. */
> +	ld	r7,__run_at_kexec-_stext(r26)
> +	std	r7,__run_at_kexec-_stext(r26)
> +#endif
> +
>  	lwz	r7,__run_at_load-_stext(r26)
>  #if defined(CONFIG_PPC_BOOK3E)
>  	tophys(r26,r26)			/* Restore for the remains. */
> diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
> index 20cbb98..c89aead 100644
> --- a/arch/powerpc/kernel/misc_64.S
> +++ b/arch/powerpc/kernel/misc_64.S
> @@ -619,6 +619,12 @@ _GLOBAL(kexec_sequence)
>  	bl	.copy_and_flush	/* (dest, src, copy limit, start offset) */
>  1:	/* assume normal blr return */
>  
> +	/* notify we're going into kexec kernel for SMP. */
> +	LOAD_REG_ADDR(r3,__run_at_kexec)
> +	li	r4,1
> +	std	r4,0(r3)
> +	sync
> +
>  	/* release other cpus to the new kernel secondary start at 0x60 */
>  	mflr	r5
>  	li	r6,1
> diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
> index 5ced4f5..14d461b 100644
> --- a/arch/powerpc/platforms/85xx/smp.c
> +++ b/arch/powerpc/platforms/85xx/smp.c
> @@ -150,6 +150,9 @@ static int smp_85xx_kick_cpu(int nr)
>  	int hw_cpu = get_hard_smp_processor_id(nr);
>  	int ioremappable;
>  	int ret = 0;
> +#ifdef CONFIG_PPC64
> +	unsigned long *ptr = NULL;
> +#endif

Looks like an unnecessary initialization.

>  
>  	WARN_ON(nr < 0 || nr >= NR_CPUS);
>  	WARN_ON(hw_cpu < 0 || hw_cpu >= NR_CPUS);
> @@ -238,11 +241,18 @@ out:
>  #else
>  	smp_generic_kick_cpu(nr);
>  
> -	flush_spin_table(spin_table);
> -	out_be32(&spin_table->pir, hw_cpu);
> -	out_be64((u64 *)(&spin_table->addr_h),
> -	  __pa((u64)*((unsigned long long *)generic_secondary_smp_init)));
> -	flush_spin_table(spin_table);
> +	ptr  = (unsigned long *)((unsigned long)&__run_at_kexec);
> +	/* We shouldn't access spin_table from the bootloader to up any
> +	 * secondary cpu for kexec kernel, and kexec kernel already
> +	 * know how to jump to generic_secondary_smp_init.
> +	 */
> +	if (!*ptr) {
> +		flush_spin_table(spin_table);
> +		out_be32(&spin_table->pir, hw_cpu);
> +		out_be64((u64 *)(&spin_table->addr_h),
> +		 __pa((u64)*((unsigned long long *)generic_secondary_smp_init)));
> +		flush_spin_table(spin_table);
> +	}
>  #endif

Please use a more descriptive name than "ptr".

How is all that different than just:

	if (!__run_at_kexec) {
		...
	}

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 6/8] book3e/kexec/kdump: implement ppc64 kexec specific
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:45     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:45 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> The ppc64 kexec mechanism has a different implementation from ppc32.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>

Could you describe the relevant differences?

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:48     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:48 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> Book3e always creates its kernel TLB mapping 1GB-aligned, so we
> should use (KERNELBASE - MEMORY_START) as VIRT_PHYS_OFFSET so that
> __pa/__va resolve properly while booting the kdump kernel.

What if MEMORY_START - PHYSICAL_START >= 1 GiB?

What about the comment that says we can't use MEMORY_START before
parsing the device tree?

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [v3][PATCH 8/8] book3e/kexec/kdump: recover "r4 = 0" to create the initial TLB
  2013-07-09  8:03   ` Tiejun Chen
@ 2013-12-18  3:50     ` Scott Wood
  -1 siblings, 0 replies; 42+ messages in thread
From: Scott Wood @ 2013-12-18  3:50 UTC (permalink / raw)
  To: Tiejun Chen; +Cc: benh, linuxppc-dev, linux-kernel

On Tue, 2013-07-09 at 16:03 +0800, Tiejun Chen wrote:
> In commit 96f013f, "powerpc/kexec: Add kexec "hold" support for Book3e
> processors", requires that GPR4 survive the "hold" process, for IBM Blue
> Gene/Q with with some very strange firmware. But for FSL Book3E, r4 = 1
> to indicate that the initial TLB entry for this core already exists so
> we still should set r4 with 0 to create that initial TLB.
> 
> Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
> ---
>  arch/powerpc/kernel/head_64.S |    4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
> index 0b46c9d..d546c5e 100644
> --- a/arch/powerpc/kernel/head_64.S
> +++ b/arch/powerpc/kernel/head_64.S
> @@ -127,6 +127,10 @@ __secondary_hold:
>  	/* Grab our physical cpu number */
>  	mr	r24,r3
>  	/* stash r4 for book3e */
> +#ifdef CONFIG_PPC_FSL_BOOK3E
> +	/* We need to set up the initial TLB entry. */
> +	li	r4,0
> +#endif
>  	mr	r25,r4

This breaks being able to build one kernel that supports both FSL book3e
and IBM book3e.

-Scott



^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2013-12-18  3:50 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-07-09  8:03 [v3][PATCH 0/8] powerpc/book3e: support kexec and kdump Tiejun Chen
2013-07-09  8:03 ` Tiejun Chen
2013-07-09  8:03 ` [v3][PATCH 1/8] powerpc/book3e: rename interrupt_end_book3e with __end_interrupts Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-07-10  5:17   ` Bhushan Bharat-R65777
2013-07-10  5:17     ` Bhushan Bharat-R65777
2013-07-10  5:39     ` tiejun.chen
2013-07-10  5:39       ` tiejun.chen
2013-12-18  3:03   ` Scott Wood
2013-12-18  3:03     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 2/8] powerpc/book3e: support CONFIG_RELOCATABLE Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:29   ` Scott Wood
2013-12-18  3:29     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 3/8] book3e/kexec/kdump: enable kexec for kernel Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:35   ` Scott Wood
2013-12-18  3:35     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 4/8] book3e/kexec/kdump: create a 1:1 TLB mapping Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:39   ` Scott Wood
2013-12-18  3:39     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 5/8] book3e/kexec/kdump: introduce a kexec kernel flag Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:42   ` Scott Wood
2013-12-18  3:42     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 6/8] book3e/kexec/kdump: implement ppc64 kexec specific Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:45   ` Scott Wood
2013-12-18  3:45     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 7/8] book3e/kexec/kdump: redefine VIRT_PHYS_OFFSET Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-07-10  5:20   ` Bhushan Bharat-R65777
2013-07-10  5:20     ` Bhushan Bharat-R65777
2013-07-10  5:46     ` tiejun.chen
2013-07-10  5:46       ` tiejun.chen
2013-12-18  3:48   ` Scott Wood
2013-12-18  3:48     ` Scott Wood
2013-07-09  8:03 ` [v3][PATCH 8/8] book3e/kexec/kdump: recover "r4 = 0" to create the initial TLB Tiejun Chen
2013-07-09  8:03   ` Tiejun Chen
2013-12-18  3:50   ` Scott Wood
2013-12-18  3:50     ` Scott Wood
